In the aftermath of the terrorist attacks on September 11, 2001, security experts have raised concerns that terrorists may try to smuggle radiological or nuclear materials into the United States to produce either an RDD or IND. These experts have also raised concerns that terrorists could obtain radioactive materials used in medicine, research, agriculture, and industry to construct an RDD, or dirty bomb. This radioactive material is encapsulated, or sealed, in metal, such as stainless steel, titanium, or platinum, to prevent its dispersal and is commonly called a sealed radiological source. Sealed sources are used throughout the United States and other countries in equipment designed to, among other things, diagnose and treat illnesses, preserve food, detect flaws in pipeline welds, and determine the moisture content of soil. Depending on their use, sealed sources contain different types of radioactive material, such as strontium-90, cobalt-60, cesium-137, plutonium-238, and plutonium-239. While no terrorists have detonated a dirty bomb in a city, Chechen separatists placed a canister containing cesium-137 in a Moscow park in the mid-1990s. Although the device was not detonated and no radioactive material was dispersed, the incident demonstrated that terrorists have the capability and willingness to use radiological materials as weapons of terrorism. In contrast, detonating an IND would require a terrorist group to obtain nuclear weapons material—which is generally heavily secured—and to have highly sophisticated expertise and equipment to fabricate this material into a weapon. Another form of nuclear terrorism occurred with the dispersal of radioactive materials through a sequence of events in London during November and December 2006. On November 23, 2006, Alexander Litvinenko, a former officer of the Russian Federal Security Service, was poisoned with a milligram of polonium-210—about the size of a grain of salt. 
The dispersal of the polonium by the perpetrators of the crime and by the victim resulted in widespread contamination across London and even abroad. His poisoning was detected only after he had been hospitalized for a few weeks and, because of his hair loss, was tested for symptoms of radiation exposure. Following the poisoning, forensic investigators examined 47 sites across London for traces of polonium, resulting both from the perpetrators’ handling of the polonium and possibly from earlier attempts to poison him. Of these locations, about 12 showed signs of this radioactive material, including a restaurant, a hotel room, a soccer stadium, and an eastbound British Airways plane. British investigators also identified over 1,000 people who might have been exposed to the polonium in various ways. Health officials tested 738 of them and found that 137 had reportable levels of this substance, although few of these individuals turned out to have levels that warranted medical attention. The decontamination activities at these sites spanned 19 days, involved a number of methods and technologies, and cost more than $200,000. However, the estimated total cost of managing this incident, including the law enforcement investigation, testing of individuals, sampling of materials, and cleanup, was $4 million.

FEMA has not completed planning to help cities and states recover from RDD or IND incidents, as evidenced by its not (1) developing a national disaster recovery strategy as required by law and (2) issuing specific guidance to coordinate federal, state, and city planning to recover from RDD or IND incidents. Moreover, federal agencies have conducted few exercises to test recovery plans for these incidents. FEMA has not developed a national disaster recovery strategy, as required by law and directed by executive guidance, or issued specific guidance to coordinate federal, state, and local government recovery planning for RDD or IND incidents. 
The Post-Katrina Emergency Management Reform Act of 2006 requires FEMA to develop, coordinate, and maintain a national disaster recovery strategy. Among other things, the strategy is to clearly define the roles, programs, authorities, and responsibilities of each agency that may provide assistance to the recovery from a major disaster. In addition, the National Strategy for Homeland Security called on the federal government to prepare a recovery strategy. The federal government has placed a higher priority on developing a strategy to respond to domestic incidents, including RDD and IND incidents, than it has on developing a comparable strategy for recovering from these incidents. For example, the response strategy, captured in the 2008 National Response Framework, does not include guidance on long-term recovery activities. The FEMA coordinator for the development of a national disaster recovery strategy told us that while the previous administration had drafted a “white paper” addressing this strategy, the new administration has decided to rethink the entire approach. The FEMA coordinator also told us that FEMA recognizes its responsibility to prepare a national disaster recovery strategy but could not provide a time frame for its completion. This same official did say that in developing this strategy, FEMA plans to seek out the opinions of nonfederal stakeholders. The official said that, once completed, the recovery strategy would provide guidance to federal, state, and local agencies in revising their operational plans for recovery activities, including recovery from RDD and IND incidents. Currently, the limited federal planning guidance related to the recovery from RDD and IND incidents can be found in a number of documents. There are several annexes to the National Response Framework that address, in part, federal agency responsibilities and assets to help state and local governments recover from these incidents. 
For example, a December 2004 emergency support function annex covering long-term community recovery and mitigation, led by FEMA, provides a framework for federal support to localities to enable community recovery from the long-term consequences of events of national significance. While this annex addresses FEMA’s responsibilities to coordinate the transition from response to recovery in field operations, it does not provide details on recovery planning for RDD and IND incidents. The January 2003 emergency support function annex covering hazardous materials, led by EPA, provides the framework for federal support in response to an actual or potential discharge and release of hazardous materials following a major disaster or emergency. EPA officials informed us that this annex will give them a significant federal role in leading cleanup efforts after RDD or IND incidents, in coordination with affected state and local governments. The June 2008 nuclear and radiological incident annex describes federal responsibilities and provides some operational guidance for pertinent response activities and, to a lesser extent, recovery activities in support of state and local governments. DHS is identified as the technical lead for recovery activities but may request support from other federal agencies—for example, EPA and the United States Army Corps of Engineers—that have cleanup and recovery experience and capabilities. According to this annex, the federal government, upon request of state and local governments, can assist in developing and executing recovery plans, but such plans would generally not be developed until after an incident occurs. The lack of a national disaster recovery strategy that would include RDD and IND incidents is problematic because, according to survey respondents, most localities would count on the federal government being prepared to carry out analysis and environmental cleanup activities following these incidents. 
Specifically, emergency management officials from almost all 13 cities and most of their 10 states indicated in our survey that they believe they would need to rely heavily on the federal government to conduct and fund all or almost all analysis and environmental cleanup activities associated with recovering from RDD or IND incidents of the magnitude described in the national planning scenarios. They indicated that their technical and financial resources would be overwhelmed by a large RDD incident—and certainly by an IND incident. Most of these officials reported that they believe they could adequately address a smaller RDD incident, such as one confined to a city block or the inside of a building. Despite this anticipated reliance on the federal government, we obtained mixed responses as to whether these RDD and IND recovery activities should be primarily a federal responsibility. Almost half of the respondents from the cities (6 of 13), but most of those from the states (8 of 10), indicated that these activities should be primarily a federal responsibility. The others stressed the need for responsibilities shared with the federal government. However, when respondents were asked in our survey to identify which federal agencies they would turn to for help in the analysis and environmental cleanup of areas contaminated with radioactive materials from RDD or IND incidents, they provided inconsistent responses and frequently listed several federal agencies for the same activity. These responses seem to indicate some confusion among city and state emergency management officials regarding federal agency responsibilities to provide assistance to them under these circumstances. In our view, this confusion, if not addressed, could hamper the timely recovery from these incidents and demonstrates the need for the development and implementation of a national disaster recovery strategy. 
In commenting on the draft report, EPA indicated that because no single federal department or agency has, on its own, the requisite technical capacity and capabilities to respond to the scope of RDD or IND incidents, it is expected that numerous federal agencies would need to work together in a single mission, such as through FRMAC. Nevertheless, EPA stated that our survey results underscore the importance of clear communication and notification among federal agencies, the lack of which could hamper recovery efforts. FEMA has not issued specific guidance describing how federal capabilities would be integrated into and support state and local plans for recovery from RDD or IND incidents, as called for by presidential directive. According to a senior FEMA official, the agency has delayed issuing this guidance pending the reevaluation of its planning approach by the new administration. However, a senior FEMA planning official told us that because FEMA is already aware that its planning system does not fully recognize the involvement of state and local governments, the agency is developing regional support plans—including for RDD and IND incidents—through its regional offices, which will reflect state and local government roles and responsibilities. Moreover, according to FEMA officials, in August 2008, DHS issued stop-gap guidance outside of FEMA’s planning guidance framework to provide some immediate direction to federal, state, and local emergency response officials in developing their own operational plans and response protocols for the protection of emergency workers after RDD or IND incidents. In regard to recovery, EPA officials informed us that FEMA and other federal agencies worked together on this guidance in an attempt to clarify the processes for providing federal cleanup assistance following such an incident. 
These officials informed us that DHS’s guidance was intended to cover the existing operational guidelines for implementing the protective action guides and other response actions, and to encourage their use in developing specific response protocols. In responding to a draft of this report, EPA informed us that DOE had convened an interagency workgroup to address gaps in DHS’s guidance and had issued a preliminary report, for comment by September 30, 2009, containing additional operational guidelines to respond to an RDD incident. Moreover, these officials indicated that EPA has also worked with other federal agencies to examine its own 1992 protective action guides to address shortcomings and to incorporate more recent guidance. However, according to EPA officials, much work remains to convert the new guidance into operational guidance. In addition, DOD has established operational plans for consequence management following terrorist incidents, including RDD and IND attacks. Without federal guidance for coordinating federal, state, and local planning for recovery from RDD or IND incidents, cities and states lack a framework for developing their own recovery strategies. Emergency management officials representing all 13 cities and their states in our survey indicated that while their jurisdictions had prepared emergency response and recovery plans for domestic incidents, few of these plans specifically addressed RDD and IND recovery activities, particularly for the analysis and environmental cleanup of areas contaminated with radioactive materials. For example, few city respondents (3 of 13) indicated that their recovery plans included preparations for an RDD incident, although respondents from two cities indicated that their cities were drafting these plans. In regard to IND preparation, all city respondents informed us that recovery planning was still important despite the magnitude of such events, but none of them had prepared such plans. 
Respondents from all states in our survey indicated that they had prepared emergency response plans for domestic incidents, and most of them (8 of 10) indicated that these plans included a recovery component. However, we were told that few of these recovery plans address an RDD incident, or the specific analysis and environmental cleanup activities that would follow such an incident, although respondents from 8 states mentioned that they planned to prepare such plans. The lack of recovery planning for RDD and IND incidents may be due, in part, to the relatively low priority that the city and state emergency management officials we surveyed give to preparing for these incidents, compared with other types of risks facing their jurisdictions. For example, the majority of city respondents indicated that natural disasters, such as severe weather, and infrastructure failure were the most significant risks facing their jurisdictions. Federal agencies and local jurisdictions have used existing federal guidance as a basis for planning RDD and IND response exercises and, to a much lesser extent, recovery exercises to test the adequacy of their plans and their level of preparedness. According to DHS guidance, preparedness is the foundation of a successful national incident management system involving all levels of government and other nongovernmental organizations as necessary. The cycle of preparedness for prevention, protection, response, and recovery missions ends with adequate exercising, evaluation, and improvement. Our search of FEMA’s National Exercise Schedule—a scheduling system for federal, state, and local exercises—revealed 94 RDD or IND response exercises planned and carried out by these authorities from May 2003 through September 2009. These exercises were identified as full-scale exercises, tabletop exercises, workshops, seminars, functional exercises, or drills, and some locations have conducted several of them over a period of time. 
While many of these exercises listed both response and recovery objectives, as well as other exercise objectives, officials with FEMA’s National Exercise Division told us that only three of them actually included a recovery component that exercised activities associated with environmental cleanup. However, our survey of city, state, and federal regional office emergency management officials found that many response exercises and a few recovery exercises conducted over the last 6 years do not appear in FEMA’s National Exercise Schedule. We previously reported that information in the National Exercise Schedule database was unreliable. Nevertheless, for the purpose of this report, it is clear that very few RDD and IND response exercises have included a recovery component. According to National Exercise Division officials, a recovery discussion following an RDD or IND response exercise has typically not occurred because of the time needed to fully address the response objectives of the exercise, which are seen as a higher priority. While two response exercises, in 2003 and 2007, included brief follow-on recovery discussions, a more recent exercise set aside more time for this discussion: the most recent RDD response exercise, based in Albany, New York, devoted 2 days (June 16-17, 2009) to discussions among federal, state, and local agencies of operational recovery issues. One unresolved operational recovery issue discussed during this exercise pertained to the transition of the leadership of FRMAC from the initial analysis of the contaminated area, led by DOE, to the later cleanup phase, led by EPA. For example, there are unresolved operational issues regarding the level and quality of the monitoring data necessary for EPA to accept the leadership of FRMAC from DOE. 
According to EPA officials, while this transitional issue has been discussed in exercises dating back to the development of the Federal Radiological Emergency Response Plan in 1984, it has only recently been discussed in RDD or IND response exercises. Another unresolved operational recovery issue discussed during this exercise pertained to the distribution of responsibilities for the ownership, removal, and disposal of radioactive debris from RDD or IND incidents. According to EPA exercise planning documents, both of these operational issues are to be addressed again in the first full-scale RDD recovery exercise—Liberty RadEx—set to take place April 26-30, 2010, in Philadelphia, Pennsylvania. According to an EPA coordinator for this event, the exercise is to focus on a few technical recovery issues involving intergovernmental coordination, such as setting environmental cleanup priorities and levels, as well as managing radioactive waste staging and disposal. Appendix II contains a brief summary of the three national-level exercises conducted since May 2003 that contained a recovery component, along with the exercise objectives for the planned April 2010 RDD exercise, which is also to contain a recovery component. In addition to this RDD recovery exercise, the National Exercise Schedule lists two planned IND response exercises in 2010 that are to have some recovery components. It is uncertain whether federal capability is sufficient to effectively clean up after RDD or IND incidents because federal agencies have only carried out environmental cleanup of localized areas of radioactive materials, and some limitations exist in federal capabilities to help address the magnitude of the cleanup that would follow these incidents. 
Some federal agencies, such as DOE and EPA, have substantial experience using various analysis and environmental cleanup methods and technologies to address localized areas contaminated with radioactive materials, but little is known about how these methods and technologies might be applied in recovering from the magnitude of RDD or IND incidents. For example, DOE has invested hundreds of millions of dollars in research, development, and testing of methods and technologies for cleaning up and decommissioning contaminated structures and soils—legacies of the Cold War. In addition, since the passage of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), which established the Superfund program, EPA has undertaken significant efforts to study, develop, and use technologies that can address radioactive contamination. DOD has also played a major role in studying potential applications for innovative technologies for its Superfund sites. As a result of federal agencies’ experience with radioactive materials, there is evidence that the agencies could effectively carry out the analysis and environmental cleanup of localized areas contaminated by these materials. In regard to analysis, DOE’s National Nuclear Security Administration (NNSA) has developed operational plans, orders, and publications on how to respond to a radiological or nuclear incident. NNSA has developed various FRMAC manuals to guide operational, assessment, and monitoring activities. In addition, EPA’s National Decontamination Team has published guidelines that provide a framework for how to develop sampling plans to support decontamination efforts after a radiological release. In regard to environmental cleanup, EPA has published inventories of radiological methods and technology guidance for contaminated sites, surfaces, and media. The cleanup technologies are generally grouped into chemical and physical technologies. 
During the initial response phase to an incident, responders might rely on fairly simple cleanup approaches, such as washing down exposed people and surfaces, mowing grass, pruning trees, and sweeping up affected areas. The later recovery phase might require either no additional action or the use of complex decontamination technologies, depending on the desired level of cleanup. EPA has also published guidance for its On-Scene Coordinators at each regional office to aid in their response to a radiological incident. This guidance covers the full range of radiological incidents, but its focus is primarily on the early to intermediate phases of an RDD incident, as this incident is expected to present a challenge for these coordinators. This guidance addresses possible decontamination approaches for eight types of radionuclides that experts believe are most likely to be used in an RDD. As previously mentioned, federal agencies’ current approaches to analysis and environmental cleanup have only been applied in localized areas, as an RDD or IND incident has not occurred; however, decontamination research is under way to gain a better understanding of potential applications of current and experimental methods and technologies, primarily for RDD incidents. According to decontamination experts at DOE’s Lawrence Livermore National Laboratory, current research has focused on predicting the effects of a radiation release in urban settings through simulation, small-scale testing, and theory. In addition, researchers at EPA’s National Homeland Security Research Center informed us that while there are available methods that have proven successful for cleaning up various types of contamination, more research is needed to develop standard national guidance for their efficacious application in urban areas and in other RDD or IND incident scenarios. 
According to a decontamination expert at DOE’s Idaho National Laboratory, experience has shown that without guidance and discussion early in the response phase, a contractor might use a decontamination technology during this phase for no other reason than that it was used before in an unrelated situation. The expert told us that this situation might lead to selecting environmental cleanup technologies that generate waste types that are more difficult to remove than the original material and that create more debris requiring disposal—leading to increased costs. For example, the Lawrence Livermore National Laboratory decontamination experts told us that the conventional use of high-pressure hosing to decontaminate a building is effective under normal conditions but could be the wrong cleanup approach for an RDD using cesium-137. In this case, the imbibing (absorbing) properties of some porous surfaces, such as concrete, would actually cause this soluble radioactive isotope to penetrate even further into surfaces, making subsequent decontamination more difficult and destructive. A senior EPA official with the Office of Radiation and Indoor Air told us that the agency has studies under way to determine the efficacy of high-pressure hosing for removing the contamination from porous urban surfaces that would result from terrorists’ use of an RDD containing certain radioisotopes. There are also limitations in federal capabilities to help address, in a timely manner, the magnitude of the cleanup that would be associated with RDD or IND incidents. For example, we found that limitations in federal capabilities to complete some analysis and environmental cleanup activities might slow the recovery from an incident, including (1) characterizing the full extent of areas contaminated with radioactive materials, (2) completing laboratory validation of contaminated areas and levels of cleanup after applying decontamination approaches, and (3) removing and disposing of radioactive debris and waste. 
There are some limitations in the capability of federal agencies to efficiently characterize the full extent of the areas contaminated with radioactive materials in the event of RDD or IND incidents. For example, the current predictive capability of various plume models is not sufficient, and may never be sufficient, to reduce the time necessary to fully characterize the extent of contaminated areas after RDD or IND incidents. According to a senior official at the Lawrence Livermore National Laboratory’s Interagency Modeling and Atmospheric Assessment Center (IMAAC), the predictive capabilities of existing plume models are not at the resolution necessary to produce this added value for urban areas, as modeling for this purpose is only theoretical at this point. This official told us that while there are data about debris dispersal from building demolition and weapons testing, there is little research data on the likely dispersal patterns of the concrete, asphalt, and glass that would result from use of an RDD or IND. However, some federal agency officials question the need to improve the predictive capabilities of these plume models. For example, the DHS IMAAC director told us that the current state-of-the-art plume modeling approach is sufficient for its primary purpose of directing the protective actions of first responders. Nevertheless, NNSA officials informed us that they are working with FEMA on a multiyear program to improve federal capabilities to model the release of material during a radiological or nuclear incident. However, they contend that plume modeling will never replace the need for actual measurements of radioactive contamination. In commenting on a draft of this report, EPA agreed that characterization of areas contaminated with radioactive materials from RDD or IND incidents would be challenging because existing plume models are not entirely applicable to urban areas. 
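To make the plume-model discussion above concrete, the following is a minimal sketch of the classic Gaussian plume equation that many atmospheric dispersion codes build on. The function name and the linear dispersion coefficients are illustrative assumptions, not drawn from IMAAC's models; flat-terrain formulations like this one are precisely what the officials say cannot resolve how buildings channel and deposit debris in urban areas.

```python
import math

def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Concentration from a continuous point source (Gaussian plume).

    Q: source strength (e.g., Bq/s); u: wind speed (m/s);
    x: downwind, y: crosswind, z: vertical position (m);
    H: effective release height (m);
    a, b: assumed linear dispersion coefficients (illustrative only).
    """
    sigma_y = a * x  # crosswind spread grows with downwind distance
    sigma_z = b * x  # vertical spread grows with downwind distance
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    # Second exponential is the ground-reflection term: the plume
    # cannot penetrate the surface, so it is mirrored about z = 0.
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline ground-level concentration falls off with downwind distance:
near = gaussian_plume(Q=1.0, u=5.0, x=100.0, y=0.0, z=0.0, H=10.0)
far = gaussian_plume(Q=1.0, u=5.0, x=1000.0, y=0.0, z=0.0, H=10.0)
```

The model assumes steady wind and open terrain; capturing debris dispersal among city buildings, as the report notes, requires far more complex simulation and, ultimately, actual field measurements.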
Moreover, EPA added that other types of contamination, such as in the drinking water and wastewater infrastructure, would also involve very complex systems that would be difficult to model. There are some limitations in federal capabilities to complete laboratory validation of contaminated areas and levels of cleanup after applying decontamination approaches. Moreover, FEMA’s proposed process for determining cleanup standards during the recovery phase for RDD and IND incidents has not been fully exercised, although there was a tabletop discussion among government officials in a June 2009 exercise. EPA has conducted an examination of federal, state, local, and private laboratory capabilities to conduct environmental sampling and testing in order to determine the nationwide laboratory capacity required to support environmental monitoring and decontamination of chemical, biological, and radiochemical-nuclear agents. EPA determined that there was a significant capacity and competency gap in efficiently meeting the laboratory evaluation needs for an RDD scenario. In addition, while EPA did not conduct a detailed assessment of the national planning scenario for an IND incident, it determined that such an incident could contaminate 3,000 square miles and require potentially millions of samples for laboratory analysis. According to EPA documentation, the gap in laboratory capacity would result in the lack of timely, reliable, and interpretable data, which would delay national and local response and recovery activities. EPA has documented that it is currently establishing an all-media Environmental Response Laboratory Network, and it is also conducting a demonstration project to enhance the capacity and capability of public laboratories. A related environmental cleanup issue pertains to the process for determining the cleanup standards that would be applied to urban areas contaminated with radioactive materials in recovering from RDD or IND incidents. 
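EPA's estimate that an IND incident could contaminate 3,000 square miles and require potentially millions of laboratory samples can be illustrated with back-of-envelope arithmetic. The grid-based sampling sketch below is an illustrative assumption, not EPA's sampling protocol; the grid spacings are hypothetical.

```python
SQ_MILE_TO_M2 = 2.59e6  # square meters per square mile

def sample_count(area_sq_miles, grid_spacing_m):
    """Samples needed at one sample per square grid cell (assumed scheme)."""
    area_m2 = area_sq_miles * SQ_MILE_TO_M2
    return round(area_m2 / grid_spacing_m ** 2)

# 3,000 sq mi at one sample per 100 m x 100 m cell -> 777,000 samples
coarse = sample_count(3000, 100)
# Halving the spacing quadruples the count -> ~3.1 million samples
fine = sample_count(3000, 50)
```

Even at a coarse 100-meter spacing the count approaches a million, which is consistent with EPA's finding of a significant gap between this demand and existing laboratory capacity.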
According to a decontamination expert at the Idaho National Laboratory, an important consideration in decontamination is the starting level of radioactivity and the desired ending level. This official told us that no technology removes all of the contamination all of the time; some technologies are more efficient than others at removing certain kinds of contamination. The current DHS planning guidance for RDD and IND incidents recommends a framework for incident cleanup and recovery that uses a process called “site-specific optimization” to determine the level of environmental cleanup after RDD or IND incidents. The guidance recommends that this process consider potential future land uses, technical feasibility, costs, cost-effectiveness, and public accountability. In commenting on a draft of this report, EPA informed us that draft guidance intended to outline the structure of, and responsibilities for, the conduct of the optimization process, as it pertains to EPA’s involvement in RDD or IND incidents, is under review by the new Administrator. EPA added that it looks forward to the lessons to be learned from the upcoming Liberty RadEx exercise in 2010, which officials believe should provide significant insights into the issues under discussion in this report. There are also limitations in federal capabilities to help state and local governments address the interim storage and eventual disposal of the radioactive waste that would arise from RDD or IND incidents. The National Science and Technology Council’s 2008 report found gaps in the nation’s capabilities to effectively remove and dispose of radioactive debris in the event of an RDD or IND incident. This is due, in part, to current restrictions on access to possible disposal facilities for the radioactive debris stemming from such incidents. According to NNSA officials, DOE’s disposal sites currently can accept low-level and mixed low-level radioactive waste only from its own and DOD facilities, and only under certain circumstances. 
Moreover, according to an EPA decontamination expert, EPA is concerned about access to commercial radioactive waste disposal sites in the event of such an incident. Currently, there is only one low-level radioactive waste disposal site, located in Utah, that could be used by most states for radioactive debris disposal, although a limited number of states have access to low-level radioactive waste disposal facilities for waste generated by users of radioactive materials in their states. Another issue is paying for waste disposal. In the Superfund program, EPA can bill the responsible party, if known. However, covering the cost of waste disposal would be complicated in the case of RDD or IND incidents. One additional complicating factor would be the mixing, and problematic separation, of radioactive, biological, and chemical materials in the debris that would stem from such incidents. According to a recent research paper on disposal issues, the proper characterization of the quantity, properties, and level of debris contamination and decontamination residue from an RDD or other radiological incident can have significant impacts on cleanup costs and restoration timelines. In commenting on a draft of the report, EPA officials informed us that the agency’s Office of Research and Development is currently developing a suite of decision support tools for the management of waste and debris from a variety of events, including radiological incidents. Concerns about limitations in these federal capabilities were expressed by many city, state, and federal regional office emergency management officials who responded to our survey. Respondents representing most of the cities (9 of 13), states (7 of 10), and FEMA regional offices (6 of 9), and almost all EPA regional offices (9 of 10), expressed concerns about the capabilities of federal agencies to provide the assistance needed to complete the necessary analysis and environmental cleanup activities in the event of RDD or IND incidents. 
For example, respondents from several cities told us that they were concerned about how rapidly the federal government could provide this assistance, despite the strengthening of some capabilities since the terrorist attacks of September 11, 2001. Respondents from most states expressed the same expectations of the federal government. For example, one state was particularly concerned about current federal capabilities to handle multiple and simultaneous RDD incidents across the country. The National Science and Technology Council’s 2008 report also found that cities and states would need to rely heavily on a strong federal response to a radiological incident. This report identified similar limitations in federal capabilities to rapidly characterize an incident site and contaminated critical infrastructure, contain and control contaminant migration, decontaminate and clean up affected areas, and remove and dispose of the waste to facilitate long-term recovery. Moreover, the report concluded that the catastrophic effects of RDD or IND incidents could be reduced, and the path to recovery shortened, with more effective decontamination, mitigation, and rapid recovery operations. City and state emergency management officials responding to our survey, as well as emergency management officials at EPA and FEMA regional offices across the country, provided a number of suggestions for ways to improve federal recovery preparedness for RDD and IND incidents, particularly with the environmental cleanup of areas that would be contaminated with radioactive materials from such incidents. Respondents from nearly all the cities and states expressed the need for a national disaster recovery strategy to address gaps and overlaps in current federal guidance in the context of RDD and IND incidents.
This is important because, according to one city official, “recovery is what it is all about.” In developing such a recovery strategy, respondents from the cities, like those from their states, want the federal government to consult with them in the initial formulation of a recovery strategy through working and focus groups, perhaps organized on a regional basis. Respondents representing most cities (10 of 13) and states (7 of 10) also provided specifics on the type of planning guidance necessary, including integration and clarification of responsibilities among federal, state, and local governments. For example, respondents from some of the cities sought better guidance on monitoring radioactivity levels, acceptable cleanup standards, and management of radioactive waste. Most respondents from cities expressed the need for greater planning interactions with the federal government and more exercises to test recovery plans. One city respondent cited the need for recovery exercises on a regional basis so the cities within the region might better exchange lessons learned. Respondents from most cities (11 of 13) and their states (7 of 10) said that they planned to conduct RDD and IND recovery exercises in the future. Finally, emergency management officials representing almost all cities and states in our survey offered some opinions on the need for intelligence information on RDD and IND threats. They generally said that sharing information with law enforcement agencies is necessary for appropriate planning for RDD or IND incidents and that the law enforcement fusion centers were a step in the right direction. However, only half of the respondents indicated that they were getting sufficient intelligence information from law enforcement sources. 
The EPA and FEMA regional office emergency management officials who responded to our survey also offered a number of suggestions on ways to improve federal preparedness to recover from RDD and IND incidents, generally concurring with the suggestions of the city and state respondents. The majority of the EPA regional offices (6 of 10) and FEMA regional offices (7 of 9) indicated that a national disaster recovery strategy was needed to address overlaps and gaps in current government responsibilities in the context of RDD and IND incidents. Almost all of them stressed the need to reach out to and involve state and local governments in developing this recovery strategy. The majority of the EPA regional office (7 of 10) and FEMA regional office (5 of 9) respondents indicated that additional guidance was needed on the distribution of government responsibilities for the recovery phase of RDD or IND incidents, including the transfer of FRMAC responsibilities and the process for determining acceptable cleanup levels. Many of the federal regional office respondents mentioned the need to conduct recovery exercises that involve state and local governments. Finally, EPA and FEMA regional office respondents differed somewhat on the need for standard national guidance on applying approaches for environmental cleanup of areas contaminated with radioactive materials. While about half of the EPA regional office respondents expressed the need for guidance on applying existing approaches to RDD or IND incidents, most FEMA regional office respondents (7 of 9) indicated that it would be beneficial to synchronize existing guidance from multiple, disparate sources to ensure that the documents are complementary rather than competing.
While it was more limited in scope than what is usually envisioned as an RDD incident, the aftermath of the 2006 polonium poisoning incident in London had many of the characteristics of an RDD incident, including testing hundreds of people who may have been exposed to radiation and a cleanup of numerous radiation-contaminated areas. Because of its experience in dealing with the cleanup from this incident and from other actions the United Kingdom has taken to prepare for an RDD or IND attack, we met with officials from this country to obtain a better understanding of their approach to recovery preparedness. These officials told us that the attention to recovery in their country is rooted in decades of experience with the conflict in Northern Ireland, dealing with widespread contamination from the Chernobyl nuclear power plant accident, and a national history of resilience—that is, the ability to manage and recover from hardship. We found that actions the United Kingdom reported taking to prepare for recovery from RDD and IND incidents are similar to many of the suggestions for improvement in federal preparedness that we obtained through our survey of city, state, and federal regional office emergency management officials in the United States. For example, we found that the United Kingdom reported taking the following actions: Enacted civil protection legislation in 2004. This civil protection legislation includes subsequent emergency response and recovery guidance, issued in 2005, to complement the legal framework established for emergency preparedness. This guidance describes the generic framework for multiagency response and recovery for all levels of government. The guidance emphasizes that response and recovery are not discrete activities and do not occur sequentially; rather, recovery should be an integral part of response from the very beginning, as actions taken at all times can influence longer-term outcomes for communities. 
Established a Government Decontamination Service in 2005. This organization was created out of recognition that it would not be cost-effective for each entity—national, regional, and local government—to maintain the level of expertise needed for cleaning up chemical, biological, radiological, and nuclear materials, given that such events are rare. The Government Decontamination Service provides advice and guidance to local governments, maintains and builds a framework of specialized analysis and environmental cleanup contractors, and advises the national government regarding response capabilities. This service carried out these responsibilities by helping the City of Westminster respond to the analysis and environmental cleanup needs following the November 2006 polonium poisoning of Alexander Litvinenko. Developed online national recovery guidance in 2007. This guidance reinforces and updates the earlier emergency response and recovery guidance by establishing, among other things, a recovery planning process during the response phase so that the potential impacts of early advice and actions are explored and understood for the future recovery of the affected areas. Moreover, the guidance—reviewed every 3 months and updated as necessary—emphasizes the need for training recovery personnel on essential roles, responsibilities, and procedures to test competencies, as well as for designing and conducting recovery exercises. Updated the recovery handbooks for radiation incidents in 2008 and 2009. The handbooks are intended to aid decision makers in developing recovery strategies for contaminated food production systems, drinking water, and inhabited areas following the release of radioactive materials into the environment. The handbooks were first published in 2005 in response to the Chernobyl nuclear power plant accident. The current handbooks include management options for application in the prerelease, emergency, and longer-term phases of an incident.
Sources of contamination considered in the handbooks include nuclear accidents, radiological dispersal devices, and satellite accidents. The handbooks are divided into several independent sections comprising supporting scientific and technical information; an analysis of the factors influencing recovery; compendia of comprehensive, state-of-the-art datasheets for around 100 management options; guidance on planning in advance; a decision-aiding framework comprising color-coded selection tables, look-up tables, and decision trees; and several worked examples. The handbooks can also be used for training purposes and during emergency exercises. Conducted a full-scale RDD recovery exercise in 2008. This exercise, involving several hundred participants, provided a unique opportunity to examine and test the recovery planning process within the urgency of a compressed time frame. The exercise, which took place 6 weeks after the response exercise, had participants address three scenarios: rural contamination of crops and livestock, contamination of the urban transit infrastructure, and disruption of the water supply. The lessons learned from this exercise were incorporated into the United Kingdom’s recovery strategy. One key lesson is the benefit of exercising the handover from government leadership during the response phase to leadership of the recovery phase. Established a national risk register in 2008. This register provides information on the risks facing the country, including malicious attacks such as one with an RDD. This threat information was previously held confidential by the government. The government reported that the release of this information is intended to encourage public debate on security and to help organizations, individuals, families, and communities that want to prepare for these emergencies.
This register is designed to complement community risk registers that have been published by local emergency planners since passage of the 2004 civil protection legislation. The community risk registers are based on local judgments of risks, as well as from information contained in the national risk assessment—a 5-year planning assessment that is still a classified document. The government has conducted this risk assessment since 2005. Issued specific nuclear recovery planning guidance in 2009. This guidance, the UK Nuclear Recovery Plan Template, provides a generic recovery strategy and structures needed to address a radiological release from a civil or defense nuclear reactor, as well as incidents involving nuclear weapons and special nuclear materials in transit. It is also considered applicable to recovery from RDD and IND incidents. Among other things, it provides guidance on the formation of a Recovery Advisory Group and Science and Technology Advisory Cell early in the response phase. The Recovery Advisory Group would be charged with identifying immediate and high-level strategic recovery objectives—recorded in templates to keep the process focused and on track—for, among other activities, cleanup levels, management of radioactive waste, compensation arrangements, and recovery costs. This advisory group would transition into a broader Strategic Recovery Coordinating Group during the recovery phase. The guidance requires that all high-risk cities in the United Kingdom prepare recovery plans. Finally, according to United Kingdom officials, the 2006 polonium incident in London showed the value of recovery planning. In particular, through this incident, United Kingdom officials gained an appreciation for the need to have an established cleanup plan, including a process for determining cleanup levels, sufficient laboratory capacity to analyze a large quantity of samples for radiation, and procedures for handling the radioactive waste. 
Furthermore, they found that implementing cleanup plans in the polonium poisoning incident and testing plans in the November 2008 recovery exercise have helped the United Kingdom to better prepare for larger RDD or IND incidents. Appendix III contains a more thorough review of the approach to recovering from RDD and IND incidents in the United Kingdom. Recovering from RDD or IND incidents would likely be difficult and lengthy. Completing the analysis and environmental cleanup of areas contaminated with radioactive materials would be among the first steps in the recovery process after the initial response to save lives. A faster recovery—meaning people can return sooner to their homes and businesses and get back to the routines of everyday life—would help lessen the consequences of RDD and IND incidents. In fact, being fully prepared to recover from such an incident may also serve as a deterrent to those who would do us harm. However, our work demonstrates that the federal government is not fully prepared to help cities and states with the analysis and environmental cleanup of areas contaminated with radioactive materials from RDD and IND incidents. To date, FEMA has not developed a national disaster recovery strategy, as required by law, which would help guide RDD and IND recovery planning, or issued specific guidance to coordinate federal, state, and city recovery planning for these incidents. Federal agencies have also included only a few recovery discussions in the response exercises to these incidents. The lack of clearly communicated guidance on federal responsibilities and activities has left emergency management officials in the cities and states we surveyed confused about which federal agency to turn to for assistance, and many federal regional office officials we surveyed were not certain about which environmental cleanup methods and technologies would be the most successful in removing radioactive materials from buildings and infrastructure. 
As the United States moves forward in recovery preparation, some insights might be gained from the actions the United Kingdom has already taken to increase its preparedness to recover from acts of nuclear and radiological terrorism, many of which are similar to those suggested by the city, state, and federal emergency management officials we surveyed for improving federal preparedness to recover from RDD and IND incidents. To better prepare federal agencies to coordinate with state and local governments on the analysis and environmental cleanup of areas contaminated with radioactive materials following RDD or IND incidents, we recommend that the Secretary of Homeland Security direct the Federal Emergency Management Agency Administrator to prepare a national disaster recovery strategy that would clarify federal responsibilities for assisting state and local governments with the analysis and environmental cleanup of areas contaminated with radioactive materials in the event of RDD or IND incidents; issue guidance that describes how federal capabilities would be integrated into and support state and local plans for recovery from RDD and IND incidents; and schedule additional recovery exercises, in partnership with other federal agencies and state and local governments, that would, among other things, specifically assess the preparedness of federal agencies and their contractors to conduct effective and efficient analysis and environmental cleanup activities associated with RDD and IND incidents. GAO provided DHS, DOE, and EPA with a draft of this report for their review and comment. DHS and FEMA concurred with the recommendations in the report. DOE, through NNSA, generally agreed with our report findings and provided technical comments, which we incorporated as appropriate. EPA did not agree or disagree with the report findings but offered technical comments, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees as well as to the Secretaries of Homeland Security and Energy; the Administrators of NNSA and EPA; and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In our review, we examined (1) the extent to which federal agencies are planning to fulfill their responsibilities to help cities and states clean up areas contaminated with radioactive materials from radiological dispersal device (RDD) and improvised nuclear device (IND) incidents, (2) what is known about the federal government’s capability to effectively clean up areas contaminated with radioactive materials from RDD and IND incidents, and (3) suggestions from government emergency management officials for improving federal preparedness to help cities and states recover from RDD and IND incidents. In addition, we are providing information on actions taken in the United Kingdom to prepare for recovering from RDD and IND incidents.
To determine the extent to which federal agencies are planning to fulfill their responsibilities to help cities and states clean up areas contaminated with radioactive materials from RDD and IND incidents, we reviewed pertinent federal law, presidential directives, and other executive guidance; interviewed cognizant officials from the Department of Homeland Security (DHS), Department of Energy (DOE), Environmental Protection Agency (EPA), Federal Emergency Management Agency (FEMA), and Nuclear Regulatory Commission (NRC); conducted a survey of 13 cities considered to be at high or medium risk of such attacks, their states, and all FEMA and EPA regional offices; and reviewed information on the number and type of RDD and IND response and recovery exercises conducted from May 2003 through September 2009. More specifically, we reviewed existing planning documents for domestic incidents to determine the extent to which they addressed recovery issues, particularly from RDD and IND incidents. For example, we found limited discussion of recovery planning for these incidents in various annexes to the National Response Framework, such as its emergency support function annexes and its nuclear and radiological incident annex, as well as in other planning documents. In addition, after speaking with emergency management officials in San Francisco and comparable state officials near Sacramento, California, we developed a semistructured telephone survey instrument—pretested in Denver, Colorado—to obtain the perspectives of city and state emergency management officials on government responsibilities and plans to fulfill them. We originally selected 13 high- and medium-risk cities and their 11 states to cover the most likely target cities for a terrorist attack and to ensure that we had at least 1 city in each of the 10 EPA and FEMA regions.
The cities included Atlanta, Boston, Chicago, Dallas, Denver, Detroit, Houston, Los Angeles, New York, Philadelphia, San Francisco, Seattle, and St. Louis. While Washington, D.C., is considered a high-risk city, we excluded it from our survey because it is unlike other cities in its reliance on the federal government and the agencies that would take over analysis and environmental remediation activities. Emergency management officials representing these cities and their states responded to our survey, except for Atlanta and the states of Georgia and Massachusetts. After repeated attempts to include this city and these two states in our survey, we decided to drop them. We replaced Atlanta and the state of Georgia with Miami and the state of Florida, which are in the same federal region. Because we decided to retain Boston despite receiving no response from Massachusetts, we ended up with 10 states in our survey. We also visited EPA regional offices in San Francisco and Denver, and the FEMA regional office in Oakland, to develop questions for our survey of all 10 EPA and FEMA regional offices, which was designed to obtain a federal field perspective on this issue. All EPA and FEMA regional offices responded to our survey, except FEMA region 8. We tabulated the yes and no responses to each pertinent question from the city, state, and federal surveys and conducted a content analysis of the explanatory statements accompanying many of the questions. We used FEMA’s National Exercise Schedule database to identify the location and types of RDD and IND response and recovery exercises based on the national planning scenarios. Because we determined in our April 2009 report (GAO-09-369) that this database is unreliable, we asked each city, state, and federal regional office in our survey to list the RDD and IND response and recovery exercises that had taken place in their jurisdictions, as well as any plans for future exercises, so that we could check the accuracy of the federal exercise database.
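The survey tallies reported throughout (e.g., "9 of 13 cities") reflect this two-step analysis: tabulating yes/no answers and then coding recurring themes in the explanatory statements. As a purely illustrative sketch, not GAO's actual procedure, with hypothetical respondents and themes:

```python
# Illustrative only: tallying yes/no survey answers and counting how often
# a theme appears in free-text explanations. All data here is hypothetical.
from collections import Counter

# Hypothetical responses: respondent -> (yes/no answer, explanatory statement)
responses = {
    "City A": ("yes", "need a national recovery strategy and more exercises"),
    "City B": ("yes", "better guidance on cleanup standards"),
    "City C": ("no",  "current federal support is sufficient"),
}

# Step 1: tabulate yes/no answers (e.g., "2 of 3 respondents answered yes")
tally = Counter(answer for answer, _ in responses.values())
print(f"{tally['yes']} of {len(responses)} respondents answered yes")

# Step 2: simple content analysis -- count respondents whose explanation
# mentions each predefined theme keyword
themes = {"strategy": "strategy", "guidance": "guidance", "exercises": "exercises"}
theme_counts = Counter(
    label
    for _, text in responses.values()
    for label, keyword in themes.items()
    if keyword in text.lower()
)
print(dict(theme_counts))
```

A real content analysis would use trained coders and an agreed codebook rather than keyword matching; the sketch only shows how the "X of Y" figures are derived from raw answers.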
In addition, we attended the first full-scale recovery tabletop exercise—Empire09—based on an RDD incident scenario in Albany, New York, conducted on June 16-17, 2009, and an interagency planning session held in Philadelphia on October 28-29, 2009, to prepare for the Liberty RadEx recovery exercise scheduled for April 26-30, 2010. To determine what is known about the federal government’s capabilities to effectively clean up areas contaminated with radioactive materials from RDD and IND incidents, we reviewed pertinent guidance on available methods and technologies and obtained information from subject matter experts at the federal agencies and national laboratories about their potential application to RDD and IND incidents. More specifically, we spoke with subject matter experts at the National Nuclear Security Administration, EPA, and FEMA, as well as at DOE’s Lawrence Livermore National Laboratory and Idaho National Laboratory and EPA’s Andrew W. Breidenbach Environmental Research Center, National Air and Radiation Environmental Laboratory, National Decontamination Team, National Homeland Security Research Center, and Radiation and Indoor Environments National Laboratory. We also observed a demonstration of the capabilities of the Interagency Modeling and Atmospheric Assessment Center at Lawrence Livermore National Laboratory and some decontamination research projects at the National Homeland Security Research Center. In addition, we reviewed reports and documents from these agencies, national laboratories, and research centers that addressed methods and technologies for the analysis and environmental remediation of areas contaminated with radioactive materials, as well as some that specifically discussed their potential use for RDD or IND incidents. Moreover, we included questions about the potential use of these approaches in our semistructured phone survey of federal, state, and city emergency management officials.
To identify suggestions from government emergency management officials for improving federal preparedness to help cities and states recover from RDD and IND incidents, we included relevant questions in our semistructured phone survey of federal, state, and city officials. We conducted a content analysis of the responses to these questions to identify patterns, that is, which types of suggestions were most prevalent. We also reviewed past GAO reports and other documents that addressed areas for improvement in federal preparedness. In addition, to broaden our review of potential areas for improvement in federal involvement in planning and preparing for recovery from RDD and IND incidents, we included the United Kingdom in our scope. This country has actual experience with recovery from a radiological incident in an urban area and was suggested to us by EPA officials as one of the leaders in recovery planning. We interviewed selected central and regional government officials responsible for response and recovery planning and preparation, and we visited a decontamination contractor that performed environmental remediation activities in the aftermath of the 2006 radioactive poisoning of Alexander Litvinenko in London. We also reviewed documents provided by these officials and from other sources to obtain a better understanding of this system and how it might apply to the United States. Two officials from the United Kingdom whom we interviewed during our site visit reviewed a draft of the information contained in appendix III for content and accuracy. We conducted this performance audit from October 2008 to January 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 1 provides a brief summary of four RDD exercises conducted since May 2003 that contained recovery objectives, including one planned for April 2010. The United Kingdom provides an example of another country’s efforts to prepare to recover from a terrorist attack using chemical, biological, radioactive, or nuclear materials. This country’s attention to recovery needs is reflected in its promulgating emergency response and recovery legislation, establishing a government decontamination service, creating online national recovery guidance, updating a recovery handbook for radiation incidents, conducting a full-scale RDD recovery exercise, establishing a community and national risk register system, and preparing specific nuclear recovery planning guidance. The particular emphasis on recovery activities in the United Kingdom has been linked to decades of experience with the conflict in Northern Ireland, widespread contamination from the Chernobyl nuclear power plant accident, and a national history of resilience—that is, the ability to manage and recover from hardship. The United Kingdom has established a framework for addressing the release of radiological materials that prompted planning for the recovery from these events. This framework was established primarily through the 2001 Radiation (Emergency Preparedness and Public Information) Regulations and the 2004 Civil Contingencies Act, as well as guidance issued pursuant to the Civil Contingencies Act. According to a senior official from the United Kingdom’s Health Protection Agency, the radiation regulations were developed in response to a European Union directive following the 1986 Chernobyl, Ukraine, nuclear power plant accident.
These regulations require preparation of on- and off-site emergency management plans for release of radioactive materials in the event of a nuclear power plant accident, as well as the conduct of exercises to test preparedness to respond to radiological releases. According to this official, while the radiation regulations did not include directives to prepare for recovery from such accidents, they established a Nuclear Emergency Planning Liaison Group, which formed a Recovery Subgroup to begin addressing this planning need. The 2004 Civil Contingencies Act was enacted following a government consultation exercise that concluded that previous legislation provided an inadequate framework for civil protection against twenty-first century risks, including terrorism. The Civil Contingencies Act established a statutory framework of roles and responsibilities for local responders to address the effects of the most serious emergencies facing the country. Guidance issued pursuant to this legislation established an integrated emergency management system, not unlike that in the United States, comprising six related activities: anticipation, assessment, prevention, preparation, response, and recovery. The November 2005 guidance addressing emergency response and recovery covers the principles, practical considerations, operational doctrine, and examples of good practice for these activities. This guidance describes the generic framework for multiagency response and recovery activities at all levels of government, emphasizing that these activities are not separate activities that occur sequentially. Instead, this guidance contends that recovery considerations should take place early in the response phase, as initial decisions can affect the long-term outcomes for communities. Moreover, because the government recognized that no single approach could meet the needs of every affected area, it did not intend this guidance to be either prescriptive or an operational manual. 
In 2005, the United Kingdom established a special Government Decontamination Service to address issues associated with contaminated land, buildings, open space, infrastructure, and transportation routes from both deliberate and accidental releases of chemical, biological, radiological, and nuclear materials. This service was established because the national government recognized that it would not be cost-effective for each responsible authority—national, regional, and local governments—to maintain the level of expertise needed for the analysis and environmental cleanup of affected areas, given that the release of such material would be a rare event. The Government Decontamination Service has no statutory powers itself, nor does it directly provide analysis and environmental remediation services. Instead, it provides advice and guidance to local governments, maintains and builds a framework of specialized contractors to conduct these activities, and advises the national government regarding response capabilities. In regard to advice to local governments, in November 2006, the Government Decontamination Service was requested to respond to an incident involving the poisoning of Alexander Litvinenko with a milligram—about the size of a grain of salt—of polonium-210. This service was asked to assist the City of Westminster, within greater London, given the international nature of the event, even though the incident was classified as a hazardous materials event rather than a terrorist incident. Following the recovery planning process, the city selected a contractor from the Government Decontamination Service’s list of specialized contractors for the remediation work and used a model contract developed by this service for this purpose. This model contract contains allowable costs per unit, equipment charges, and charge-out rates for the emergency response.
Under the contract, the selected specialized contractor agrees to start with nonaggressive, simple, and less expensive decontamination approaches and then apply more sophisticated approaches, if necessary, to meet the desired cleanup level. The actual payments for these services were made by the owners of the properties contaminated with polonium, such as a hotel where the perpetrators of the crime had stayed. However, cleaning up public premises was a responsibility of the local government. The national government has established ways to help cover the costs of such incidents. These include insurance coverage for damages resulting from acts of terrorism. For large commercial concerns, the insurance industry offers terrorism insurance that is underwritten by the government. For smaller companies, terrorism insurance is offered for an additional 20 percent surcharge on an existing policy. Other funding is available for local governments if such an event would overwhelm their financial resources, such as grants from the national government or the European Union. In regard to its framework of specialized contractors, the service has identified three specialized contractors with the capabilities to address various decontamination scenarios, and it certifies their capabilities through testing. A specialized contractor is invited to visit the location, receives a briefing on the incident scenario, and is asked to develop a recommended decontamination strategy. The Government Decontamination Service then assesses the contractor’s approach and recommendations to identify issues, strengths, and weaknesses. In addition, the service develops improvement plans, backed with exercises, to address identified performance gaps.
For example, in December 2007, the Government Decontamination Service tested and evaluated the capabilities of one of its specialized contractors to analyze and clean up areas contaminated with radioactivity from an RDD event scenario in downtown Birmingham. In Exercise Streetwise, a specialized contractor was fully tested at the venue on its capability to detect and clean up actual radioactive materials. According to a senior official with the Government Decontamination Service, “you cannot get a realistic picture of recovery needs and issues through only tabletop exercises.” Finally, in regard to advice to the national government, the Government Decontamination Service participates in efforts to identify, prioritize, and, as necessary, maintain decontamination-related research projects, and it has established a library of relevant knowledge and experiences drawn from national and international sources. For example, a Government Decontamination Service official told us that this agency is currently engaged in learning more about how to deal with the disposal of radioactive waste that has no known owner, which might be similar to the radioactive waste stemming from an RDD incident. The issue is not only ownership, but also where to put the radioactive debris and how to cover the cost of storage and disposal. In this regard, the United Kingdom has a clearance rule that allows very low-level radioactive waste to be disposed of in less expensive and more numerous solid and hazardous waste landfill sites without specific regulatory approval or exemption. In addition, the United Kingdom and the United States have agreed to increase the exchange of information and personnel regarding research, development, testing, and evaluation, as well as the development of technical standards and operations, to address chemical, biological, radiological, and nuclear incidents.
While passage of the 2004 Civil Contingencies Act was an important legislative step to further emergency preparedness, the reaction of local responders to several domestic incidents following passage of this act made it clear to the national government that these responders needed more comprehensive guidance than that contained in the 2005 guidance for emergency response and recovery activities. One such event was the July 2005 subway bombing in London by a terrorist group that killed 52 people. This incident, in conjunction with other events in 2005, such as the Buncefield Fire and severe flooding, prompted the government in 2006 to form a National Recovery Working Group to address the need for additional recovery guidance for multiple risk scenarios. This working group was composed of a wide range of government departments and agencies, as well as other stakeholders who had been involved in the recovery phase following these events. The government charged this working group with, among other things, (1) producing national recovery guidance for local responders, (2) identifying gaps in the country’s recovery capability and making recommendations to address them, and (3) contributing to the ongoing review of the 2005 nonstatutory guidance for emergency response and recovery activities. In 2007, the working group produced a National Recovery Guidance document. This guidance establishes a planning process for involving recovery stakeholders during the response phase to ensure that the potential impacts of early advice and actions on the future recovery of the area are explored and understood. This online guidance covers 14 generic issues, such as recovery structures and processes, training and exercises, and a lessons learned process, which are reviewed every 3 months and updated as necessary.
For example, the National Recovery Guidance addresses the need for training recovery personnel on essential roles, responsibilities, and procedures to test competencies, as well as the need to design and conduct recovery exercises. While acknowledging that recovery training and exercises lag behind those for response, the National Recovery Working Group found that many organizations had already conducted small-scale recovery exercises and had applied lessons learned from them. One of the lessons identified was the need to exercise the shift from the response phase to the recovery phase. The 2009 version of the UK Recovery Handbooks for Radiological Incidents is considered relevant to radiological releases—accidental and intentional—from the nuclear and nonnuclear industry sectors. The handbooks, first published in 2005 by the United Kingdom’s Health Protection Agency, were developed in response to the need for further recovery guidance following the Chernobyl nuclear power plant accident. The development of these handbooks was sponsored by six government departments and agencies representing national and local governments. According to a senior official from the Health Protection Agency, the European Union also supported the development of a series of generic recovery handbooks for use by other countries based on the structure, format, and content of the handbook developed for the United Kingdom. This official told us that member countries of the European Union are currently customizing their handbooks for use at the national, regional, and local levels. The current handbooks, updated from the 2008 version, include management options for application in the prerelease, emergency, and longer-term phases of an incident. Sources of contamination considered in the handbooks include nuclear accidents, radiological dispersal devices, and satellite accidents.
The handbooks are divided into several independent sections: supporting scientific and technical information; an analysis of the factors influencing recovery; compendia of comprehensive, state-of-the-art datasheets for around 100 management options; guidance on advance planning; a decision-aiding framework comprising color-coded selection tables, look-up tables, and decision trees; and several worked examples. The handbooks can be applied as part of the decision-aiding process to develop a recovery strategy following an incident, for training purposes, and during emergency exercises. An example of a datasheet for one of the management options—high-pressure hosing—contained in the UK Recovery Handbooks for Radiation Incidents: 2009 is provided in figure 1. In November 2008, Exercise Green Star tested government capabilities to recover from a terrorist attack based on RDD scenarios. This was the first time that complex recovery issues had been considered in a national-level exercise. In this exercise, several hundred participants were wholly focused on recovery issues. About 6 weeks after an initial RDD tabletop response exercise, which set the scene for the participants, a 2-day recovery exercise took place involving three scenarios: rural contamination of crops and livestock, contamination of the urban transit infrastructure, and disruption of the water supply. On day one of the exercise, participants looked at immediate cleanup issues, including resource priorities and management responsibilities. On day two, participants considered the longer-term issues of environmental contamination, monitoring strategies, and financial considerations. The use of a real radioactive isotope within the exercise scenario ensured that participants were able to investigate their own and wider mechanisms for obtaining scientific advice during an incident.
A scientific advisory group was put in place to expedite the recovery process by helping to manage scientific input into the decision-making process. An after-action report was prepared following this exercise to capture lessons learned. One observation was that this exercise provided a unique opportunity to develop remediation policies within a compressed time frame, resulting in the development of a sound framework for recovery. The United Kingdom has developed a comprehensive program to ensure an effective response to a range of disruptive emergencies that might affect the country. The country uses the term “resilience” to describe the ability of organizations, like individuals, to withstand or recover easily and quickly from hardships, such as major flooding or a terrorist attack. Community risk registers have been published by local emergency management planners since passage of the 2004 Civil Contingencies Act. These community risk registers address specific risks identified by representatives from local emergency services and public, private, and voluntary organizations. Local resilience forums are required to develop and maintain these registers, which include a description of potential outcomes, likelihoods, impacts, and ratings for various risk categories and subcategories of events. One of the risk categories is an actual terrorist attack using an explosive device. The national government does not expect communities to directly track these risks, but rather to improve their own preparedness based on information from the national risk assessment, which is a classified document. In 2008, the government published a national risk register, which is based on this classified assessment and discusses the likelihood and potential impacts of a range of risks facing the country, including attacks using chemical, biological, radiological, and nuclear materials.
This national risk register contains information that was previously held confidential within government but was published to encourage public debate on security and to help organizations, individuals, families, and communities prepare for encountering threats. The government reports that while there have been very few examples of attacks such as the 1995 release of Sarin gas in a Tokyo subway, it still recognizes the need to prepare and plan for them. In March 2009, the Nuclear Emergency Planning Liaison group published a UK Nuclear Recovery Plan Template based on the National Recovery Guidance and Recovery Plan Guidance Template. This document provides generic guidance for a recovery strategy and structures needed to address a radiological release from a civil or defense nuclear reactor accident, as well as from incidents involving nuclear weapons or special nuclear materials in transit. This guidance is based on examples from existing local government recovery plans and experiences. While not specific to malicious use of radiological and nuclear materials, according to a senior government official with the Health Protection Agency, this guidance and associated monitoring templates would have potential application for recovery from RDD or IND incidents. The UK Nuclear Recovery Plan Template considers recovery to be more than simply the replacement of what has been destroyed and the rehabilitation of those affected—it is a complex social and developmental process rather than just a remediation process. The manner in which recovery processes are undertaken is thus critical to their success and, therefore, best achieved when the affected community is able to exercise a high degree of self-determination. As such, this document provides that during the initial response phase, a Strategic Coordinating Group, which manages this phase of the process, would receive input from a Recovery Advisory Group and a Science and Technology Advisory Cell. 
The Recovery Advisory Group would be charged with identifying immediate and high-level strategic objectives for recovery early in the response phase, including, among other actions, determining remediation levels and when to stop remediation, managing radiation-contaminated waste, and managing compensation arrangements and recovery costs. These objectives would be accompanied by targets and milestones that the community would use as a basis to track recovery progress—for example, cleanup activities—with the aid of various predesigned templates. The Science and Technology Advisory Cell would include experts to advise on health and welfare, environment and infrastructure, and monitoring response and recovery activities. On transition to the recovery phase of an incident, the Strategic Coordinating Group would be replaced by a Strategic Recovery Coordinating Group. The Strategic Recovery Coordinating Group would be supported by specific subgroups. These subgroups would include ones for finance and legal issues, communications, business and economic recovery, health and welfare, environment and infrastructure, and monitoring. For example, the subgroup on environment and infrastructure would identify viable options for remediation of food production systems, drinking water, and inhabited areas, including identifying options for the restoration and cleanup of the physical infrastructure and natural environment. The guidance suggests that this subgroup consider forming task groups to, among other things, address waste management and disposal, establish criteria to determine when remediation can cease, and evaluate the feasibility of and recommend remediation options for defined affected areas. The templates would be referred to throughout the recovery to ensure that the work of the Strategic Recovery Coordinating Group is focused and on track.

In addition to the person named above, individuals who made important contributions to this report were Dr.
Timothy Persons (Chief Scientist), Ned Woodward (Assistant Director), Nancy Crothers, James Espinoza, Tracey King, Thomas Laetz, Jay Smale, Vasiliki Theodoropoulos, and Keo Vongvanith.

A terrorist's use of a radiological dispersal device (RDD) or improvised nuclear device (IND) to release radioactive materials into the environment could have devastating consequences. GAO was asked to examine (1) the extent to which the federal government is planning to fulfill its responsibilities to help cities and their states clean up contaminated areas from RDD and IND incidents, (2) what is known about the federal government's capability to effectively clean up these contaminated areas, and (3) suggestions for improving federal preparedness to help cities and states recover from these incidents. The report also discusses recovery activities in the United Kingdom. GAO reviewed federal laws and guidance; interviewed officials from the Department of Homeland Security (DHS), Federal Emergency Management Agency (FEMA), Department of Energy (DOE), and Environmental Protection Agency (EPA); and surveyed emergency management officials from 13 cities at high risk of attack, their 10 states, and FEMA and EPA regional offices. FEMA, the DHS agency responsible for developing a comprehensive emergency management system, has not developed a national disaster recovery strategy, as required by law, or issued specific guidance to coordinate federal, state, and local government recovery planning for RDD and IND incidents, as directed by executive guidance. To date, most federal attention has been given to developing a response framework, with less attention to recovery. Responding to an attack would involve evacuations and providing treatment to those injured; recovering from an attack would include cleaning up the radioactive contamination to permit people to return to their homes and businesses.
Existing federal guidance provides limited direction for federal, state, and local agencies to develop recovery plans and to conduct exercises to test recovery preparedness. Of the over 90 RDD and IND exercises to test response capabilities in the last 6 years, only 3 included a recovery component. GAO's survey found that almost all 13 cities and most states believe they would need to rely heavily on the federal government to conduct and fund analysis and environmental cleanup activities. However, city and state officials were inconsistent in their views on which federal agencies to turn to for help, which could hamper the recovery effort. Although DOE and EPA have experience cleaning up localized radiation-contaminated areas, it is unclear whether this federal capability is sufficient to effectively direct the cleanup after RDD or IND incidents and to efficiently address the magnitude of cleanup that would follow these incidents. According to an expert at DOE's Idaho National Laboratory, experience has shown that not selecting the appropriate decontamination technology can generate waste types that are more difficult to remove than the original material and can create more debris requiring disposal, leading to increased costs. Limitations in laboratory capacity to rapidly test potentially millions of material samples during cleanup, and uncertainty regarding where to dispose of radioactive debris, could also slow the recovery process. At least two-thirds of the city, state, and federal respondents expressed concern about federal capability to provide the necessary cleanup actions after these incidents. Nearly all survey respondents had suggestions to improve federal recovery preparedness for RDD and IND incidents. For example, almost all the cities and states identified the need for a national disaster recovery strategy to address gaps and overlaps in federal guidance.
All but three cities wanted additional guidance, for example, on monitoring radioactivity levels, cleanup standards, and management of radioactive waste. Most cities wanted more interaction with federal agencies and joint exercising to test recovery preparedness. Finally, GAO's review of the United Kingdom's preparedness to recover from radiological terrorism showed that it has already taken actions similar to those suggested by GAO's survey respondents, such as issuing national recovery guidance, conducting a full-scale recovery exercise, and publishing national recovery handbooks for radiation incidents.
Almost all American workers are covered under Social Security, and in 2011, 55 million Americans were receiving Social Security benefits. The statement is a key tool for communicating with the public about these benefits and the long-term financial challenges the system faces. At present, the cost of Social Security benefits is projected to exceed sources of funding, and the program is projected to be unable to pay a portion of scheduled benefits by 2036. The shortfall stems primarily from the fact that people are living longer and labor force growth has slowed. In 2010, for the first time since 1983, the Social Security trust funds began paying out more in benefits than they received through payroll tax revenue, although trust fund interest income more than covers the difference, according to the 2011 report of the Social Security trust funds’ Board of Trustees. The projected long-term insolvency of the Social Security program necessitates reform to restore the system’s long-term stability and assure its sustainability. Accomplishing these goals for the long term requires that Social Security either receive additional income (revenue increases), reduce costs (benefit reductions), or undertake some combination of the two. A wide variety of options for reform have been proposed. Some of the reform options focus on restoring long-term stability; however, a few aim to enhance benefits for specific groups, such as widows and low earners who are especially at risk for poverty. Our prior work has noted that reform proposals should be evaluated as packages that strike a balance among the individual elements of a proposal and the interactions among those elements, and that the overall evaluation of any particular reform proposal depends on the weight individual policy makers place on various criteria. Our framework for evaluating reform proposals considers not only solvency but other aspects of the program as well.
Specifically, the framework uses three basic criteria: the extent to which a proposal achieves long-term stability and how it would affect the economy, including overall savings rates, and the federal budget; the relative balance struck between the goals of individual equity (rates of return on individual contributions) and income adequacy (level and certainty of benefits); and how readily a proposal could be implemented, administered, and explained to the public. If reform is enacted, educating the public about program changes and how they will affect benefits will likely be a high priority for SSA, and the statement is likely to be one of the agency’s key mechanisms for accomplishing this goal. The Social Security Act requires SSA to provide annual statements with benefits and earnings information to individuals age 25 and older who have a Social Security number and wages or net earnings from self-employment, or whose pattern of earnings indicates a likelihood of noncovered employment. The law requires each statement to contain the following: an estimate of the potential monthly Social Security retirement, disability, survivor, and auxiliary benefits and a description of the benefits under Medicare; the amount of wages paid to the employee and income from self-employment; an estimate of the individual’s aggregate contributions paid to Social Security, including employer contributions; an estimate of the individual’s aggregate contributions paid to Medicare, including employer contributions; and for individuals with noncovered employment, an explanation of the potential effects of the Windfall Elimination Provision and the Government Pension Offset on their monthly Social Security benefits.
The requirement to provide the annual statements was phased in beginning in fiscal year 1995, when SSA was required to provide the statement—then named the Personal Earnings and Benefit Estimate Statement (PEBES)—to eligible workers who had attained the age of 60 by October 1, 1994, who were not receiving Social Security benefits, and for whom a current mailing address could be determined. Starting in fiscal year 2000, SSA was required to provide the annual statement—now called the Social Security Statement—to eligible workers age 25 and older. These statements generally have been provided about 3 months before the worker’s birthday. In addition, since fiscal year 1990, eligible workers have had the option of requesting a copy of the statement from SSA at any time. Between 1995, when SSA began providing this information to workers annually, and March 2011, when the agency suspended this effort due to budgetary concerns, SSA mailed the statement to workers using addresses on file with the Internal Revenue Service. In addition, between March and April 1997, SSA permitted online dissemination of the statement in an attempt to respond to customer information needs and move toward electronic service delivery. However, the brief effort was suspended after public outcry amid concerns about the privacy of sensitive information on the Internet. Indeed, we have identified federal information security as a governmentwide high-risk area and emphasized that ineffective information security controls can result in significant risks, including inappropriate access to sensitive information, such as personal information, and the undermining of agency missions due to embarrassing incidents that diminish public confidence in government. The current statement has evolved over the years, partly in response to our recommended changes.
The initial PEBES was a six-page document and contained information such as the worker’s earnings record, benefits estimates, and a question-and-answer section about Social Security. However, in a previous report, we found that PEBES did not clearly communicate the complex information that workers needed to understand SSA’s programs and benefits. In response, SSA made significant changes to the format and presentation of the PEBES and began mailing a four-page Social Security Statement to the public in October 1999. While the newer statement was shorter, better organized, and easier to read, our follow-up review in 2000 identified some remaining areas of concern, including clarity of the statement’s purpose and explanations of benefits. In our 2005 review of the statement’s understandability, we again found weaknesses in the statement’s design and recommended that SSA develop a plan for regularly evaluating and revising the statement. In 2006, SSA implemented changes to the content of the statement as a result of new requirements included in the Social Security Protection Act of 2004. These changes included adding a description of the Windfall Elimination Provision and the Government Pension Offset of the Social Security Act. However, at that time and since, SSA has not made some of the changes to the statement we recommended in 2005, such as using graphics to aid readers in quickly comprehending information. Over time, SSA also began including inserts for specific age groups with the statement. For example, in October 2000, the agency began sending a “Thinking of retiring?” insert to workers age 55 and older. Because SSA considers the statement to be one of the three key elements of the agency’s financial literacy initiative, “Encourage Saving,” this insert was updated in 2008 to improve and clarify benefits information provided to this age group and a new insert was created to provide age-appropriate benefits information to younger workers. 
Since February 2009, SSA has sent “What young workers should know about Social Security and saving” with the statement to workers age 25 to 35. Since SSA suspended mailings of the statement in March 2011, the agency has been assessing the feasibility of making the statement available online. Based on the Commissioner of SSA’s earlier testimony to Congress, as well as our interviews with officials from various SSA offices, SSA is currently considering making the statement available online for all eligible individuals and resuming mailings of the statement to eligible individuals age 60 and older who have not yet begun claiming benefits. Further, when mailings resume, SSA expects to allow anyone to request the paper statement, including those younger than 25. Officials said that, at the earliest, the provision of the statement, both online and through the mail, may resume early in calendar year 2012, though they are currently unsure of the timeline. Although the Commissioner has not yet announced a final decision on how SSA will proceed with the provision of the statement, SSA officials expect he will make a final decision this summer. In the interim, copies of the statement are not available from SSA, and SSA is instead directing individuals to the agency’s online Retirement Estimator to estimate their future benefits. However, because the estimator does not provide individuals with their earnings records or personalized information on other SSA benefits, such as those for disability, certain statement information is currently unavailable. Although SSA’s first attempt to make the statement available online in 1997 was short-lived due to privacy concerns, SSA may now be better positioned to move forward with this approach, though it is unknown when the agency will be fully ready. SSA is developing a new electronic authentication system and a “MySocialSecurity” Web page to allow individuals to access personalized SSA information online.
Officials report that both the authentication system and the “MySocialSecurity” Web page have already undergone initial testing to assess their feasibility and public opinion about such an approach. While the agency had not determined what information would initially be made available through this portal, when the Commissioner suspended mailings of the statement in March of this year, SSA decided that the statement would be the top priority. According to officials, both the authentication system and the statement page for “MySocialSecurity” are currently in the initial development phase, as staff build the prototypes. Once the prototypes are completed, SSA will conduct additional testing, both internally and with the public, on an iterative basis, until the agency determines that both the authentication system and the statement page on “MySocialSecurity” provide sufficient safeguards and are user-friendly. Recently, officials conducted a risk assessment of the information contained in the statement to determine whether the authentication system has the appropriate safeguards in place. SSA officials said that testing of the online statement page will begin in August, but they could not provide a date for when the authentication system testing will begin. Because officials do not know how long the testing phase will last, they could not provide a date for when the statement will be available to the public online. Although officials told us that they plan to fully assess the portal’s safeguards before moving ahead with the online statement, SSA’s Inspector General recently expressed concerns about the agency’s information technology systems, including service delivery.
Specifically, in a recent report on SSA management challenges, while the Inspector General noted his support of SSA’s decision to offer more services online to enhance customer service, he cautioned the agency to proceed carefully with this initiative, ensuring proper authentication controls are in place before full implementation. As we have reported over the years, protecting sensitive information is a governmentwide concern. Consistent with the evolving and growing nature of the threats to federal information systems, federal agencies are reporting an increasing number of security incidents. These incidents put sensitive, personally identifiable information at risk, which can expose individuals to loss of privacy, identity theft, and financial crimes. One of the three most prevalent types of incidents reported by federal agencies during fiscal year 2010 was unauthorized access, where an individual gains logical or physical access to a system without permission. While SSA officials reported that upcoming tests of the portal will focus on its user-friendliness, they do not have plans in place for publicizing the online statement. Specifically, the project lead for the online statement said that an internal work group is currently considering options for SSA’s public roll-out of the online statement, but the agency has not yet developed a plan for carrying it out. However, if significant numbers of workers do not choose to access the statement online, SSA could face increased requests for mailed paper copies of the statement and higher administrative costs. Key SSA officials involved in the project said they are optimistic that once the statement is available online, many people will want real-time access to this information. 
Nonetheless, through its own 2010 survey of statement recipients, SSA found that only 21 percent expressed a preference for receiving the statement electronically instead of by mail, including 8 percent who said they would prefer to receive the statement on request via e-mail and 13 percent who said they would prefer to obtain it online. These data suggest that SSA will need to employ a substantial public relations strategy to ensure workers are made aware of and encouraged to access the online statement. SSA officials also could not provide information on how they plan to address access issues related to the online statement. Although SSA currently has a pilot project underway that has made computer workstations available to the public in selected field offices, SSA officials have not yet determined how those could be used to access the portal and online statement. However, such use may be needed by individuals who do not otherwise have Internet access. In addition, key officials involved in the online statement project could not provide information on any other plans SSA is developing to address Internet access issues. Concerning access to the online statement for workers with limited English proficiency, officials explained that they would like to develop Spanish versions of the portal and online statement in the future, but the first publicly released versions will be in English only. However, when SSA resumes mailings of the paper statement, workers will be able to request paper statements in English or Spanish. Finally, while the Commissioner cited budget constraints as the reason for suspending mailings of the statement in March 2011, total costs associated with the agency’s plans for resuming provision of the statement are unknown.
Although SSA is required to provide the statement annually to eligible individuals who meet the statutory requirements, officials said SSA’s suspension of the statement mailings is expected to save the agency $30 million in fiscal year 2011. Further, officials said that the agency decided not to renew the contract for mailing the statements for another year as one of several measures to ensure SSA remained within its budget. SSA officials indicated that the agency’s fiscal year 2011 appropriation for administrative expenses, which is used for the provision of the statement, was $1 billion below the President’s budget request and about $23 million below its fiscal year 2010 appropriation. While the suspension may save SSA administrative costs in the short-term, the full extent of savings is unknown, as officials could not provide us with an estimate of the total cost of the online statement project. SSA officials said they expect that this project will be cost-effective over time; however, it may involve greater costs in the short-term for development, testing, and publicity. Further, SSA officials in charge of drafting the new contract for mailing statements acknowledged that it is unknown how many workers will request mailed statements once the online statement is available, as that number will depend on factors such as individuals’ willingness and capability to access the statement online. The new contract for mailing statements, likely only for those age 60 and older who have not yet begun claiming benefits and those who request it, is currently being drafted as a cost reimbursement contract, which is considered high risk for the government because of the potential for cost escalation. SSA officials said they expect to improve the usefulness of the statement for some by moving it to an online format. 
While they acknowledged that budgetary constraints were the driving factor behind the agency’s consideration of online statements, they also suggested that the online format has advantages for individuals. Specifically, individuals will have immediate access to their statements through a format that is commonly used by banks, health care providers, and others. For those with Internet access, the information will be available whenever they are thinking about retirement planning. Further, in the event that they spot an error in their earnings information and request a correction, verification that their earnings history has been accurately updated will be easier because the online statement will be readily available. Further, SSA plans to make some limited design enhancements to the online version of the statement. Specifically, officials told us the online format offers the agency an opportunity to provide links to related information, thereby allowing SSA to minimize some lengthy descriptions and add richer information without adding to the statement’s length. For example, officials told us they plan to link some of the information currently contained on the last page of the statement to the related benefits and earnings data in the online version, as well as add links to SSA’s online tools for estimating and applying for retirement benefits. We have previously noted that the length of a document can influence how useful it is to beneficiaries, and some groups have concerns that too much information can overwhelm beneficiaries. The project lead for the online statement acknowledged this and told us SSA plans to draw upon industry best practices for screen design and layout in order to make the online statement more reader-friendly than the paper statement. (According to officials, a prototype was not available at the time of our review.) 
In addition, SSA plans to integrate content from its special inserts for workers age 55 and older and those age 25 to 35 in the online statement and make electronic facsimiles of the paper inserts available for viewing and printing. While SSA is making these limited changes, the project lead said the first publicly released version of the online statement will be as similar to the current mailed statement as possible. Further, the same content that is currently available on the inserts will be made available online. While SSA plans to incorporate some graphics into the online statement, these are limited to what is already present in the paper statement and inserts. However, in our prior work, using graphics to replace text and make information more quickly and easily understandable was a common theme that emerged in the suggestions made by focus groups and a benefits consulting firm. Officials told us they have no current plans to update the paper statement, which will still be in use, even though we and others have suggested ways to improve its design. Under SSA’s current plans, in addition to mailing the paper statement to individuals meeting certain criteria and any other eligible individuals who request it, SSA will allow individuals to view and print a facsimile of the paper statement when accessing the online statement. Officials told us they do not plan to change the statement’s content because much of it is statutorily required and individuals have expressed a high level of satisfaction with the statement in SSA’s surveys and focus groups. For example, in SSA’s 2010 survey, 41 percent of respondents said they were very satisfied with the statement overall and another 44 percent reported being somewhat satisfied. However, we and others have previously identified ways in which SSA could modify the design of the paper statement to improve its usefulness for recipients. 
For example, in our 2005 report, we noted that the paper statement lacks white space and is text-intensive, which means that important concepts may not stand out. Similarly, in SSA’s own focus groups, participants frequently noted that the statement has too much text, although in the agency’s 2010 survey, some respondents said the statement is missing key information about their retirement benefit amount. According to a 2009 report from the Social Security Advisory Board, “information is presented as a laundry list of facts and data, rather than cogent summaries of things that people need to know to make informed decisions.” Furthermore, even though we and others have previously reported that certain information contained in the statement is confusing, SSA has no plans at this time to change its content in either the paper or online version. In focus groups conducted for our 2005 report, during which participants reviewed the statement’s content and design and compared them with those of a private sector benefits statement, participants provided detailed insights about the areas of the statement they understood and how confusing information might be improved. For example, participants identified some cases where they did not understand the actual meaning of a word or phrase, such as “actuary” or “intermediate assumption.” Also, the phrase “compact between the generations,” used to describe the pay-as-you-go nature of Social Security, was unclear to many. Participants across focus groups also did not understand explanations of certain concepts discussed in the statement, such as the role of credits in determining their eligibility for retirement benefits versus the role of earnings in determining their actual retirement benefit. Information about the financial stability of the Social Security system was also confusing for most focus group participants. 
Phrases such as “we will need to resolve these issues soon” did not provide the information many felt they needed to understand the problem and what personal action, if any, they were expected to take. Additionally, according to a benefits consulting firm that evaluated the statement, it does not compare the retirement benefit with how much income a person may need in retirement or offer suggestions and strategies for meeting income goals through other sources of retirement income. As a result, we concluded workers may not fully understand their benefits and the role Social Security should play in their retirement planning. SSA’s own financial literacy initiative also provides detailed information on opportunities for improving the statement’s usefulness, particularly to help people plan for retirement, and SSA considers the statement to be a key component of this initiative. However, the extent to which staff from the office in charge of the initiative have been consulted on the design or content of the online statement, or on decisions about which groups should continue to receive a print statement, is unclear. According to the key SSA official responsible for the initiative, the initiative’s studies and the work of other researchers have found that the way information is packaged can affect how individuals respond. One research project that was funded through SSA’s Financial Literacy Research Consortium found that the presentation of benefit information can affect the age at which people report they will claim Social Security benefits. For example, emphasizing gains (delaying claiming by one year will increase your annuity by $X per month) rather than losses (claiming one year earlier will reduce your annuity by $X per month) led to study respondents reporting they would delay claiming. 
Another project examined how alternative approaches for presenting information on future estimated benefits might assist individuals in retirement planning and reduce the potential for confusion. Other SSA-funded research also provides insights into areas where SSA might focus future efforts to improve the statement’s usefulness. One study of near-retirees’ ability to estimate their Social Security benefits found the accuracy of their estimates has not improved since the statement has been universally distributed. According to the researcher, individuals’ ability to accurately estimate their benefits has implications for their savings rates and investment decisions, among other considerations. Additionally, the study found that many people may interpret Social Security benefits as accruing to households rather than to individuals and therefore estimate their benefits at either half or double their actual value. This misunderstanding may be attributable to the statement lacking a general explanation of spousal benefits and not cautioning recipients that the estimates are based on their own individual earnings records and may also depend on their spouses’ earnings if they are married. A separate project examined what people know about various aspects of the benefits offered by Social Security and assessed their knowledge gaps. Half of those responding to a seven-question quiz on basic Social Security knowledge, included as part of a survey of working-aged adults, received a grade of D or F. While two-thirds of the respondents reported that they recalled receiving the statement within the last 6 months, very few respondents understood how their Social Security benefits are calculated. Although SSA’s budgetary decision to suspend statement mailings will leave some Americans without a statement this year, it has also created the impetus for SSA to seek new and more cost-effective ways to distribute this information. 
Providing the statement online could be one of those ways, and if SSA can assure the security of this sensitive information, this approach holds real promise: it can meet the electronic demands of an increasingly Internet-literate population while providing flexibility for improved statement design. Yet because the decision to suspend was made relatively abruptly, the agency faces pressure to take quick action that will restore public access to the statements. As a result, officials currently are not in a position to fully redesign the statement to improve its usefulness and clarity. Furthermore, SSA has not yet considered how it will reach those who cannot or will not obtain the statement online, though at least some will not be able to read statements provided only in English. Because people in these groups are likely to be lower earners, they can least afford to remain uninformed about their Social Security benefits. Access must be addressed before the online statement can be considered a success, yet because the statement is currently unavailable, there is limited time for SSA to consider these important questions in a measured way. Still, it is vital that SSA address the issues of access and design. Any changes made to the Social Security program to restore fiscal stability or for any other reasons must be explained to the American people clearly and quickly, to assure that participants in this important social insurance program understand what benefits they can expect and when. The statement is SSA’s best option for communicating this important information and, as such, deserves to occupy a position of higher priority in SSA planning and decisionmaking. Therefore, as SSA considers moving forward with an online statement, we recommend the following: the Commissioner of SSA should take steps to ensure access to the statement for all eligible workers, including those without Internet access or English proficiency. 
Doing so will assure that the statement remains an important tool for communicating with all workers about the Social Security program. We provided a draft of this testimony to SSA for review and comment. SSA provided technical comments, which we incorporated as appropriate. Chairman Johnson, Ranking Member Becerra, and Members of the Committee, this concludes my prepared statement. I would be happy to respond to questions. For further questions on this testimony, please contact me at (202) 512-7215 or bovbjergb@gao.gov. Individuals who made key contributions to this testimony include Michael Collins, Rachel Frisk, Kristen Jones, Amy Anderson, David Chrisinger, Carla Craddock, Sarah Cornetto, Sheila McCoy, Susan Offutt, Frank Todisco, Walter Vance, Christie Motley, Mike Alexander, David Hong, and Brandon Pettis.

Appendix I: 2011 Social Security Statement

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Social Security Statement (the statement) is the federal government's main document for communicating with more than 150 million workers about their Social Security benefits. Provided annually, it serves as a key financial literacy tool that can educate the public about Social Security Administration (SSA) program benefits, aid in financial planning, and ensure that workers' earnings records are complete and accurate. The statement is also a key tool for communicating with the public about the long-term financial challenges the Social Security system faces. However, due to budget constraints, SSA chose to suspend mailings of the statement in March 2011. 
GAO examined (1) the current status of the statement and (2) ways SSA plans to improve the usefulness of the statement. To address these issues, GAO interviewed SSA officials and reviewed agency documents and our prior work on the statement's understandability. GAO also provided a draft of this testimony to SSA for review and comment. SSA is currently preparing to make the statement available online; however, the agency does not yet know the timeline for implementation and has not finalized its plans for publicizing its availability or addressing access issues. SSA is developing a new Web portal to allow individuals to access personalized SSA information online. However, because the portal and online statement are currently in the initial development phases and thus have not yet been fully tested, agency officials do not know when the online statement will be available to the public. In addition, SSA does not yet have plans in place for publicizing the online statement or ensuring access for individuals without Internet access or English proficiency. Finally, because the agency does not have a total cost estimate for the online statement project, and it is unclear how many workers will request mailed statements after this information is made available online, it is unknown if SSA will realize the budget savings it expects from suspending statement mailings, at least in the short-term. Although SSA expects to improve the usefulness of the statement for some by moving it to an online format, the agency is taking only limited steps to improve the statement's overall content and design. A key agency official said that the first publicly released version of the online statement will be as similar to the mailed paper statement as possible, and SSA has no plans to update the paper statement's content or design at this time. 
However, over the years, GAO and others have reported that the design of the statement could be modified and certain information contained in the statement could be clarified to improve the statement's usefulness for recipients. For example, focus group participants in our prior study suggested that using graphics to replace text would make information more easily understandable. Furthermore, while SSA's own financial literacy initiative also provides detailed information on ways to improve the statement's usefulness in helping people plan for retirement, the extent to which staff from SSA's office responsible for this initiative have been consulted on the design or content of the online statement is unclear. As SSA considers moving forward with an online statement, we recommend that the Commissioner of SSA ensure access to the statement for all workers, including those without Internet access or English proficiency. In comments, SSA noted that paper statements will continue to be available, on request, in English and Spanish.
Each year, OMB and federal agencies work together to determine how much the government plans to spend on IT projects and how these funds are to be allocated. Planned federal IT spending has now risen to an estimated $79.4 billion for fiscal year 2011, a 1.2 percent increase from the 2010 level of $78.4 billion. OMB plays a key role in helping federal agencies manage their investments by working with them to better plan, justify, and determine how much they need to spend on projects and how to manage approved projects. To assist agencies in managing their investments, Congress enacted the Clinger-Cohen Act of 1996, which requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by federal agencies and report to Congress on the net program performance benefits achieved as a result of these investments. Further, the act places responsibility for managing investments with the heads of agencies and establishes chief information officers (CIO) to advise and assist agency heads in carrying out this responsibility. Another key law is the E-Government Act of 2002, which requires OMB to report annually to Congress on the status of e-government. In these reports, referred to as the Implementation of the E-Government Act reports, OMB is to describe the Administration’s use of e-government principles to improve government performance and the delivery of information and services to the public. To help carry out its oversight role and assist the agencies in carrying out their responsibilities as assigned by the Clinger-Cohen Act, OMB developed a Management Watch List in 2003 and a High Risk List in 2005 to focus executive attention and to ensure better planning and tracking of major IT investments. Consistent with the Clinger-Cohen Act, OMB reported on the status of investments on the Management Watch List and High Risk List in its annual budget documents. 
Over the past several years, we have reported and testified on OMB’s initiatives to highlight troubled projects, justify investments, and use project management tools. We have made multiple recommendations to OMB and federal agencies to improve these initiatives to further enhance the oversight and transparency of federal projects. Among other things, we recommended that OMB develop a central list of projects and their deficiencies and analyze that list to develop governmentwide and agency assessments of the progress and risks of the investments, identifying opportunities for continued improvement. In addition, in 2006 we also recommended that OMB develop a single aggregate list of high-risk projects and their deficiencies and use that list to report to Congress on progress made in correcting high-risk problems. As a result, OMB started publicly releasing aggregate data on its Management Watch List and disclosing the projects’ deficiencies. Furthermore, OMB issued governmentwide and agency assessments of the projects on the Management Watch List and identified risks and opportunities for improvement, including risk management and security. Table 1 provides a historical perspective of the number of projects on the Management Watch List and their associated budgets for the period of time during which OMB updated the Management Watch List. The table shows that while the number of projects and their associated budgets on the list generally decreased, the number of projects on the Management Watch List increased by 239 projects and $13 billion for fiscal year 2009, and represented a significant percentage of the total budget. More recently, to further improve the transparency into and oversight of agencies’ IT investments, and to address data quality issues, in June 2009, OMB publicly deployed a Web site, known as the IT Dashboard, which replaced the Management Watch List and High Risk List. 
It displays federal agencies’ cost, schedule, and performance information for the approximately 800 major federal IT investments at 28 federal agencies. According to OMB, these data are intended to provide a near real-time perspective of the performance of these investments, as well as a historical perspective. Further, the public display of these data is intended to allow OMB, other oversight bodies, and the general public to hold the government agencies accountable for results and progress. The Dashboard was initially deployed in June 2009 based on each agency’s Exhibit 53 and Exhibit 300 submissions. After the initial population of data, agency CIOs have been responsible for updating cost, schedule, and performance fields on a monthly basis, which is a major improvement from the quarterly reporting cycle OMB previously used for the Management Watch List and High Risk List. For each major investment, the Dashboard provides performance ratings on cost and schedule, a CIO evaluation, and an overall rating, which is based on the cost, schedule, and CIO ratings. The cost rating is determined by a formula that calculates the amount by which an investment’s aggregated actual costs deviate from the aggregated planned costs. Table 2 displays the rating scale and associated category for cost variations. An investment’s schedule rating is calculated by determining the average days late or early. Table 3 displays the rating scale and associated category for schedule deviations. Each major investment on the Dashboard also includes a rating determined by the agency CIO, which is based on his or her evaluation of the performance of each investment. The rating is expected to take into consideration the following criteria: risk management, requirements management, contractor oversight, historical performance, and human capital. This rating is to be updated when new information becomes available that would impact the assessment of a given investment. 
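The rating mechanics can be sketched in a few lines of code. This is an illustrative reconstruction, not OMB's published formula: only a few threshold bands are confirmed by the examples cited in this report (a cost variance under 5 percent rates a 10, a variance of 10 to under 15 percent rates an 8, and a schedule slip of at least 30 but under 90 days rates a 5), so the remaining bands, the color cut points, and the overall-rating helper (which follows the averaging rule described later in this report) are assumptions.

```python
def cost_rating(planned_cost, actual_cost):
    """Map aggregate cost variance (percent) to a 0-10 rating.

    Only the <5% -> 10 and 10-<15% -> 8 bands are confirmed by the
    report; the intermediate 5-percentage-point steps are assumed.
    """
    variance = abs(actual_cost - planned_cost) / planned_cost * 100
    bands = [(5, 10), (10, 9), (15, 8), (20, 7), (25, 6),
             (30, 5), (35, 4), (40, 3), (45, 2), (50, 1)]
    for limit, rating in bands:
        if variance < limit:
            return rating
    return 0

def schedule_rating(avg_days_late):
    """Map average days late to a rating; only the
    '>=30 and <90 days -> 5' band is confirmed by the report."""
    if avg_days_late < 30:
        return 10  # assumed on-track band
    if avg_days_late < 90:
        return 5   # confirmed band from the report's examples
    return 0       # assumed

def color(rating):
    """Translate a rating into the Dashboard's color scheme; the
    report confirms 10 -> green and 8 or 5 -> yellow, so the exact
    cut points here are assumptions consistent with those examples."""
    if rating >= 9:
        return "green"
    if rating >= 5:
        return "yellow"
    return "red"

def overall_rating(cost, schedule, cio):
    """Average the three ratings, except that when the CIO's rating is
    lower than both the cost and schedule ratings, it becomes the
    overall rating (the rule this report attributes to the Dashboard)."""
    if cio < cost and cio < schedule:
        return cio
    return (cost + schedule + cio) / 3
```

Under these assumed thresholds, the Law Enforcement Wireless Communication example from this report reproduces: a 12 percent cost variance maps to an 8 and therefore to "yellow," not the "green" the Dashboard displayed.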
Lastly, the Dashboard calculates an overall rating for each major investment. Figure 1 identifies the Dashboard’s overall ratings scale. This overall rating is an average of the cost, schedule, and CIO ratings, with each representing one-third of the overall rating. However, when the CIO’s rating is lower than both the cost and schedule ratings, the CIO’s rating will be the overall rating. Of the 792 major investments on the Dashboard as of May 2010, 540 (68 percent) were green, 204 (26 percent) were yellow, and 48 (6 percent) were red. Earned value management is a technique that integrates the technical, cost, and schedule parameters of a development contract and measures progress against them. During the planning phase, a performance measurement baseline is developed by assigning and scheduling budget resources for defined work. As work is performed and measured against the baseline, the corresponding budget value is “earned.” Using this earned value metric, cost and schedule variances, as well as cost and time to complete estimates, can be determined and analyzed. Without knowing the planned cost of completed work and work in progress (i.e., the earned value), it is difficult to determine a program’s true status. Earned value allows for this key information, which provides an objective view of program status and is necessary for understanding the health of a program. As a result, earned value management can alert program managers to potential problems sooner than using expenditures alone, thereby reducing the chance and magnitude of cost overruns and schedule slippages. Moreover, earned value management directly supports the institutionalization of key processes for acquiring and developing systems and the ability to effectively manage investments—areas that are often found to be inadequate on the basis of our assessments of major IT investments. 
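The earned value mechanics described above reduce to a handful of standard formulas. The sketch below uses the conventional EVM terms (planned value, earned value, actual cost, budget at completion), which this report does not spell out explicitly; it is a generic illustration of the technique, not the Dashboard's implementation.

```python
def evm_metrics(pv, ev, ac, bac):
    """Standard earned value management calculations.

    pv  = planned value (budgeted cost of work scheduled)
    ev  = earned value  (budgeted cost of work performed)
    ac  = actual cost of work performed
    bac = budget at completion (total planned budget)
    """
    cv = ev - ac    # cost variance: negative means over cost
    sv = ev - pv    # schedule variance: negative means behind schedule
    cpi = ev / ac   # cost performance index (<1 means over cost)
    spi = ev / pv   # schedule performance index (<1 means behind)
    eac = bac / cpi # estimate at completion, if current efficiency holds
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}
```

For example, a program that planned $500 of work, completed $400 worth, and spent $450 doing so is both over cost (CV = -$50) and behind schedule (SV = -$100), and a $1,000 budget projects to roughly $1,125 at completion; expenditures alone ($450 spent against $500 planned) would have suggested the program was under budget.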
In August 2005, OMB issued guidance requiring, among other things, that agencies develop comprehensive policies to ensure that their major and high-risk IT development projects use earned value management to manage their investments. Cost and schedule performance ratings were not always accurate for the selected investments we reviewed. A key reason for the inaccuracies is that the Dashboard’s cost and schedule ratings do not reflect current performance. Another issue with the ratings is that large inconsistencies exist in the number of milestones that agencies report on the Dashboard. The cost and schedule performance ratings of selected investments were not always accurate. There were several instances of inaccurate cost ratings; two investments experienced notable discrepancies, while the others were less dramatic. Specifically, 5 of the 8 selected investments on the Dashboard had inaccurate cost ratings: BioSense, Financial Management Modernization Initiative, Joint Precision Approach and Landing System, Law Enforcement Wireless Communication, and Unified Financial Management System. For example, the Dashboard rated the Law Enforcement Wireless Communication investment a 10 for cost (less than 5 percent variance) every month from July 2009 through January 2010. However, our analysis shows the investment’s cost rating during December 2009 and January 2010 was equivalent to an 8 (a variance of 10 percent to less than 15 percent). Accordingly, this investment’s cost performance should have been rated a “yellow” instead of a “green,” meaning it needed attention. Further, the Dashboard’s cost rating for the Financial Management Modernization Initiative reported that this investment was “yellow,” while it should have been “green” for 7 months. Maneuver Control System, Sequoia Platform, and Risk Management Agency-13 are the three investments that had accurate cost ratings. 
Figure 2 shows the comparison of selected investments’ Dashboard cost ratings to GAO’s ratings for the months of July 2009-January 2010. There were fewer instances of discrepancies with the schedule ratings; however, these discrepancies were also notable. Specifically, of the 8 selected investments, the Dashboard’s schedule ratings were inaccurate for 2 investments: Risk Management Agency-13 and the Unified Financial Management System. The Unified Financial Management System’s last completed milestone was in May 2009 and the Dashboard rating for the investment’s schedule has been a 10 since July 2009. However, investment data we examined showed the schedule rating should have been a 5 (greater than or equal to 30 days and less than 90 days behind schedule) from September 2009 through December 2009. As a result, this investment’s schedule performance should have been rated a “yellow” instead of a “green” for those months. Additionally, the Dashboard’s schedule rating for Risk Management Agency-13 reported that this investment was “red” for two months, while it should have been “green,” and “yellow” for four months, when it should have been “green.” BioSense, Financial Management Modernization Initiative, Joint Precision Approach and Landing System, Law Enforcement Wireless Communication, Maneuver Control System, and Sequoia Platform are the 6 investments that had accurate schedule ratings. Figure 3 shows the comparison of selected investments’ Dashboard schedule ratings to GAO’s ratings for the months of July 2009-January 2010. In addition to determining that cost and schedule ratings are not always accurate, we found other data inaccuracies. Specifically, rebaseline information on the Dashboard was not always accurate. 
Best practices and GAO’s Cost Estimating Guide state that a rebaseline should occur when the current cost and schedule baseline does not adequately represent the amount of work to be completed, causing difficulty in monitoring progress of the program. However, OMB reports all major and minor corrections to planned information on the Dashboard, including typographical fixes, as a rebaseline. More specifically, while the Dashboard allows agencies to provide reasons for baseline changes, the current version of the Dashboard, at a high level, identifies all changes to planned information as rebaselines. For example, according to the Dashboard, DOJ’s Law Enforcement Wireless Communication investment has been rebaselined four times. However, program officials stated that the program has only been rebaselined once. Similarly, the Dashboard shows that the Sequoia Platform and Integrated Management Navigation System investments at DOE have both been rebaselined four times. However, program officials stated that neither of these programs had actually been rebaselined. Rather, they stated that this number represents instances in which they made minor corrections to the data on the Dashboard. Table 4 shows the selected investments whose program officials reported a lower number of rebaselines than what was reported on the Dashboard. A primary reason why the cost and schedule ratings were not always accurate is that the cost and schedule ratings do not take current performance into consideration for many investments on the Dashboard, though it is intended to represent near real-time performance information on all major IT investments. Specifically, as of April 2010, the formula to calculate the cost ratings on the Dashboard intentionally only factored in completed portions of the investments (referred to as milestones). As such, milestones that are currently under way are not taken into account. 
Table 5 identifies each selected investment’s last completed milestone and the number of days that the Dashboard’s cost rating is out of date for each selected investment. OMB officials agreed that the ratings not factoring in current performance is an area needing improvement and said that they are planning on upgrading the Dashboard application in July 2010 to include updated cost and schedule formulas that factor in the performance of ongoing milestones; however, they have not yet made this change. One step OMB has taken toward collecting the information needed for the new formulas is that it now requires agencies to provide information on their investment milestones’ planned and actual start dates. In addition, OMB officials stated that they plan to use a previously unused data field—percent complete. These are key data points necessary to calculate the performance of ongoing milestones. Another issue with the ratings is that there were wide variations in the number of milestones agencies reported. For example, DOE’s Integrated Management Navigation System investment lists 314 milestones, whereas DOD’s Joint Precision Approach and Landing System investment lists 6. Having too many milestones may mask recent performance problems because the performance of every milestone (i.e., historical and recently completed) is equally averaged into the ratings. Specifically, investments that perform well during many previously completed milestones and then start performing poorly on a few recently completed milestones can maintain ratings that still reflect good performance. A more appropriate approach could be to give additional weight to recently completed and ongoing milestones when calculating the ratings. Too many detailed milestones also defeat the purpose of an executive-level reporting tool. 
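One way to implement the weighting approach suggested above is an exponentially decaying weight that counts recent milestones more heavily than historical ones. This is our illustrative sketch, not OMB's published calculation; the decay parameter and its value are assumptions.

```python
def weighted_rating(milestone_ratings, decay=0.8):
    """Average per-milestone ratings (oldest first, most recent last),
    weighting newer milestones more heavily (weight = decay ** age).

    decay=1.0 reproduces the equal-weight average that can mask recent
    slippage; smaller values emphasize recent performance.
    """
    n = len(milestone_ratings)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * r for w, r in zip(weights, milestone_ratings)) / sum(weights)
```

The masking effect the report describes falls out directly: an investment with ten well-performing milestones (rated 10) followed by two poor ones (rated 2) still averages about 8.7 under equal weighting, which looks healthy, whereas a decay of 0.5 pulls the rating down near 4, reflecting the recent trouble.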
Conversely, having too few milestones can limit the amount of information available to track work and rate performance and can allow agencies to skew the performance ratings. In commenting on a draft of this report, the Federal CIO stated that OMB has a new version of the Dashboard that implements updated cost and schedule calculations. He stated that the new calculations greatly increase the weight of current activities. As of July 1, 2010, this updated Dashboard had not been released. An OMB analyst subsequently told us that the agency plans to release the new version in July 2010. Additionally, OMB officials have provided us with documentation of the new calculations and demonstrated the new version of the Dashboard that will be released soon. The Federal CIO also added that OMB will consider additional changes to the ratings in the future. Table 6 demonstrates the large inconsistencies in the number of milestones reported for each selected investment. In June 2009, OMB issued guidance stating that agencies are responsible for providing quality data and, at a minimum, should provide milestones that consist of major segments of the investment, referred to as work breakdown structure level 2; OMB prefers, however, that agencies provide lower-level milestones within each segment (work breakdown structure level 3). A work breakdown structure is the cornerstone of every program because it defines in detail the work necessary to accomplish a program’s objectives. Standardizing a work breakdown structure is considered a best practice because it enables data to be collected and shared among programs and across organizations. However, certain agencies are not following OMB’s guidance and list milestones that they consider to be at work breakdown structure level 1, which are high-level milestones.
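The work breakdown structure levels discussed above can be pictured as nested layers of detail. The structure below is hypothetical (all segment and milestone names are invented) and simply shows why level 1 reporting yields a single high-level milestone while level 3 yields many.

```python
# Hypothetical WBS: level 1 is the whole investment, level 2 its major
# segments, and level 3 the milestones within each segment (the level
# OMB's guidance prefers agencies to report).
wbs = {
    "Investment": {                              # level 1
        "Requirements and design": {             # level 2
            "Draft system requirements": None,   # level 3
            "Approve architecture": None,
        },
        "Development": {                         # level 2
            "Build release 1": None,
            "Complete integration testing": None,
        },
    }
}

def milestones_at_level(tree, level, current=1):
    """Count the nodes at a given WBS level."""
    if current == level:
        return len(tree)
    return sum(milestones_at_level(child, level, current + 1)
               for child in tree.values() if isinstance(child, dict))

# Level 1 reporting yields 1 milestone; level 2 yields 2; level 3 yields 4.
```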
Specifically, of the five agencies we reviewed, officials at DOD, USDA, and DOE stated that they were reporting work breakdown structure level 1 milestones to the Dashboard for each of their selected investments. OMB officials acknowledged that not all agencies are following this guidance, but stated that OMB analysts are working with agencies to try to improve compliance. Furthermore, the guidance that OMB has provided is not clear on the level of detail that it wants agencies to report in their milestones, leaving the agencies to individually interpret OMB’s general guidance. Specifically, while OMB states that agencies should report milestones that are, at a minimum, work breakdown structure level 2, there is no commonly accepted definition among federal agencies of the level of detail that should comprise each of these levels. OMB officials acknowledged that they have not provided clear guidance, but recently stated that they have begun exploring ways to ensure more uniformity across agencies’ reporting. Specifically, in commenting on a draft of this report, the Federal CIO stated that OMB has recently chartered a working group comprised of representatives from several federal agencies, with the intention of developing clear guidance for standardizing and improving investment activity reporting. OMB and agencies acknowledge that additional improvements can be made beyond the cost and schedule ratings and have taken certain steps to try to improve the accuracy of the data. For example, OMB implemented an automated monthly data upload process and created a series of data validation rules that detect common data entry errors, such as investment milestone start dates that occur after completion dates. In addition, four of the five agencies we reviewed indicated that they have processes in place aimed at improving the accuracy of the data.
For instance, HHS has established a process wherein an official has been assigned responsibility for ensuring the Dashboard is accurately updated. Further, DOJ has developed an automated process to find missing data elements in the information to be uploaded on the Dashboard. Despite these efforts, until OMB upgrades the Dashboard application to improve the accuracy of the cost and schedule ratings to include ongoing milestones, explains the outcome of these improvements in its next annual report to Congress on the Implementation of the E-Government Act (which is a key mechanism for reporting on the implementation of the Dashboard), provides clear and consistent guidance to agencies that standardizes milestone reporting, and ensures agencies comply with the new guidance, the Dashboard’s cost and schedule ratings will likely continue to experience data accuracy issues. Officials at three of the five agencies we reviewed—DOD, DOJ, and HHS— stated that they are not using the Dashboard to manage their investments, and the other two agencies, DOE and USDA, indicated that they are using the Dashboard to manage their investments. Specifically, officials from the three agencies are not using the Dashboard to manage their investments because they have other existing means to do so: DOD officials indicated that they use the department’s Capital Planning and Investment Control process to track IT investment data—including cost and schedule. DOJ uses an internal dashboard that the office of the CIO developed that provides for more detailed management of investments than OMB’s Dashboard. HHS officials said they use a portfolio investment management tool, which they indicated provides greater insight into their investments. Officials from the other two agencies—DOE and USDA— noted that they are using the Dashboard as a management tool to supplement their existing internal processes to manage their IT investments. 
DOE officials stated that since their current process is based on a quarterly review cycle, the monthly reporting nature of the Dashboard has allowed officials to gain more frequent insight into investment performance. As a result, DOE officials said that they are able to identify potential issues before these issues present problems for investments. USDA officials stated that they use the ratings on the Dashboard to identify investments that appear to be problematic and hold meetings with the investments’ program managers to discuss corrective actions. Additionally, in OMB’s fiscal year 2009 Report to Congress on the Implementation of the E-Government Act of 2002, 11 agencies reported on how the Dashboard has increased their visibility and awareness of IT investments. For example, the Department of Veterans Affairs terminated 12 IT projects, partly because of the increased visibility that the CIO obtained from the Dashboard. OMB indicated that it is using the Dashboard to manage IT investments. Specifically, OMB analysts are using the Dashboard’s investment trend data to track changes and identify issues with investments’ performance in a timely manner and are also using the Dashboard to identify and address investment data quality issues. The Federal CIO stated that the Dashboard has greatly improved oversight capabilities compared to previously used mechanisms. He also stated that the Dashboard has increased the accountability of agencies’ CIOs and established much-needed visibility. According to OMB officials, the Dashboard is one of the key sources of information that OMB analysts use to identify investments that are experiencing performance problems and select them for a TechStat session—a review of selected IT investments between OMB and agency leadership that is led by the Federal CIO.
OMB has identified factors that may result in a TechStat session, such as policy interests, Dashboard data inconsistencies, recurring patterns of problems, or an OMB analyst’s concerns with an investment. As of June 2010, OMB officials indicated that 27 TechStat sessions had been held with federal agencies. According to OMB, this program enables the government to improve or terminate IT investments that are experiencing performance problems. OMB has taken significant steps to enhance the oversight, transparency, and accountability of federal IT investments by creating its IT Dashboard. However, the cost and schedule ratings on the Dashboard were not always accurate. Further, the rebaseline data were not always accurate. The cost and schedule inaccuracies were due, in part, to calculations of ratings that did not factor in current performance. Additionally, there were large inconsistencies in the number of milestones that agencies report on the Dashboard because OMB has not fully defined the level of detail that federal agencies should use to populate the Dashboard and several selected agencies decided not to follow OMB’s general guidance. Moreover, the performance of historical and recently completed milestones is averaged equally in the cost and schedule ratings, which is counter to OMB’s goal to report near real-time performance on the Dashboard. While the use of the Dashboard as a management tool varies, OMB has efforts under way to include the performance of ongoing milestones, and its officials acknowledge that additional improvements are needed. Nevertheless, until OMB explains in its next annual Implementation of the E-Government Act report how the upgrade to the Dashboard application has improved the accuracy of the cost and schedule ratings, and provides clear and consistent guidance that enables agencies to report standardized information on their milestones, the accuracy of the data on the Dashboard may continue to be in question.
To better ensure that the IT Dashboard provides meaningful ratings and accurate investment data, we are recommending that the Director of OMB take the following two actions: include in its next annual Implementation of the E-Government Act report the effect of planned formula changes on the accuracy of data; and develop and issue clear guidance that standardizes milestone reporting on the Dashboard. In addition, we are recommending that the Secretaries of the Departments of Agriculture, Defense, and Energy direct their Chief Information Officers to ensure that they comply with OMB’s guidance on standardized milestone reporting, once it is available. We received written comments on a draft of this report from the Federal CIO and DOE’s Associate CIO for IT Planning, Architecture, and E-Government. Letters from these agencies are reprinted in appendixes III and IV. In addition, we received technical comments via e-mail from a Coordinator at HHS, which we incorporated where appropriate. Further, the Deputy CIO from USDA, the Principal Director to the Deputy Assistant Secretary of Defense for Resources from DOD, and an Audit Liaison Specialist from DOJ indicated via e-mail that they had reviewed the draft report and did not have any comments. In commenting on our draft report, which contained four recommendations to the OMB Director, the Federal CIO stated that he agreed with two recommendations and disagreed with two because of actions OMB has recently taken. After reviewing these actions, we agreed that they addressed our concerns and are not making those two recommendations. OMB agreed with our recommendation that it include in its next annual Implementation of the E-Government Act report how the planned formula changes have improved the accuracy of data. OMB also agreed with our recommendation that it develop and issue clear guidance that standardizes milestone reporting on the Dashboard.
Additionally, the Federal CIO asked that we update the report to reflect that they have recently chartered a working group comprised of representatives from several federal agencies, with the intention of developing clear guidance for standardizing and improving investment activity reporting. We have incorporated this additional information into the report. In response to our draft recommendation that OMB revise the IT Dashboard and its guidance so that only major changes to investments are considered to be rebaselines, OMB provided us with its new guidance on managing IT baselines, which was issued on June 28, 2010. The guidance, among other things, describes when agencies should report baseline changes on the Dashboard. OMB also provided documentation of the specific modifications that will be made in an upcoming release of the Dashboard to improve the way baseline changes are displayed. We agree that these recent changes address our recommendation. As such, we updated the report to acknowledge and include this additional information, where appropriate. Regarding our recommendation that OMB consider weighing recently completed and ongoing milestones more heavily than historical milestones in the cost and schedule ratings, the Federal CIO stated that OMB has a new version of the Dashboard that implements updated cost and schedule calculations. He stated that the new calculations greatly increase the weight of current activities. As previously stated, as of July 1, 2010, this updated Dashboard had not been released. An OMB analyst subsequently told us that the agency plans to release the new version in July 2010. Additionally, OMB officials have provided us with documentation of the new calculations and demonstrated the new version of the Dashboard that will be released soon. The Federal CIO also added that OMB will consider additional changes to the ratings in the future. We agree that these recent changes address our recommendation. 
As such, we updated the report to acknowledge and include this additional information, where appropriate. Additionally, OMB will report on the effect of the upcoming changes to the calculations in its next annual Implementation of the E-Government Act report. OMB also provided additional comments, which we address in appendix III. In DOE’s comments on our draft report, the Associate CIO for IT Planning, Architecture, and E-Government indicated that she agreed with our assessment of the implementation of the IT Dashboard across federal agencies and with the recommendations presented to OMB. Additionally, in response to our recommendation that the CIO of DOE comply with OMB guidance on milestone reporting once it is available, the Associate CIO stated that once OMB releases the additional guidance, DOE officials will work to ensure the appropriate level of detail is reported on the Dashboard. DOE also provided an additional comment, which we address in appendix IV. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Director of the Office of Management and Budget; the Secretaries of the Departments of Agriculture, Defense, Energy, Health and Human Services, and Justice; and other interested parties. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
Our objectives were to (1) examine the accuracy of the cost and schedule performance ratings on the Dashboard for selected investments and (2) determine whether the data on the Dashboard are used as a management tool to make improvements to IT investments. To address both objectives, we selected five agencies and ten investments to review. To select these agencies and investments, we first identified ten agencies with large IT budgets as reported in the Office of Management and Budget’s (OMB) fiscal year 2010 Exhibit 53. We then identified the five largest investments at each of the ten agencies, according to the fiscal year 2010 budget, that were spending more than half of their budget on IT development, modernization, and enhancement work, and were primarily carried out by contractors. In narrowing the list to five agencies and ten total investments, we considered several factors to ensure there were two viable investments at each agency: The investment is not part of our ongoing audit work related to cost, schedule, and technical performance. The investment is not part of a recent governmentwide earned value management review. The investment has not been highlighted as an investment needing significant attention. The collective list of investments creates a balance of investment sizes to include both larger and smaller investments. The five agencies are: the Departments of Agriculture (USDA), Defense (DOD), Energy (DOE), Health and Human Services (HHS), and Justice (DOJ). The ten investments are: USDA’s Financial Management Modernization Initiative and Risk Management Agency-13 Program; DOD’s Joint Precision Approach and Landing System and Maneuver Control System; DOE’s Integrated Management Navigation System and Sequoia Platform; HHS’s BioSense Program and Electronic Research Administration System; DOJ’s Law Enforcement Wireless Communication and Unified Financial Management System (see appendix II for descriptions of each investment). 
To address the first objective, we evaluated earned value data for the selected investments to determine their cost and schedule performance and compared the results to the ratings on the Dashboard. The investment earned value data were contained in contractor earned value management performance reports obtained from the programs. To perform this analysis, we compared each investment’s cumulative cost variance for each month from July 2009 through January 2010 to the cost variance reported on the Dashboard for those months. Similarly, we calculated the number of months each investment was ahead of or behind schedule over the same period and compared the results to the Dashboard. We also assessed 13 months of investment data to analyze trends in cost and schedule performance. To further assess the accuracy of the cost data, we compared the data with other available supporting program documents, including monthly and quarterly investment program management reports; electronically tested the data to identify obvious problems with completeness or accuracy; and interviewed agency and program officials about the data and earned value management systems. For the purposes of this report, we determined that the cost data at eight of the investments were sufficiently reliable to use for our assessment. For the two remaining investments, we determined that, based on their methods of earned value management, the data would not allow us to sufficiently assess and rate monthly investment performance. We did not test the adequacy of the agency or contractor cost-accounting systems. Our evaluation of these cost data was based on the documentation the agencies provided. We also reviewed and analyzed OMB’s and the selected agencies’ processes for populating and updating the Dashboard.
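The earned value comparison described above can be sketched as follows. The cost variance formula, (BCWP - ACWP) / BCWP, is the standard earned value calculation (BCWP is budgeted cost of work performed, ACWP is actual cost of work performed); the rating bands are assumptions modeled loosely on the variance ranges discussed in this report, not OMB’s published cutoffs.

```python
def cost_variance_pct(bcwp, acwp):
    """Cumulative cost variance as a percent of earned value:
    (BCWP - ACWP) / BCWP * 100. Negative values mean a cost overrun."""
    return (bcwp - acwp) / bcwp * 100.0

def variance_band(cv_pct):
    """Map a cost variance to a coarse rating band. The thresholds are
    illustrative assumptions, not OMB's actual cutoffs."""
    magnitude = abs(cv_pct)
    if magnitude < 5:
        return "green"    # normal performance
    if magnitude < 10:
        return "yellow"   # needs attention
    return "red"          # significant concerns

# A program that earned $9.0 million of planned work at an actual cost
# of $10.0 million is about 11 percent over cost:
cv = cost_variance_pct(9.0, 10.0)
band = variance_band(cv)  # "red"
```

Comparing a band computed this way from contractor reports against the Dashboard’s posted rating is, in essence, the month-by-month check described above.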
Additionally, we interviewed officials from OMB and the selected agencies and reviewed OMB guidance to obtain additional information on OMB’s and agencies’ efforts to ensure the accuracy of the investment performance data and cost and schedule performance ratings on the Dashboard. We used the information provided by OMB and agency officials to identify the factors contributing to inaccurate cost and schedule performance ratings on the Dashboard. Moreover, to examine the accuracy of the rebaseline information on the Dashboard, we interviewed agency and program officials about the number of rebaselines each investment had undergone and compared these data with the rebaseline information listed on the Dashboard. To address our second objective, we analyzed related agency documentation to assess what policies or procedures the agencies have implemented for using the data on the Dashboard to make management decisions. We also interviewed agency and program officials regarding the extent to which they use the data on the Dashboard as a management tool. Additionally, we attended one of OMB’s TechStat sessions, which are reviews of selected IT investments between OMB and agencies. We conducted this performance audit from January to July 2010 at the selected agencies’ offices in the Washington, D.C., metropolitan area. Our work was done in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Below are descriptions of each of the selected investments that are included in this review. The Financial Management Modernization Initiative is USDA’s financial management system modernization program.
It is intended to be the central financial system for USDA and is to consolidate the current financial management system environment from 19 legacy systems into one Web-based system. USDA’s Risk Management Agency-13 program is intended to support the reengineering of all business systems associated with the crop insurance program and provide a central financial system that will deliver Web-based tools and applications for accessing Risk Management Agency data. DOD’s Joint Precision Approach and Landing System investment is intended to provide a precision approach and landing capability for all DOD ground and airborne systems. It is intended to enable U.S. forces to safely land aircraft on any suitable surface worldwide (land and sea), with ceiling and/or visibility the limiting factor. DOD’s Maneuver Control System investment is intended to provide, among other things, the warfighter environment and collaborative and situational awareness tools used to support executive decision making, planning, rehearsal, and execution management. This system is to be used throughout the Army to provide a common view of critical information. DOE’s Integrated Management Navigation System consists of five major projects and is intended to standardize and integrate accounting, data warehouse, human resource, procurement, and budget processes throughout DOE. The Integrated Management Navigation System incorporates enterprisewide projects from DOE’s Office of the Chief Financial Officer, Office of Human Capital Management, and Office of Management. DOE’s Sequoia Platform is a supercomputer being developed for use by three weapons laboratories—Los Alamos, Lawrence Livermore, and Sandia National Laboratories—to contribute dramatically to the national security enterprise. This supercomputer will also be used in maintaining the nuclear deterrent and in the areas of nonproliferation, nuclear counterterrorism, and support to the intelligence community.
HHS’s BioSense program is intended to improve the nation’s capabilities for disease detection, monitoring, and near real-time health situational awareness by creating a system that uses data from existing health-related databases to identify patterns of disease symptoms prior to specific diagnoses. HHS’s Electronic Research Administration program is the National Institutes of Health’s system for conducting interactive electronic transactions for the receipt, review, monitoring, and administration of grant awards to biomedical investigators worldwide. It is also intended to provide the technology capabilities for the agency to efficiently and effectively perform grants administration functions. DOJ’s Law Enforcement Wireless Communication System, also known as the Integrated Wireless Network, is to support the replacement and modernization of failing radio systems and achieve communication standards at DOJ’s law enforcement agencies. This program is intended to provide all four law enforcement components with a shared unified radio network, which should eliminate redundant coverage and duplicative radio sites, while providing efficient and comparable coverage. DOJ’s Unified Financial Management System is to improve the existing and future financial management and procurement operations across DOJ. Upon full implementation, the Unified Financial Management System will replace five financial management systems and multiple procurement systems with an integrated commercial off-the-shelf solution. This is to streamline and standardize business processes and procedures across the DOJ components. Table 7 provides additional details for each of the selected investments in our review. The following is GAO’s response to the Office of Management and Budget’s (OMB) additional comments. 1. 
We agree that the Dashboard has increased transparency, accountability, and oversight; therefore, we updated the report to discuss additional uses of the Dashboard, such as the use of trend data, improved oversight capabilities, and enhancements to agencies’ investment management processes. We also updated the number of TechStat sessions that have taken place. 2. While additional data quality issues need to be addressed in the Dashboard, we agree that the Dashboard is an improvement over OMB’s previous oversight tools, such as the Management Watch List and High Risk List. As such, we modified the report to highlight these improvements. For example, we added to the report that the Dashboard’s monthly reporting cycle is a significant improvement in the quality of the data over the Management Watch List and High Risk List, which were updated on a quarterly basis. 3. As stated in the report, we found that the ratings were not always accurate. We based this characterization on the fact that there were several instances in which the ratings were inconsistent with the performance indicated in our analysis of the investments’ earned value management (EVM) reports and were notably different (e.g., ratings of “green” versus “yellow”). We agree that EVM data generally only cover the contracted development parts of the investments. As such, as part of our methodology, we specifically selected investments where the majority of each investment was focused on development efforts (versus operations) and was primarily carried out by contractors. Therefore, we maintain that the comparison between the selected investments’ Dashboard ratings and the performance indicated in their EVM reports is a fair assessment. 4. We acknowledge that the quality of EVM reports can vary.
As such, we took steps to ensure that the EVM reports we used were reliable enough to evaluate the ratings on the Dashboard, and as OMB’s comments indicate, we discounted two of the ten selected investments after determining that their data were insufficient for our needs. We do not state that OMB should base its ratings solely on EVM data. 5. We agree that the original cost and schedule calculations are performing as planned (i.e., are not defective), and we further clarified this point in the report. We also note that planned changes to the rating calculations will incorporate current performance. However, these calculations, as originally planned and implemented, do not factor in the performance of ongoing milestones, which we and OMB agree is an area for improvement. 6. We agree that the severity of the discrepancies was not always dramatic. However, 4 of the 8 investments had notable discrepancies on either their cost or schedule ratings. Specifically, as demonstrated in the report, there were multiple instances in which the ratings were discrepant enough to change the color of the ratings. The difference between a “green” rating (i.e., normal performance) and a “yellow” rating (i.e., needs attention) determines whether an investment is flagged as needing attention, which we believe is an important point to highlight. 7. We agree that agencies have a responsibility to provide quality milestone data; however, we maintain that OMB’s existing guidance on which milestones to report is too general for agencies to ensure they are reporting consistently. OMB acknowledges that this is an area for improvement and has established a working group to address this issue. 8. As previously discussed, on June 28, 2010, OMB issued its new guidance on managing IT baselines. This guidance, among other things, describes when agencies should report baseline changes to the Dashboard.
Officials also provided information on the upcoming release of the Dashboard—planned for July 2010—that will change the way baseline changes are displayed. We agree that these recent changes address the issues we identified. 9. We acknowledge that the Dashboard has made significant improvements to oversight and transparency, in comparison to OMB’s previous methods of overseeing IT investments, and we have added additional information to the background of the report to highlight this point. The following is GAO’s response to the Department of Energy’s (DOE) additional comment. OMB’s guidance required agencies to provide data at one consistent work breakdown structure level, rather than a mix of multiple levels. OMB and others confirmed that agencies were able to transmit milestones at a single consistent level. For this report, we observed agencies uploading at levels 1 through 4 and, thus, disagree that agencies were unable to transmit milestones lower than level 1. In addition to the contact name above, the following staff also made key contributions to this report: Shannin O’Neill, Assistant Director; Carol Cha; Eric Costello; Rebecca Eyler; Emily Longcore; Bradley Roach; and Kevin Walsh.

Federal IT spending has risen to an estimated $79 billion for fiscal year 2011. To improve transparency and oversight of this spending, in June 2009 the Office of Management and Budget (OMB) deployed a public website, known as the IT Dashboard, which provides information on federal agencies’ major IT investments, including assessments of actual performance against cost and schedule targets (referred to as ratings). According to OMB, these data are intended to provide both a near real-time and historical perspective of the performance of these investments.
GAO was asked to (1) examine the accuracy of the cost and schedule performance ratings on the Dashboard for selected investments and (2) determine whether the data on the Dashboard are used as a management tool to make improvements to IT investments. To do so, GAO selected 8 major investments from 5 agencies with large IT budgets, compared its analyses of the selected investments’ performance to the ratings on the Dashboard, and interviewed agency officials about their use of the Dashboard to manage investments. The cost and schedule ratings on OMB’s Dashboard were not always accurate for the selected investments. GAO found that 4 of the 8 selected investments had notable discrepancies on either their cost or schedule ratings. For example, the Dashboard indicated that one investment had a cost variance of less than 5 percent every month from July 2009 through January 2010, while GAO’s analysis shows that the investment’s cost variance from December 2009 through January 2010 was between 10 percent and less than 15 percent. Additionally, another investment reported on the Dashboard that it had been less than 30 days behind schedule since July 2009; however, investment data GAO examined showed that from September to December 2009 it was behind schedule by 30 or more days but less than 90 days. A primary reason for the data inaccuracies was that, while the Dashboard was intended to represent near real-time performance information, the cost and schedule ratings did not take current performance into consideration. As a result, the ratings were based on outdated information. For example, cost ratings for each of the investments were based on data between 2 months and almost 2 years old. As of July 1, 2010, OMB planned to release later that month an updated version of the Dashboard with ratings that factor in the performance of ongoing milestones.
Another issue with the ratings was the wide variation in the number of milestones agencies reported, which was partly because OMB's guidance to agencies was too general. Having too many milestones can mask recent performance problems because the performance of every milestone (dated and recent) is equally averaged into the ratings. Specifically, investments that perform well on many previously completed milestones and then start performing poorly on a few recently completed milestones can maintain ratings that still reflect good performance. Conversely, having too few milestones limits the amount of information available to rate performance and allows agencies to potentially skew the ratings. OMB officials stated that they have recently chartered a working group with the intention of developing guidance for standardizing milestone reporting. However, until such guidance is available, the ratings may continue to have accuracy issues. Officials at three of the five agencies stated they were not using the Dashboard to manage their investments because they maintained that they already had existing means to do so; officials at the other two agencies indicated that they were using the Dashboard to supplement their existing management processes. OMB officials indicated that they relied on the Dashboard as a management tool, including using the Dashboard's investment trend data to identify and address issues with investments' performance. According to OMB officials, the Dashboard was one of the key sources of information that they used to determine if an investment required additional oversight. In addition, the Federal Chief Information Officer (CIO) stated that the Dashboard has greatly improved oversight capabilities compared to previously used mechanisms. He also stated that the Dashboard has increased the accountability of agencies' CIOs and established much-needed visibility.
GAO recommends that OMB report on its planned changes to the Dashboard to improve the accuracy of performance information and provide guidance to agencies that standardizes milestone reporting. OMB agreed with these recommendations, but disagreed with aspects of the draft report that GAO addressed, as appropriate. |
EO 13166, Improving Access to Services for Persons with Limited English Proficiency, requires that all federal agencies take reasonable steps to ensure meaningful access to their programs and services for people with LEP. DOJ has issued guidance that spells out four factors agencies need to consider in determining whether they are taking reasonable steps in this regard: (1) the number or proportion of LEP persons in the eligible service population; (2) the frequency with which LEP individuals come in contact with the program; (3) the importance of the services provided by the program; and (4) the resources available. Reasonable steps to ensure meaningful access could include developing language access services and guidance for implementing these services. The EO required that each federal agency create a plan outlining steps it will take consistent with DOJ guidance to ensure meaningful access to its services by LEP individuals. The EO did not require DOJ to evaluate the plans or to monitor the implementation of these plans. In 2002, in response to EO 13166, VA created and implemented VHA Directive 2002-006. It provided a framework for VA medical centers to assess and determine if there was a need to develop language assistance policies and language access services. The medical centers retain flexibility to determine exactly how they will comply with the EO, but they must do so in accordance with the four factors as outlined by DOJ and reiterated in the LEP Directive. In February 2007, VA issued VHA Directive 2007-009, which renewed VA’s guidance on language assistance policies. In its LEP Directive, VA outlined the steps that constitute an effective language assistance program at its medical centers, including an assessment of the language needs of the veteran population served and identification of the non-English languages encountered by medical center staff. 
VA’s LEP Directive also indicated that if a VA medical center identifies a specific language need among its veteran service population, the medical center should develop and implement a language assistance policy to ensure meaningful communication. According to the directive, the policy should describe how the medical center plans to provide language access services and ensure that all veterans receive meaningful access to VA health care services, regardless of the veterans’ level of English proficiency. VA’s LEP Directive also provided medical centers with examples of ways to provide language access services—including translating written materials, hiring bilingual staff, and contracting with interpreter services. VA’s Equal Employment Opportunity (EEO) office is responsible for overseeing the implementation of VA’s LEP Directive. As such, this office conducted surveys of VA medical centers to assess whether medical center officials were following the steps outlined in the LEP Directive, such as conducting an assessment of language needs, or otherwise taking reasonable steps to provide language access services for veterans being served. VA’s Center for Minority Veterans (CMV) is also involved in helping medical centers meet the language and cultural needs of the veteran population. CMV is responsible for ensuring that eligible minority veterans receive VA benefits and services. Culturally appropriate health care is care that is respectful of and responsive to the cultural needs of patients. According to HHS, providing culturally appropriate services to culturally diverse patients has the potential to improve access to care, quality of care, and, ultimately, health outcomes. HHS has published a set of standards for all medical facilities regarding the delivery of culturally appropriate care. Other national organizations also recognize the importance of culturally appropriate care and have established standards or recommendations for its provision.
For example, the Joint Commission has standards related to culturally appropriate health care that must be met by hospitals, including VA medical centers, to receive accreditation. VA medical centers are implementing VA’s LEP Directive by assessing the language needs of their veteran service populations and, if necessary, developing language assistance policies. VA medical centers and facilities have offered language access services that include providing translated materials and interpretation services to meet the needs of veterans with LEP. VA medical center officials reported a low utilization of these language access services. However, VA and medical center officials told us that they expect the demand for language access services to grow as the increasingly diverse servicemember population transitions to veteran status. VA stated that by June 2007 all of its medical centers had taken actions to implement the guidance in VA’s LEP Directive. According to VA, all of its medical centers have assessed the language needs of their veteran service populations and, as necessary, developed language assistance policies. Our visits to three VA medical centers and in-depth telephone interviews with staff at three VA medical centers provided a more detailed account of the variety of language access services being offered at VA medical centers. VA first surveyed each of VA’s medical center directors in December 2005 to assess if medical centers were following the guidance in VA’s LEP Directive. The survey contained 10 “yes” or “no” questions to gauge the extent of medical centers’ efforts to implement the LEP Directive. The questions ranged from issues such as overall language assistance policies to efforts to provide language access services. If a “no” response was provided for any question, the medical center directors completing the survey were instructed to indicate a tentative date by which they would take action to address the item.
VA required medical center directors and VISN directors to ensure that the responses for individual medical centers were completed. However, VA did not require that VISN or medical center directors provide documentation to support their “yes” responses to the survey. The results of the 2005 survey showed that 65 percent of VA medical centers had assessed the language needs of their veteran service population and that 60 percent of the centers had developed a language assistance policy. While completing the survey, VA medical center directors reported information about other medical center efforts to meet the needs of LEP veterans, including efforts to translate documents and hire bilingual interpreters. For example, 87 percent of VA medical center directors reported establishing a list of staff available for interpretation services, and 24 percent of VA medical centers had translated written documents into languages other than English. After conducting its initial survey, VA took several steps to help medical centers improve their efforts to implement the LEP Directive, according to a VA official. VA staff made follow-up calls to VA officials from the medical centers that did not respond to the survey or that were identified by the survey as not conducting efforts consistent with the LEP Directive. During these follow-up efforts, VA staff offered guidance to medical center officials on conducting language needs assessments and developing language assistance policies in ways that were consistent with the LEP Directive. According to officials we interviewed at two VA medical centers, the guidance was helpful in their facilities’ assessment of language needs among their service population and development of language assistance policies. According to VA, the follow-up efforts proved successful, as all medical centers reported that they had assessed the language needs of their veteran service population, and, as necessary, developed language assistance policies.
In July 2007, VA reported that as a result of its follow-up efforts, all of VA’s medical centers, in accordance with the LEP Directive, had assessed the language needs of their veteran service population and developed language assistance policies as needed. VA concluded that because of the progress and efforts made by its medical centers to implement the LEP Directive, VA would not conduct any additional evaluations of medical center implementation of the LEP Directive. Instead, VA said it would rely on the medical centers to monitor their own LEP language access needs and programs. VA medical centers and other VA facilities have access to a variety of translation services. At the national level, VA has translated its widely distributed benefits publication into Spanish and makes information from this publication available in Spanish on its Web site. All VA medical centers have computer software that offers medical treatment consent forms in Spanish and additional software that allows VA staff to access patient education materials in several languages other than English, such as Spanish and Korean. The VA medical centers and facilities included in our in-depth review also provide translated materials to meet the various language needs of their veteran service populations. We found that all six medical centers in our in-depth review translate written materials on their own. For example, staff at one medical center we interviewed told us that they translated educational materials on traumatic brain injuries into Spanish. However, staff at the medical centers reported that they primarily rely on publicly available translated documents rather than translating written materials on their own because of the cost of independently translating documents. The sources of these publicly available materials range from other federal agencies to results of an Internet search.
For example, according to VA officials, patient educators at some medical centers use patient education materials on a range of topics including heart disease and diabetes that have been translated into Spanish by HHS’s Food and Drug Administration. VA medical center staff can also use materials translated by staff at other medical centers. For example, staff at one medical facility we reviewed reported that the VA medical center located in San Juan, Puerto Rico, has shared patient education materials they have translated into Spanish with other VA medical centers. Medical center staff we interviewed also reported using professional groups within VA, such as EEO managers or patient educators, to identify and share existing translated materials. However, these groups are limited in their membership and, as such, might not be aware of all translated materials available at VA medical centers. Additionally, VA medical facilities included in our review generally offer translated materials specific to the services they provide, when needed. For example, one Vet Center we reviewed translated a pamphlet on post-traumatic stress disorder into Spanish for its largely Hispanic veteran service population. As part of language access services, VA medical centers we reviewed in depth provide language interpretation services to help address the language needs of veterans with LEP. Staff we interviewed at all six medical centers we reviewed had the ability to provide interpretation services to veterans with LEP and were doing so in several different ways. For example, staff members at all six of these medical centers maintained a list of bilingual medical center staff who volunteered to provide interpretation services during a clinical encounter between a provider and a veteran with LEP. Medical center staff primarily used people from this list to provide needed interpretation services. 
In addition, staff at five of the six VA medical centers had contract telephone interpretation services available as a means to help effectively communicate with veterans and their families with LEP. Moreover, two of the three medical centers we visited advertised within the medical center, in languages other than English, the availability of language interpretation services to veterans and their families with LEP. In these medical centers, we observed signs posted near entrances and elevators that advertised, in multiple languages, free language interpretation services for veterans and their family members. In addition to efforts made by VA’s medical centers to provide language access services, some of VA’s Vet Centers have also made efforts to provide language access services to ensure that veterans with LEP have meaningful access to counseling and other services. Vet Centers provide language access services to veterans’ family members with LEP to ensure that they are able to participate in counseling sessions, such as marital and family counseling. For example, at one Vet Center we visited, the entire staff was bilingual to help accommodate the needs of its mostly Hispanic veteran service population. In cases where bilingual staff were not available, four of the five Vet Centers where we conducted interviews had agreements with the local VA medical center to access its list of bilingual staff available for interpretation services. Officials at the VA medical centers and facilities included in our in-depth review reported that veterans seldom use VA’s language access services. For example, officials and staff we interviewed from five of the six medical centers in our review stated that their facility had a contract in place for telephone interpretation services but only one medical center reported ever utilizing these services. Staff at the medical center that reported utilizing the interpretation service stated that the use was infrequent. 
Moreover, staff we interviewed at the six VA medical centers we reviewed reported that most veterans speak English and staff at one medical center reported that veterans prefer to receive written materials in English. Staff at one medical center told us that they stopped routinely offering translated materials after veterans—for whom English was not their primary language—stated their preference for materials in English. However, translated documents were made available upon request. Despite the low utilization of interpretation services, such as the use of a contracted telephone interpretation service, officials at all six medical centers in our in-depth review reported using bilingual staff to serve as volunteer interpreters when needed. In addition, in our review of 17 other medical centers’ language needs assessments, officials from one medical center volunteered that they had used telephone interpretation services four times in the 2 years prior to our request in July 2007, while another medical center volunteered in its assessment that veterans at the facility never used the facility’s contracted telephone interpretation service. VA medical center officials told us that they expect the demand for language access services to grow as the increasingly diverse servicemember population transitions to veteran status. The servicemember population is more diverse—in terms of race and ethnicity—than the current veteran population. VA officials we interviewed projected that the increased diversity of the military servicemember population will directly translate to an increased level of diversity in the veteran population as these servicemembers end their military careers and become veterans who may be eligible for VA health care services. Staff from several VA facilities told us that they have recently witnessed demographic changes in their service population.
For example, two Vet Centers we visited told us that they have experienced an increase in the number of veterans and family members needing language access services in Spanish to facilitate marital and family counseling sessions. In an effort to address the cultural differences represented in their veteran service populations, VA medical centers have conducted training programs to increase staff awareness about cultural diversity and the need for culturally appropriate health care services. Additionally, VA medical centers and facilities tailored a variety of health care services to different segments of the veteran population and promoted the availability of culturally appropriate health care services by targeting outreach efforts to different segments of the veteran population. VA medical centers have provided a variety of training programs for staff both to raise cultural awareness and to assist medical center staff in providing culturally appropriate health care services. According to VA medical center officials we interviewed, medical center staff are required to annually complete one mandatory VA-developed training course on the health care needs of veterans of various age groups. The six VA medical centers we reviewed have offered training to help staff understand cultural diversity as well as appreciate the need for culturally appropriate health care. These training efforts included locally developed training on diversity given to new staff during orientation and on-line diversity training that is available to all staff. One of the six medical centers we reviewed in depth also developed training to help staff better understand what it was like for a veteran in general to serve in the military, as well as what it was like for a veteran who served during a particular military service era, such as the Vietnam War.
The training materials also provided information on the types of medical diagnoses that may be related to a veteran’s service, such as exposure to environmental hazards. Additionally, individual medical centers developed programs designed to increase awareness of veteran diversity and different cultural practices. For example, four VA medical centers we reviewed reported using celebrations and events in conjunction with heritage months (e.g., African American Heritage Month and Women’s History Month) as educational opportunities to increase medical center staff awareness of veteran cultures and diversity. Programs included speakers, cultural fairs, and presentations open to staff and veterans at the individual VA medical centers. VA medical centers and facilities have provided numerous health care services designed to meet the needs of the culturally diverse veteran population that differs in terms of race, ethnicity, sex, as well as age. According to VA officials, these services have varied across VA medical centers, CBOCs, and Vet Centers, depending on the needs of the veteran populations served. During our in-depth review of 16 VA medical centers and facilities, officials identified a number of health care services that are provided in a culturally appropriate manner: Two medical centers and one Vet Center offer spiritual services, which include the use of medicine men and traditional healing rituals, in order to meet the needs of Native American veterans. Three medical centers and one CBOC have increased the use of modern technology, such as text-messaging appointment reminders, to communicate more effectively with younger veterans, who are typically accustomed to such means of communication. One Vet Center offered a counseling group exclusively for African American veterans and one Vet Center offered counseling groups for women veterans. 
According to staff we interviewed, services tailored to different segments of the population are often designed using information gained from specific veteran requests, veteran focus groups, or recommendations of special-emphasis population groups. To facilitate the delivery of culturally appropriate health care services, all VA medical centers have a minority veterans program coordinator. The role of the minority veterans program coordinator is to identify barriers to health care and advise medical center officials in developing services to make health care more accessible and culturally appropriate for minority veteran populations. Minority veterans program coordinators also work directly with minority veterans in an effort to facilitate access to and use of VA health care services. To promote the availability of culturally appropriate care, the six VA medical centers included in our in-depth review have implemented a variety of targeted outreach efforts to different veteran populations. For example, officials at two of the six medical centers we reviewed reported working closely with military and National Guard bases located near the medical center to increase awareness of VA health benefits among younger veterans and their families. According to VA staff, these outreach efforts helped younger veterans understand that VA was not just “their grandfather’s VA” and that VA medical centers serve veterans from all military conflicts. At one medical center, officials we interviewed reported outreach efforts to help Hispanic, younger, and female veterans recognize when they might need medical services, for example, treatment for post-traumatic stress disorder or depression. These outreach efforts included participating in community health fairs and ceremonies held to welcome home servicemembers from the combat theaters. VA staff said they tailored these efforts to different communities, and staff at one medical center reported including materials in Spanish.
VA reviewed a draft of this report and sent us comments by email. VA agreed with the information presented as it pertained to VA. In commenting on the development of resources and education to help facilitate the delivery of culturally competent care, VA noted that there are different solutions based on local needs and that it supports a multimodality strategy as opposed to a “one module fits all” approach. We agree, and as we discussed in our report, VA medical facilities do conduct training for staff and tailor health care services in an effort to address the differing needs for culturally appropriate health care services in particular locations. These efforts and services are often locally developed in response to the characteristics and needs of the veteran population served. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Veterans Affairs. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made major contributions to this report are listed in appendix I. In addition to the contact above, Marcia Mann, Assistant Director; Melanie Anne Egorin; Krister Friday; Adrienne Griffin; Samantha Poppe; and James Walker made contributions to this report. | The Department of Veterans Affairs (VA) faces challenges in bridging language and cultural barriers as it seeks to provide quality health care services to an increasingly diverse veteran population in terms of race, ethnicity, sex, and age.
To meet the needs of veterans with limited English proficiency (LEP), VA issued an LEP Directive that provides guidance for medical centers in assessing language needs and, if needed, developing language access services designed to ensure effective communication between English-speaking providers and those with LEP. In addition, VA is also challenged to deliver health care services in ways that are culturally appropriate--that is, respectful of and responsive to the cultural values of a diverse veteran population. In light of these challenges, GAO was asked to discuss the (1) actions VA has taken to implement its LEP Directive and the status of veterans' utilization of language access services, and (2) efforts VA has made to provide culturally appropriate health care services. GAO reviewed VA's policies and the LEP Directive, interviewed VA officials and reviewed efforts by 6 VA medical centers and 10 other VA facilities to implement VA's LEP Directive and to provide culturally appropriate health care services. GAO also reviewed documents from 17 other VA medical centers related to implementation of the LEP Directive. VA reported that as of June 2007, all of its medical centers had taken action to implement the guidance in VA's LEP Directive. Specifically, medical center officials told VA that they had assessed the language needs of their veteran service populations, and, if necessary, developed language assistance policies and offered language access services, including providing translated materials and interpretation services. The VA medical centers GAO reviewed provided translated materials to meet the various language needs of their veteran service populations and offered interpretation services as well. For example, VA medical centers maintained a list of bilingual medical center staff who can provide interpretation services during a clinical encounter between a provider and a veteran with LEP. 
In addition, five of the six VA medical centers GAO reviewed can access telephone interpretation services that are provided through a contract to help ensure that medical staff can communicate with veterans and their families with LEP. According to officials at medical centers GAO reviewed, utilization of language access services is low. However, VA officials told GAO that they expect the demand for language access services to grow as the increasingly diverse military servicemember population transitions to veteran status. VA medical centers are addressing the need for culturally appropriate health care services through staff training and tailoring health care services. Medical centers provide training for medical center staff to facilitate the delivery of culturally appropriate health care services including an annual mandatory training on the health care needs of veterans in various age groups. VA medical centers and other VA facilities GAO reviewed have implemented a variety of measures to meet the needs of their culturally diverse veteran populations. For example, three VA facilities GAO reviewed offer spiritual services, such as the use of medicine men and traditional healing rituals, in order to meet the needs of Native American veterans. Also, VA has minority veterans program coordinators at each medical center to identify barriers to health care for minorities and advise medical center officials in developing services to make health care more accessible and culturally appropriate for minority veteran populations. VA medical centers GAO reviewed have also initiated outreach efforts to promote the availability of culturally appropriate care. In commenting on a draft of this report, VA stated that it agreed with the information presented as it pertained to VA. |
The basic goal of the elections system in the United States is straightforward: All eligible persons, but only eligible persons, should be able to cast their votes and, if such votes have been properly cast by the voters, have those votes counted accurately. Faith in the fairness and accuracy of the U.S. election system is at the foundation of our democracy. Reports of problems encountered in the close 2000 presidential election with respect to voter registration lists, absentee ballots, ballot counting, and antiquated voting equipment raised concerns about the fairness and accuracy of certain aspects of the U.S. election system. After the events surrounding the November 2000 general election, the Help America Vote Act of 2002 (HAVA) was enacted and major election reforms are now being implemented. The November 2004 general election highlighted some of the same challenges as in 2000 as well as some new challenges in areas such as electronic voting technology and implementation of some HAVA requirements. The issues that arose in both elections highlighted the importance of the effective interaction of people, processes, and technology in ensuring effective election operations and maintaining public confidence that our election system works. Since 2001, GAO has issued a series of reports covering aspects of the election process primarily with respect to federal elections. This report focuses on changes to such election processes in the United States and on the November 2004 general election. Specifically, primarily with respect to federal elections, our objectives were to examine each major stage of the election process to (1) identify changes to election systems since the 2000 election, including steps taken to implement HAVA, and (2) describe the issues and challenges encountered by election officials in the November 2004 election. Election authority is shared by federal, state, and local officials in the United States.
Congressional authority to affect the administration of elections derives from various constitutional sources, depending upon the type of election. Congress has passed legislation in several major areas of the voting process. For example, the National Voter Registration Act of 1993 (NVRA) expanded the opportunities for citizens to register to vote for federal elections by, among other things, requiring most states to accept registration applications for federal elections by mail, at state motor vehicle agencies (MVAs), and at certain other state agencies. The act also requires that in the administration of elections for federal office, states are to take certain steps to accurately maintain voter registration lists, and it limits the circumstances for removing names from voter lists. The Uniformed and Overseas Citizens Absentee Voting Act of 1986 (UOCAVA) requires states to, among other things, permit uniformed services voters absent from the place of residence where they are otherwise qualified to vote, their dependents, and U.S. citizens residing outside the country to register and vote absentee in elections for federal office. The Help America Vote Act was enacted into law on October 29, 2002. As discussed below, the act includes a number of provisions related to voter registration, provisional voting, absentee voting, voting equipment, and other election administration provisions, and authorizes the appropriation of funds to be used toward implementing the law’s requirements. HAVA also provides that the choices on the methods of implementation of such requirements, for example, a computerized statewide voter registration list, provisional voting, voter information requirements at the polling place, identification requirements, and voting system standards (for ballot verification, manual audit capacity, accessibility, and error rates), are left to the discretion of the states.
HAVA further specifies that such requirements are minimum requirements and should not be construed to prevent states from establishing election technology and administration requirements that are stricter than HAVA requirements as long as they are not inconsistent with certain other specified provisions. HAVA, in general, applies to all 50 states and the District of Columbia. Areas covered by the law include the following.

Computerized statewide voter registration list: HAVA requires most states to implement a single, uniform, centralized, computerized statewide voter registration list to serve as the official voter registration list for the conduct of all elections for federal office in each such state. Under HAVA, the computerized statewide voter registration list was to have been implemented by 2004. However, 40 states and the District of Columbia received waivers to extend the deadline until January 1, 2006. States are required to perform regular maintenance of the voter list by comparing it to state records on felons and deaths, and to match voter registration applicant information on the voter list with information in the state motor vehicle agency’s records and Social Security Administration records, as appropriate.

Absentee ballots: HAVA contains various amendments to UOCAVA regarding absentee voting for absent uniformed service voters and certain other civilian voters residing outside of the United States.
The amendments, among other things, (1) required that the secretaries of each military department, to the maximum extent practicable, provide notice to military personnel of absentee ballot deadlines, (2) extended the time that can be covered by a single absentee ballot application from UOCAVA voters, and (3) prohibited states from refusing to accept or process, with respect to federal elections, a voter registration application or an absentee ballot application by an absent uniformed services voter on the ground that the application was submitted before the first date that the state otherwise accepts or processes applications for that year from nonuniformed service absentee voters.

Provisional ballots: HAVA requires most states to implement provisional voting for elections for federal office. Under HAVA, in an election for federal office, states are to provide a provisional ballot to an individual asserting (1) to be registered in the jurisdiction for which he or she desires to vote and (2) to be eligible to vote in a federal election but (3) whose name does not appear on the official list of eligible voters for the polling place. Provisional ballots are also to be provided in elections for federal office to individuals who an election official asserts to be ineligible to vote, and for court-ordered voting in a federal election after the polls have closed. These various types of individuals, under HAVA, are to be permitted to cast the provisional ballot upon the execution of a written affirmation at the polling place that they are registered voters in the jurisdiction and that they are eligible to vote in that election. If election officials determine that the individual is eligible under state law to vote, the individual’s provisional ballot is to be counted as a vote in accordance with state law. HAVA also requires that a free access system be established to inform voters if their votes were counted, and if not, the reason why.
Polling places: HAVA provisions targeted, among other things, improving information at polling places and Election Day procedures. To improve the knowledge of voters regarding voting rights and procedures, HAVA requires election officials to post voting information at each polling place on the days of elections for federal office, including, for example, a sample ballot, polling place hours, how to vote, instructions for first-time voters who registered by mail, and general information on federal and state voting rights laws and laws prohibiting fraud and misrepresentation. The act also authorized the appropriation of funds for payments to states for educating voters concerning voting procedures, voting rights, and voting technology. Under HAVA, voting systems used in elections for federal office are required to meet specified accessibility requirements for individuals with disabilities. With respect to improving accessibility, HAVA also authorized the appropriation of funds for payments to states to be used for improved accessibility of polling places for, among others, individuals with disabilities and those with limited English proficiency. HAVA also requires that such voting systems provide individuals with disabilities with the same opportunity for access and participation (including privacy and independence) as for other voters. In connection with this requirement, HAVA provides for the use of at least one direct recording electronic (DRE) device or other voting system equipped for individuals with disabilities at each polling place.

Identification requirements: Under HAVA, states are to require certain voters who register by mail to provide specified types of identification when voting at the polls or to send a copy of the identification with their mailed applications.
Acceptable identification includes a current and valid photo identification or a current utility bill, bank statement, government check, paycheck, or other government document that shows the name and address of the voter. Under HAVA, voters at the polls who have not met the identification requirement may cast a vote under HAVA’s provisional voting section. Similarly, mail-in ballots from persons who have not provided the required identification also are to be counted as HAVA provisional ballots.

Election administration: HAVA also established an agency with wide-ranging duties to help improve state and local administration of federal elections. The Election Assistance Commission (EAC) is to be involved with, among other things, providing voluntary guidance to states implementing certain HAVA provisions, serving as a national clearinghouse and resource for information with respect to the administration of federal elections, conducting studies, administering programs that provide federal funds for states to make improvements to some aspects of election administration, and helping to develop testing for voting systems and standards for election equipment. EAC is led by four Commissioners, who are to be appointed by the President and confirmed by the Senate. The Commissioners, who, under HAVA, were to be appointed by February 26, 2003, were appointed by the President in October 2003 and confirmed by the Senate in December 2003. Since beginning operations in January 2004, EAC has achieved many of its objectives.
Among other things, EAC has held hearings on the security of voting technologies and the national poll worker shortage; established a clearinghouse for information on election administration by issuing two best practices reports; distributed payments to states for election improvements, including payments for voter education and voting equipment replacement; drafted changes to existing federal voluntary standards for voting systems; and established a program to accredit the national independent certified laboratories that test electronic voting systems against the federal voluntary standards. However, EAC has reported that its delayed start-up affected its ability to conduct some HAVA-mandated activities within the time frames specified in the act. In turn, according to its fiscal year 2004 annual report, the delayed EAC start-up affected states’ procurement of new voting equipment and the ability of some states and local jurisdictions to meet related HAVA requirements by statutory deadlines.

Voting systems: One of the primary HAVA provisions relates to encouraging states to replace punch card voting systems and lever voting systems and authorizing appropriations for payments to support states in making federally mandated improvements to their voting systems. A voting system includes the people, processes, and technology associated with any voting method. It encompasses the hardware and software used to define the ballot, conduct the vote, and transmit and tally results, as well as system maintenance and testing functions. With respect to standards for voting systems used in elections for federal office, HAVA requirements for such systems include providing voters with the ability to verify their votes before casting their ballots, producing permanent paper records for manual auditing of voting systems, and compliance of voting system ballot counting error rates with those set out in specified federal voting system standards.
HAVA also directs that updates to the federal voluntary voting system standards for these requirements be in place by January 1, 2004, and provides for additional updates to the voluntary standards as approved by the Election Assistance Commission. Mechanisms are also specified that can be used by states and localities in acquiring and operating voting systems, including accreditation of laboratories to independently test and evaluate voting systems and federal certification for voting systems that undergo independent testing. The time frames for implementing various HAVA requirements ranged from as early as 45 days after enactment (a deadline for establishing a grant program for payment to the states for improved election administration) to as late as January 1, 2006, for various voting system standards. Several key deadlines were set for January 1, 2004, including implementation of HAVA’s provisional voting requirements and the establishment of a statewide voter registration list (or to request a waiver from the deadline until January 1, 2006). States receiving funds to replace punch card voting systems or lever voting systems could also request a waiver until January 1, 2006; otherwise such systems were to be replaced in time for the November 2004 general elections. The deadline for states and jurisdictions to comply with specific requirements for voting systems, such as producing a paper record for audit purposes, was January 1, 2006. HAVA vests enforcement authority with the Attorney General to bring a civil action against any state or jurisdiction as may be necessary to carry out specified uniform and nondiscriminatory election technology and administration requirements under HAVA. These requirements pertain to HAVA voting system standards, provisional voting and voting information requirements, the computerized statewide voter registration list requirements, and requirements for persons who register to vote by mail. 
The enforcement of federal statutes pertaining to elections and voting has, with certain exceptions, been delegated by the Attorney General to the Civil Rights Division. The U.S. election system is highly decentralized and based upon a complex interaction of people (election officials and voters), processes, and technology. Each of the 50 states and the District of Columbia has its own election system with a somewhat distinct approach. Within each of these 51 systems, the guidelines and procedures established for local election jurisdictions can be very general or specific. Each election system generally incorporates elements that are designed to allow eligible citizens to vote and to ensure that votes are accurately counted. While election systems vary from one local jurisdiction to another, most election systems have the elements identified in figure 7. Typically, states have decentralized elections so that the details of administering elections are determined at the local jurisdiction level. States can be divided into two groups according to how they delegate election responsibilities to local jurisdictions. The first group includes 41 states where election responsibilities are delegated to counties, with a few of these states delegating election responsibilities to some cities, and 1 state that delegates these responsibilities to election regions. We included the District of Columbia with this group. The second group is composed of 9 states that delegate election responsibilities to subcounty governmental units, known by the U.S. Census Bureau as minor civil divisions (MCD). However, in 1 of these states, Minnesota, election functions are split between county-level governments and MCDs. For example, registration is handled exclusively by county officials, and functions such as polling place matters are handled by MCDs.
Overall, about 10,500 local government jurisdictions are responsible for conducting elections nationwide, with about one-fourth of the local election jurisdictions in the first group of states and about three-fourths located in the states delegating responsibilities to MCDs. Although more election jurisdictions are in the 9 states, most of the population (88 percent of the U.S. population, based on the 2000 Census) lives in the states delegating responsibilities primarily to counties. While voter registration is not a federal requirement, the District of Columbia and all states, except North Dakota, generally require citizens to register before voting. The deadline for registering, and what is required to register, varies; at a minimum, state eligibility provisions typically require a person to be a U.S. citizen, at least 18 years of age, and a resident of the state, with some states requiring a minimum residency period. Citizens apply to register to vote in various ways, such as at motor vehicle agencies, during voter registration drives, by mail, or at local voter registrar offices. Election officials process registration applications and compile and maintain the list of registered voters to be used throughout the administration of an election. Prior to HAVA, voter registration lists were not necessarily centralized at the state level, and separate lists were often managed by local election officials. HAVA requires voter registration information for federal elections to be maintained as a statewide computerized list and matched with certain state data, and that voter registration application information be matched with certain state data and, in some cases, with federal data, to help ensure that the voter list is accurate.
All states and the District of Columbia have provisions allowing voters to cast their ballot before Election Day by voting absentee, with variations in who may vote absentee, whether the voter needs an excuse, and the time frames for applying and submitting absentee ballots. In addition, some states also allow early voting, in which the voter goes to a specific location to vote in person prior to Election Day. As with absentee voting, the specific circumstances for early voting—such as the dates, times, and locations—are based on the state and local requirements. In general, early voting allows voters from any precinct in the jurisdiction to cast their vote before Election Day either at one specific location or at one of several locations. The early voting locations are staffed by poll workers who have a registration list for the jurisdiction and ballots specific to each precinct. The voter is provided with and casts a ballot for his or her assigned precinct. Election officials perform a broad range of activities in preparation for and on Election Day itself. Prior to an election, officials recruit and train poll workers to have the skills needed to perform their Election Day duties, such as opening and closing the polls, operating polling place equipment, and explaining and implementing provisional voting procedures for certain voters such as those who are not on the registration list. Where needed and required, election officials must also recruit poll workers who speak languages other than English. Polling places have to be identified as meeting basic standards for accessibility and having an infrastructure to support voting machines as well as voter and poll worker needs. Ballots are designed and produced to meet state requirements and voter language needs and to identify all races, candidates, and issues on which voters in each precinct in their jurisdiction will vote.
Election officials seek to educate voters on topics such as what the ballot looks like, how to use a voting machine, and where their particular polling place is located. Finally, election officials seek to ensure that voting equipment, ballots, and supplies are delivered to polling places. On Election Day, poll workers set up and open the polling places. This can include tasks such as setting up the voting machines or voting booths, readying supplies, testing equipment, posting required signs and voter education information, and completing paperwork such as confirming that the ballot is correct for the precinct. Before a voter receives a ballot or is directed to a voting machine, poll workers typically are to verify his or her eligibility. The assistance provided to voters who are in the wrong precinct depends on the practices for that particular location. One of the most significant post-2000 election reforms found in HAVA, according to the Election Assistance Commission, is that states are required to permit individuals, under certain circumstances, to cast a provisional ballot in federal elections. More specifically, states are to provide a provisional ballot to an individual asserting to be (1) registered in the jurisdiction for which he or she desires to vote and (2) eligible to vote in a federal election, but (3) whose name does not appear on the official list of eligible voters for the polling place. In addition, provisional ballots are to be provided in elections for federal office to individuals who an election official asserts to be ineligible to vote, and for court-ordered voting in a federal election after the polls have closed. Although many states had some form of provisional balloting prior to the passage of HAVA, 44 of the 50 states and the District of Columbia were required to provide provisional ballots for the 2004 general election. 
Under HAVA, 6 states were exempt from HAVA’s provisional voting requirements because they either permitted voters to register on Election Day or did not require voter registration. If individuals are determined to be eligible voters, their provisional ballots are to be counted as votes in accordance with state law, along with other types of ballots, and included in the total election results. Following the close of the polls, election officials and poll workers complete a number of basic steps to get the votes counted and determine the outcome of the election. Equipment and ballots are to be secured, and votes are to be tallied or transferred to a central location for counting. The processes used to count or to recount election votes vary with the type of voting equipment used in a jurisdiction, state statutes, and local jurisdiction policies. Votes from Election Day, absentee ballots, early votes (where applicable), and provisional ballots are to be counted and consolidated for each race to determine the outcome. While preliminary results are usually available by the evening of Election Day, the certified results are generally not available until days later. Some states establish a deadline for certification of results, while other states do not. Voting methods are tools for accommodating the millions of voters in our nation’s approximately 10,500 local election jurisdictions. Since the 1980s, ballots in the United States have been cast and counted using five methods: paper ballots, lever machines, punch cards, optical scan, and DREs. Four of the five methods by which votes are cast and counted involve technology; only the paper ballot system does not use technology. The three newer methods—punch card, optical scan, and DRE—depend on computers to tally votes. Punch card and optical scan methods rely on paper ballots that are marked by the voter, while many DREs use computers to present the ballot to the voter.
Voting systems utilize technology in different ways to implement these basic voting methods. For instance, some punch card systems include the names of candidates and issues on the printed punch card, while others use a booklet of candidates and issues that must be physically aligned with the punch card. The way systems are designed, developed, tested, installed, and operated can lead to a variety of situations where misunderstanding, confusion, error, or deliberate actions by voters or election workers can, in turn, affect the equipment’s performance in terms of accuracy, ease of use, security, reliability, and efficiency. In fact, some recent election controversies have been specifically associated with particular voting methods and systems. Nevertheless, all voting methods and systems can benefit from established information technology management practices that effectively integrate the people, processes, and technologies. For this report, we conducted a Web-based survey of election officials in all 50 states and the District of Columbia, surveyed by mail a nationally representative stratified random probability sample of 788 local election jurisdictions, and conducted on-site interviews with election officials in 28 local jurisdictions in 14 states. Copies of the survey instruments are in appendixes II and III. In addition, the results of our state and local surveys are presented in two supplemental GAO products that can be found on our Web site at www.gao.gov. Appendix IV provides a summary of jurisdictions we visited. In reporting the state survey data, actual numbers of states are provided. When reporting local jurisdiction survey data, we provide estimates for jurisdictions nationwide. Unless otherwise noted, the maximum sampling error, with 95 percent confidence, for estimates of all jurisdictions from our local jurisdiction survey is plus or minus 5 percentage points (rounded).
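As a rough illustration of how a margin of error relates to sample size, the standard normal-approximation formula for a sample proportion can be computed directly. This is a sketch under simplifying assumptions: it treats the 788 jurisdictions as a simple random sample at the worst case p = 0.5, whereas the survey used a stratified design with weighting, which is part of why the reported maximum error of about 5 percentage points exceeds this naive figure.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a sample proportion
    under simple random sampling (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for the 788 sampled jurisdictions:
print(round(100 * margin_of_error(0.5, 788), 1))  # prints 3.5 (percentage points)
```

The gap between this 3.5-point figure and the reported 5-point maximum reflects design effects from stratification and weighting, plus rounding.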
We also provide some national estimates by jurisdiction population size, and the sampling errors for these estimates are slightly higher. For these estimates, large jurisdictions are defined as those with a population over 100,000, medium jurisdictions have a population of over 10,000 to 100,000, and small jurisdictions have a population of 10,000 or less. Unless otherwise noted, all estimates from our local jurisdiction survey are within our planned confidence intervals. Jurisdictions in which we conducted on-site interviews were chosen based on a wide variety of characteristics, including voting methods used, geographic characteristics, and aspects of election administration, such as whether early voting was offered. We did not select jurisdictions we visited on the basis of size, but as appropriate, we identify the size of a jurisdiction we visited using the same groupings we used for our nationwide mail survey. We also reviewed extensive prior GAO work and other national studies and reports, and attended an annual election official conference. A comprehensive description of our methodology for this report is contained in appendix V. We conducted our work between March 2005 and February 2006 in Washington, D.C.; Dallas; Los Angeles; and 28 local election jurisdictions in 14 states, in accordance with generally accepted government auditing standards. In general, the goal of a voter registration system is to ensure that eligible citizens who complete all the steps required of them to register to vote in their jurisdictions are able to have their registrations processed accurately and in a timely fashion, so they may be included on the rolls in time for Election Day. The November 2000 general election resulted in widespread concerns about voter registration in the United States. 
Headlines and reports questioned the mechanics and effectiveness of voter registration by highlighting accounts of individuals who thought they were registered being turned away from polling places on Election Day, the fraudulent use of the names of dead people to cast additional votes, and jurisdictions incorrectly removing the names of eligible voters from voter registration lists. With the passage of HAVA, with respect to federal elections, most states were required to establish statewide computerized voter registration lists and perform certain list maintenance activities as a means to improve upon the accuracy of voter registration lists. List maintenance is performed by election officials and consists of updating registrants’ information and deleting duplicate registrations and the names of registrants who are no longer eligible to vote. The voter registration process includes the integration of people, processes, and technology involved in registering eligible voters and in compiling and maintaining accurate and complete voter registration lists. In managing the voter registration process and maintaining voter registration lists, state and local election officials must balance two goals: minimizing the burden on eligible persons registering to vote, and ensuring that voter lists are accurate (that is, limited to those eligible to vote) and that eligible registered voters are not inadvertently removed from the voter registration lists. This has been a challenging task, and remains so, as we and others have noted. While registering to vote appears to be a simple step in the election system generally, applying to register and being registered are not synonymous, and election officials face challenges in processing the voter registration applications they receive. This chapter describes various HAVA and state changes related to the voter registration processes that have occurred since the 2000 general election.
It also examines continuing and new registration challenges encountered by local jurisdictions for the 2004 general election. With respect to voter registration, a significant change since the 2000 general election is the HAVA requirement for states to each establish a single, uniform, statewide, computerized voter registration list for conducting elections for federal office. The HAVA requirements for states to develop statewide lists and verify voter information against state and federal agency records presented a significant shift in voter list management in many states. While the initial deadline to implement HAVA’s statewide list requirement was January 1, 2004, more than 40 states took advantage of a waiver allowing an extra 2 years to complete the task, or until January 1, 2006. The statewide registration lists for federal elections are intended to produce more accurate voter registration lists by requiring states to (1) match voter registration application information against other state and federal agency databases or records to help ensure that only eligible voters are added to such lists, (2) identify certain types of ineligible voters whose names should be removed from the lists, and (3) identify voter names that appear more than once on the list so that duplicates can be removed. While HAVA defined some parameters for the required statewide voter registration lists and required matching voter information with certain state and federal records, the act leaves the choices on the methods of implementing the statewide list requirement to the discretion of the states. On the basis of our survey of state election officials, states varied in the progress made in implementing their statewide voter registration lists, how they have implemented these systems, and the capabilities of their systems to match information with other state and federal agency records, as well as in many other features of the state systems.
In addition to requiring states to develop statewide voter registration lists, HAVA provides that states must require that mail registrants who have not previously voted in a federal election in the state provide certain specified types of identification with their mail application; if they do not provide such identification with their application, these first-time mail registrants are to provide the identification at the polls. Furthermore, if such a voter does not have the requisite identification at the polls, HAVA requires that the voter be provided a provisional ballot, with the status of his or her ballot to be determined by the appropriate state or local official. As with the statewide voter registration list requirement, HAVA leaves the choices on the methods of implementing the provisional voting requirement to the discretion of the states. On the basis of interviews of officials in 28 local election jurisdictions, implementation of the requirement for first-time voters who registered by mail varied. One noteworthy variation is in the definition of mail registration: some local jurisdictions we visited told us that applications received through voter registration drives would be treated as mail registrations subject to HAVA identification requirements, while other local jurisdictions we visited told us that applications from registration drives were not treated as mail registrations and therefore were not subject to those requirements. As noted above, during 2004 and 2005 many states were in the process of implementing their HAVA-required statewide voter registration lists and associated requirements for maintaining the lists. Thus, the potential benefits to be gained from HAVA’s requirement for the statewide voter registration lists were not evident in many states at the time of the November 2004 general election.
Maintenance requirements in HAVA intended to help states and local election jurisdictions have access to more accurate voter registration list information, such as identifying duplicate registrations and matching the voter information against other state agency databases or records, were not yet fully implemented by many states. Many local jurisdictions were not yet seeing the benefits of being able to verify voter registration application information with state motor vehicle agency databases to identify eligible voters, or of being able to match voter registration lists with state vital statistics agency records to identify deceased persons and with appropriate state agencies’ records to identify felons who may be ineligible to vote. Thus, on the basis of our nationwide survey and the local election jurisdictions we visited, many local jurisdictions continued to encounter challenges with the voter registration lists that they had experienced in the 2000 general election, such as difficulties related to receiving inaccurate and incomplete voter registration information, multiple registrations, or ineligible voters appearing on the list. In addition, election officials in some jurisdictions we visited told us they continued to face challenges obtaining voter registration applications from motor vehicle agencies and other NVRA entities. Moreover, for some local election jurisdictions we visited, election officials told us that efforts on the part of various groups to get out the vote by registering new voters through voter registration drives created new challenges not identified to us as a problem in the 2000 general election.
Specifically, at some local jurisdictions we visited, election officials told us they faced a challenge processing large volumes of voter registration applications just prior to the deadlines for registration, which included challenges in some large jurisdictions to resolve issues of incomplete or inaccurate (and potentially fraudulent) applications submitted by entities conducting voter registration drives. HAVA requires states to, among other things, (a) implement a single, uniform, computerized statewide voter registration list for conducting elections for federal office; (b) perform regular maintenance by comparing the voter list against state records on felons and deaths; and (c) verify information on voter registration applications with information in state motor vehicle agency databases or with a Social Security Administration database, as appropriate. In addition, HAVA imposes new identification requirements for certain mail registrants—such as individuals who register by mail and have not previously voted in a federal election within the state. Historically, to ensure that only qualified persons vote, states and local jurisdictions have used various means to establish and compile voter registration lists. Prior to HAVA, we noted in our October 2001 comprehensive report on election processes nationwide that in compiling these lists, election officials used different methods to verify the information on registration forms, check for duplicate registrations, and update registration records, and we noted that states’ capabilities for compiling these lists varied. At the time, some states had statewide voter lists, but others did not and were not required to do so. Moreover, most jurisdictions we visited at the time maintained their own local, computerized voter lists. Under HAVA, this has changed.
HAVA requires the chief election official in the state to implement a “single, uniform, official, centralized, interactive, computerized statewide voter registration list” that must contain the name and registration information of every legally registered voter in the state. Under HAVA, states were required to be in compliance with the statewide voter registration list requirement by January 2004 unless they obtained a waiver until January 2006. Forty-one states and the District of Columbia obtained a waiver and thus, for the 2004 general election, were not required to have their statewide voter registration lists in place. With respect to the HAVA-required statewide voter registration list, states are to, among other things: Make the information in such lists electronically accessible to any election official in the state. Ensure that such voter lists contain registration information on every legally registered voter in the state, with a unique identifier assigned to each legally registered voter. Verify voter identity; most states are required to match the applicant’s driver’s license number or the last four digits of the applicant’s Social Security number, as provided on the voter registration application, against state MVA or Social Security Administration databases, when available. In connection with this requirement to verify voter registration application information, states must require that individuals applying to register to vote provide a current and valid driver’s license number or the last four digits of their Social Security number; if neither has been issued to the individual, then the state is to assign a unique identifier to the applicant. The state MVA must enter into an agreement with the Social Security Administration (SSA), as applicable, to verify the applicant information when the last four digits of the Social Security number are provided rather than a driver’s license number or state ID number.
Perform list maintenance on the statewide voter registration lists by coordinating them on a regular basis with state records on felony status and deaths, in order to identify and remove names of ineligible voters. List maintenance is also to be conducted to eliminate duplicate names. Implement safeguards ensuring that eligible voters are not inadvertently removed from statewide lists. Include technological security measures as part of the statewide list to prevent unauthorized access to such lists. Except for the 9 states that did not obtain a waiver of HAVA’s January 2004 deadline for establishing a statewide voter registration list, states subject to the statewide list requirement were not required to perform list maintenance activities as defined in HAVA until the extended waiver deadline of January 2006. By the November 2004 general election, states were in various stages of implementing provisions of HAVA related to their statewide voter registration lists and performing voter list verification and maintenance, and had different capabilities and procedures at the state and local levels for performing required list maintenance functions. Many states reported that the statewide voter registration systems they were building to implement the statewide list requirement include or will include additional election management features not required under HAVA. Voter registration system development was an ongoing process in 2004 and 2005. For the November 2004 general election, the use of technology to compile voter registration information remained an issue. Developing and implementing statewide computerized voter lists has been an ongoing process for many states, and state and local election officials reported encountering difficulties along the way.
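The identifier precedence HAVA sets out (driver’s license number if issued, otherwise the last four digits of the Social Security number, otherwise a state-assigned unique identifier) amounts to a simple ordered check. The sketch below is illustrative only; the field names and the state-ID generator are assumptions, not any state’s actual schema:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Application:
    name: str
    drivers_license: Optional[str] = None  # driver's license number, if issued
    ssn_last4: Optional[str] = None        # last four digits of SSN, if issued

def registration_identifier(app: Application,
                            assign_state_id: Callable[[Application], str]) -> Tuple[str, str]:
    """Apply HAVA's precedence: driver's license number if issued, else
    the last four SSN digits, else a state-assigned unique identifier."""
    if app.drivers_license:
        return ("drivers_license", app.drivers_license)
    if app.ssn_last4:
        return ("ssn_last4", app.ssn_last4)
    # Neither identifier has been issued: the state assigns one.
    return ("state_assigned", assign_state_id(app))
```

A state would plug in its own generator for the state-assigned identifier, such as a database sequence.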
Our state survey and site visits suggest that states and jurisdictions were still coming to terms, as of the last half of calendar year 2005, with how their systems should be updated and whether states or jurisdictions should control the flow of information into statewide registration systems. As mentioned in chapter 1, HAVA vests the Attorney General with the responsibility of enforcing certain HAVA requirements with respect to the states. In January 2006, the Justice Department asked all states, the District of Columbia, and other covered territories to provide a detailed statement of their compliance with voting systems standards and implementation of a single, uniform, official, centralized, interactive computerized statewide voter registration list. If the states, the District of Columbia, or covered territories were not implementing HAVA’s requirements for the computerized statewide voter registration lists as of January 2006, the Justice Department reported that it then asked them to identify steps they planned to take to achieve full implementation of the HAVA-compliant statewide voter registration list and the date on which each step would be accomplished. According to Justice Department officials, they are reviewing the information provided by the states, the District of Columbia, and such territories to make determinations of what, if any, enforcement action might be needed. The Department of Justice reports that it entered into a memorandum of agreement with California in November 2005 after that state realized it would not be able to fully meet HAVA’s requirements by the January 1, 2006, deadline. 
On March 1, 2006, the Department of Justice also filed suit in a federal district court against the state of New York, alleging that the state was not in compliance with, among other things, HAVA’s requirement for a computerized statewide voter registration list and seeking a judicial determination of noncompliance and a court order requiring the state to develop a plan for how it will come into compliance. During our site visits in 2005, we asked local election officials about the status of their statewide registration systems. Election officials in some local jurisdictions we visited cited difficulties in implementing their statewide voter registration systems involving, among other things, internal politics and technology-related challenges. For example, election officials in a large jurisdiction reported that a disagreement between the State Board of Elections and local election officials over the type of system to implement delayed the project for a year. State election officials wanted a system requiring all voter registrations to be entered at the state level but maintained locally. The local election officials expressed the view that such a system would deprive them of control over data entry at the front end while holding them accountable for data maintenance at the back end. During our interview in August 2005, these election officials told us that a statewide registration system had not yet been implemented. In some jurisdictions, the difficulties cited by election officials may have reflected the fact that they were establishing statewide voter registration systems for the first time. For example, in 1 large jurisdiction that was establishing a HAVA voter registration list from scratch, local election officials noted that at the time of our interview in August, the system was behind schedule, lacked the ability to identify duplicates, had no quality control, and was not planned to function as a real-time system.
In our survey of states and the District of Columbia, and our survey of local election jurisdictions nationwide, among other things, we inquired about the status of their capabilities for meeting HAVA provisions for (1) verifying voter registration application information against MVA and SSA databases and (2) maintaining the statewide voter lists by comparing information on the statewide voter registration list against state death records and felon information, and we discussed these issues during our local site visits. Our work focused on how states had matched or planned to match voter registration lists against other state records, as required by HAVA. However, it is important to note that the success of such matching in ensuring accurate voter registration lists depends on the accuracy and reliability of the data in the databases used for matching. If a state’s MVA databases, felon records, death records, or other records used for matching are inaccurate, the matching can introduce voter registration list errors. When a driver’s license number is presented as identification on an application to register to vote in an election for federal office, HAVA requires that states match the voter registration application information presented with that in the MVA records. In our survey of state election officials, we asked states whether their voter registration systems would have the capability to perform electronic matching of such voter registration information with state motor vehicle agency records for the purposes of verifying the accuracy of information on the registration application. Twenty-seven states reported they will have or currently had the capability to match on a real-time basis, 15 states and the District of Columbia reported they will have or currently had the capability to match in batches, and 4 states reported they would not have the capability to perform electronic matching.
The remaining 4 states included 2 states that reported they are not subject to HAVA’s registration information verification requirement because they collect full Social Security numbers on voter registration applications; 1 state, North Dakota, which does not require voter registration and did not respond; and 1 state that reported it was uncertain of its capability to perform electronic matching. With respect to matching voter information with SSA data when a Social Security number is presented instead of a driver’s license number, in our state survey, 7 states reported that they had, and 26 states and the District of Columbia reported that they would have by January 1, 2006, the capability to electronically match voter registration information with SSA (through the MVA); 10 states reported they planned to have this capability in place, but not by January 2006; and 6 states had not yet determined whether they could do so. Many states reported concerns about whether SSA would be able to return responses to verification requests in a timely manner. Specifically, 30 states and the District of Columbia reported some level of concern about the issue. When asked whether they thought local jurisdictions would be able to resolve nonmatches resulting from SSA verification checks, opinions were divided, with a number of states (21) expressing some degree of concern, while a nearly equal number (22 states and the District of Columbia) did not. In our June 2005 report on maintaining accurate voter registration lists, we found that in one state (Iowa) that had verified its voter registration list with SSA before the 2004 general election, there was no unique match for 2,586 names, according to the SSA records. As we stated in our report, Iowa officials said that the biggest problem they faced was that SSA did not specify which voter information did not match (i.e., whether the mismatch was in the name, date of birth, or last four digits of the Social Security number).
Without that information, they were not able to efficiently resolve the nonmatching records. In that same report, we also noted that an SSA official said that the system established to perform the HAVA matching on the four-digit Social Security number is not able to provide that detail. In addition, we found that use of SSA’s database to identify deceased registrants, which is linked with the system established to perform the HAVA verification of voter registration application information, had matching and timeliness issues. As shown in figure 8, many states reported that they faced significant challenges when trying to match voter registration information with state records. For example, in our survey, 29 states and the District of Columbia reported that records with incomplete data posed a challenge; 19 states and the District of Columbia reported that obtaining records not maintained electronically was a challenge; and 23 states reported that verifying information against incompatible electronic record systems was also a challenge. During our site visits to local jurisdictions, we obtained additional views on how well, in general, states were believed to perform various data-matching functions. We asked local election officials to describe their state system’s ability to match voter registration information with MVA and SSA records and the system’s ability to verify information on eligibility status for felons, noncitizens, and others with other state databases or records. One jurisdiction in Illinois reported it was not sure how or if its voter registration system would be able to match data with MVA and SSA databases or to verify eligibility status for felons and by age.
An official in a jurisdiction in Florida said that Florida’s system could not verify information on the eligibility status of felons, noncitizens, the mentally incompetent, or the underaged, though plans were under way to obtain information from the Clerk of Courts Information System to perform some of these tasks. HAVA’s list maintenance provisions require states to match the statewide voter registration list information against certain state records to identify ineligible voters and duplicate names. If a voter is ineligible under state requirements and is to be removed from the statewide voter registration list, states are generally required to remove such names in accordance with NVRA provisions relating to the removal of voter names from registration lists for federal elections. Under NVRA, in the administration of voter registration for federal elections, states may not remove the names of registered voters for not voting, and names may be removed only for certain specified reasons: at the request of the registrant; by reason of criminal conviction, as provided by state law; by reason of mental incapacity, as provided by state law; or pursuant to a general program that makes a reasonable effort to remove the names of ineligible voters from the official lists by reason of the death of the voter or on the ground that the voter has changed address to a location outside the election jurisdiction on the basis of change of address information from the U.S. Postal Service (but only if either (1) the voter confirms in writing a change of address to a place outside the election jurisdiction or (2) the voter has failed to respond to a confirmation mailing and the voter has not voted or appeared to vote in any election between the time of such notice and the passage of two federal general elections).
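The NVRA removal grounds just listed can be read as a decision rule: removal is permitted only on the enumerated grounds, never merely for not voting. A minimal sketch, with illustrative field names that are not drawn from any statute or state system:

```python
from dataclasses import dataclass

@dataclass
class Registrant:
    requested_removal: bool = False
    disqualifying_felony: bool = False       # as provided by state law
    mentally_incapacitated: bool = False     # as provided by state law
    deceased: bool = False
    confirmed_move_in_writing: bool = False  # USPS-triggered confirmation
    responded_to_confirmation_mailing: bool = True
    federal_general_elections_since_notice: int = 0
    voted_since_notice: bool = False

def may_remove(r: Registrant) -> bool:
    """Sketch of NVRA's allowed removal grounds. Note that not voting
    is never, by itself, a permitted reason for removal."""
    if (r.requested_removal or r.disqualifying_felony
            or r.mentally_incapacitated or r.deceased):
        return True
    # Address change outside the jurisdiction, based on USPS data:
    if r.confirmed_move_in_writing:
        return True
    if (not r.responded_to_confirmation_mailing
            and not r.voted_since_notice
            and r.federal_general_elections_since_notice >= 2):
        return True
    return False
```

The two-federal-general-elections waiting period in the last branch mirrors the confirmation-mailing safeguard described above.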
Reasons Names Removed from Registration Lists

In our survey of local election jurisdictions nationwide, we asked about the reasons names were removed from voter registration lists. On the basis of our survey of local election jurisdictions, the following table shows various reasons that jurisdictions removed names from voter registration lists for the 2004 general election and our estimates of how frequently names were removed for each reason. For example, the most frequent reason was the death of the voter (76 percent). Names were removed with about equal frequency because the voter requested that his or her name be removed (54 percent) or the registrant’s name appeared to be a duplicate (52 percent). The least frequent reason was mental incompetency (10 percent). In many jurisdictions, names were not removed but rather placed on an inactive list for a period of time. In our survey of local jurisdictions, nearly half, or an estimated 46 percent, took this step. In our June 2005 report on maintaining accurate voter registration lists, on the basis of interviews of election officials in 14 jurisdictions and 7 state election offices, we reported that in larger jurisdictions, the task of identifying and removing registrants who have died can be substantial. For example, in the city of Los Angeles, in 1 week in 2005 alone, almost 300 persons died. The issue of felons voting unlawfully—that is, voting when their felony status renders them ineligible to vote under state law—was a high-profile issue in some jurisdictions. According to an election official in a Washington jurisdiction we visited, this issue was identified during the November 2004 general election. This official also told us that the Secretary of State is working to establish a database that will indicate felony status and cancel the registrations of felons. This election official noted that the jurisdiction rarely receives information from federal courts on felony convictions. Under federal law, U.S.
Attorneys are to give written notice of felony convictions in federal district courts to the chief state election official of the offender’s state of residence. In our June 2005 report on maintaining accurate voter registration lists, we found that U.S. Attorneys had not consistently provided this information, and while the law did not establish a standardized time frame or format for forwarding the federal felony conviction information, election officials in 7 states we visited reported that the felony information received from U.S. Attorneys was not always timely and was sometimes difficult to interpret. We recommended that the U.S. Attorneys provide the information in a more standardized manner. Under HAVA, duplicate names on the statewide voter registration list are also to be identified and removed. In our state survey, 49 states and the District of Columbia reported that their voter registration systems will include a function for checking for duplicate voter registration records. On the basis of our nationwide survey of local jurisdictions, we estimate that 72 percent of local jurisdictions employed a system of edit checks (automated controls to identify registration problems) to identify duplicates. Our prior work has also found that states were, for the most part, able to handle duplicate registrations, though obtaining timely, accurate data to facilitate the identification of duplicate registrations has been viewed as a challenge by some state election officials. Specifically, in our February 2006 report on the experiences of certain states (the 9 states that did not seek a waiver of the deadline until January 1, 2006, and thus were to implement a computerized statewide voter registration list by January 1, 2004) with implementing HAVA’s statewide voter registration lists, we found that 8 of the 9 states we reviewed screened voter applications to identify duplicate registrations, and most did so in real time.
We also reported that 8 of these 9 states checked voter registration lists for duplicate registrations on an annual, monthly, or other periodic basis. And 4 of the 9 states reported that implementing the HAVA requirements led to some or great improvement in the accuracy of their voter lists by reducing duplicate registrations or improving the quality of voter information before it was entered into the statewide voter list. Checking for duplicates remained a challenge for some in 2004 and 2005, however. In our June 2005 report on maintaining accurate voter registration lists, we noted that officials in 7 of the 21 local election jurisdictions we spoke with during 2004 and 2005 had some concern about the accuracy and timeliness of the data they received to identify duplicate registrants and verify that registrants resided within the jurisdiction. They noted that matching and validating names is complex and made more so by aliases, name changes, and near matches such as “Margie L. Smith” with “Margaret Smith.” Officials from several states that, at the time of our review, had not implemented a statewide voter registration system noted that there was no way to identify duplicates outside their jurisdiction. While HAVA requires that both state and local election officials have immediate electronic access to information in the statewide voter registration list, HAVA grants states discretion as to the method used to ensure that this capability is established.
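The name-matching difficulty officials described (for example, treating “Margie L. Smith” and “Margaret Smith” as a possible duplicate) can be illustrated with a toy normalization-based comparison. Real duplicate detection relies on much richer nickname tables, phonetic codes, and additional fields such as date of birth; the table below is purely illustrative:

```python
import re

# Illustrative nickname table only; production systems use far larger ones.
NICKNAMES = {"margie": "margaret", "bill": "william", "bob": "robert"}

def normalize(name: str) -> tuple:
    """Lowercase, strip punctuation, drop middle initials, and expand
    common nicknames, reducing a name to (first, last) for comparison."""
    parts = re.sub(r"[^\w\s]", "", name.lower()).split()
    parts = [p for p in parts if len(p) > 1]        # drop middle initials
    parts = [NICKNAMES.get(p, p) for p in parts]    # expand nicknames
    return (parts[0], parts[-1]) if parts else ("", "")

def possible_duplicate(a: str, b: str) -> bool:
    """Flag two registrant names as a possible duplicate for review."""
    return normalize(a) == normalize(b)
```

A match here would only flag records for human review, not trigger removal, consistent with the safeguards officials described.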
According to EAC, state and local election officials may determine whether to establish (a) a top-down system, whereby the statewide voter registration list resides on a state database hosted on a single, central platform (e.g., a mainframe or client servers), which state and local election officials may query directly; (b) a bottom-up system, whereby the statewide voter list is stored on a state-level database that can be downloaded to jurisdictions and is updated by the state only when the jurisdictions send new registration information back to the state; or (c) another approach. According to the EAC voluntary guidance on HAVA’s statewide voter registration system, the top-down approach most closely matches HAVA requirements, but other configurations may be used as long as they meet the HAVA requirement for a single, uniform list that allows election officials to have immediate access. Our 2005 survey of state election officials sought information on how states were implementing statewide computerized voter registration systems. We asked, among other things, whether states were using a top-down or a bottom-up approach. In response, 40 states and the District of Columbia reported that they have a database maintained by the state, with information supplied by local jurisdictions (a top-down system); 4 states reported that local jurisdictions retain their own lists and transmit information to a statewide list (a bottom-up system); and 5 states reported they use a hybrid of these two options. We also asked whether state election officials would have immediate, real-time access to their state lists for the purposes of entering new voter registration information, updating existing information, and querying voter registration records. About half the states and the District of Columbia reported they had or would have all of these capabilities.
Specifically, 24 states and the District of Columbia reported that they had, or would have as of January 2006, real-time access for entering new voter registration information, while 23 states reported they did not plan to do so and 2 states did not respond. In addition, 26 states and the District of Columbia reported that they had, or would have as of January 2006, real-time access for updating existing voter registration information, while 21 states reported they did not plan to do so and 2 states did not respond. And 47 states and the District of Columbia reported that they had, or would have as of January 2006, real-time access for querying all state voter registration records, while 1 state reported it would not do so and 1 state did not respond. For each of these questions, one state reported that it too would have these capabilities, but not by the January 1, 2006, HAVA deadline. We also sought state election officials’ views on whether election officials in local jurisdictions would have immediate, real-time access to voter list information for the same three purposes stated above: entering new information, updating existing information, and querying records. In our state survey, most states and the District of Columbia reported that local jurisdictions had these capabilities. Specifically, 46 states and the District of Columbia reported that local jurisdictions had, or would have as of January 2006, real-time access for entering new voter registration information, and 3 other states reported that they planned to do so as well, but not by January 1, 2006. Also, 46 states and the District of Columbia reported that local jurisdictions had, or would have as of January 2006, real-time access for updating existing voter registration information, and 3 other states planned to do so as well, but not by the deadline.
Finally, 47 states and the District of Columbia reported that local jurisdictions had, or would have as of January 2006, the capability to query records for their jurisdictions in real time, and 2 states planned to do so, but not by January 2006. Figure 9 compares the capability of state and local jurisdiction election officials to access the voter registration lists to perform certain tasks. While HAVA’s list maintenance provisions require states to coordinate statewide voter registration list information with certain other state records within their state in order to identify and remove ineligible names, the act does not specifically provide that such coordination must be done electronically. However, to determine whether state systems had or would have the capability to perform electronic data matching, our survey asked states about existing or planned electronic capabilities. As shown in figure 10, more than half the states reported that they had, or planned to have, the ability to match voter registration information electronically with state records on felony convictions and deceased registrants. Specifically, 25 states reported they had, and 15 states reported they would have, the capability to electronically match against state death records as of January 2006, and 6 states and the District of Columbia planned to have the capability, but not by January 2006. Three states reported that they did not plan to have this capability. With respect to identifying ineligible felons, 16 states reported they had, and 15 reported they would have, the capability to electronically match against felony conviction records as of January 2006, while 9 states planned to do so but would not have done so by January 2006. In addition, 7 states and the District of Columbia did not plan to have this capability, and 2 states had not determined whether to have the capability.
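A batch list-maintenance pass of the kind described, comparing the statewide list against state death records, might be sketched as follows. The field names and the exact-match key are assumptions; production matching must cope with name variants and ambiguous matches, and flagged records would be reviewed rather than removed automatically:

```python
def batch_match_deaths(voters, death_records):
    """Flag registrants whose name, date of birth, and last four SSN
    digits all appear in the state's death records. Exact-match keys
    like this miss variants and can produce false matches, so flagged
    entries are candidates for review, not automatic removal."""
    key = lambda rec: (rec["name"].lower(), rec["dob"], rec["ssn_last4"])
    deceased_keys = {key(d) for d in death_records}
    return [v for v in voters if key(v) in deceased_keys]
```

A real-time variant would run the same key lookup per transaction instead of over the whole list, corresponding to the real-time versus batch distinction the survey asked about.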
On the topic of states’ efforts to meet HAVA’s data-matching requirements electronically—as opposed to transmitting paper records—EAC recommends that voter registration information be transmitted electronically, particularly between states and their MVAs. EAC further recommends that, to the extent allowed by state law and available technologies, the electronic transfer between statewide voter registration lists and coordinating verification databases be accomplished through direct, secure, interactive, and integrated connections. While EAC provided guidance to states for their statewide systems, under HAVA, the states are to define the parameters for implementing interactive and integrated systems. HAVA requires election officials to provide adequate technological database security for statewide voter registration lists that is designed to prevent unauthorized access. EAC provided states with voluntary guidance, issued in July 2005, to help clarify HAVA’s provisions for computerized statewide voter registration lists. Among other things, the EAC guidance noted that such computer security must be designed to prevent unauthorized users from altering the list or accessing private or otherwise protected information contained on the list. Access may be controlled through a variety of tools, including network- or system-level utilities and database applications (such as passwords and “masked” data elements). Special care must be taken to ensure that voter registration databases are protected when linked to outside systems for the purposes of coordination. Any major compromise of the voter registration system could lead to considerable election fraud. We sought information on what documented standards or guidance for computer and procedural controls would be in place to prevent unauthorized access to the lists. In our state survey, 45 states and the District of Columbia reported having such standards or guidance, 3 planned to do so, and 1 reported that it did not know.
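The “masked” data elements EAC mentions could be implemented along these lines; the role names and field names here are hypothetical, not EAC’s or any state’s actual design:

```python
# Fields treated as protected identifiers in this sketch (illustrative).
PROTECTED_FIELDS = {"ssn_last4", "drivers_license"}

def mask_record(record: dict, role: str) -> dict:
    """Return a view of a voter record appropriate to the caller's role:
    authorized election officials see the full record, while all other
    users see protected identifiers masked."""
    if role == "election_official":
        return dict(record)
    return {k: ("****" if k in PROTECTED_FIELDS else v)
            for k, v in record.items()}
```

In practice, masking of this kind would sit behind the password and network controls described above, so that even authenticated but lower-privilege queries never receive the protected values.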
We also asked states what actions they had taken or planned to take to deal with privacy and intrusion issues. We asked, for instance, what, if anything, had been done to install or activate mechanisms to detect or track unauthorized actions affecting the state’s computerized voter registration system. A majority of states reported actions had been taken or were to be taken at some point. Specifically, 26 states reported taking action as of August 1, 2005, while 12 states and the District of Columbia reported they would do so by January 1, 2006. An additional 4 states reported that actions were planned, but at no particular point in time. In a related question, we asked what actions had been taken or were planned to install or activate mechanisms to protect voter privacy. Again, a majority of states reported actions had been taken or were to be taken at some point. Specifically, 32 states reported taking action as of August 1, 2005, while 13 states and the District of Columbia reported they would do so by January 1, 2006. Two other states reported actions would be taken at a later point in time. During our site visits, we asked local election officials what standards or procedures were used for the November 2004 general election to help ensure that the registration list was secure and that the privacy of individuals was protected. Election officials in most jurisdictions reported that voter information (such as name and address) is public information if it is to be used for political purposes—though some do not release Social Security numbers, and others limit access to this information by requiring a fee. Some local election officials noted that security standards for this information were not set by the state but rather at the county or local level, though many look to the state for future guidance on standards. 
The type of security in place to restrict access to voter registration records varied by jurisdiction; among the procedures commonly used were password protection (so that only certain election officials could log onto the voter registration system to access the information); storage of voter registration records in locked facilities; use of “best practice” protocols such as system firewalls; and, in some cases, maintenance of registration information on a computer system separate from the jurisdiction’s central system. Along these lines, 1 jurisdiction noted that it planned to implement a public key infrastructure (PKI). A PKI is a system of computers, software, policies, and people that can be used to facilitate the protection of sensitive information and communications. The official noted that in that jurisdiction it is a felony to use the PKI without authorization from the State Board of Elections. Election officials in another jurisdiction we visited told us that all voter registration system users must log on using unique user IDs and passwords, which are maintained by the county registrar. The system tracks all data entries and changes, which user made them, and when they were made. In a few jurisdictions, election officials said they grant additional privacy to the records of voters involved in domestic disputes or other law enforcement matters. When asked whether they had any plans to develop or change existing security standards or procedures, local election officials in 16 of the 28 jurisdictions we visited told us there were no plans to alter current practices, though some noted they were not sure. Among those indicating that security procedures were being enhanced, election officials in 1 large jurisdiction said they planned to enclose their computer system’s server in a secure case with restricted access.
Another official in a large jurisdiction in another state said that because of a change in state law in 2004, a hard copy of voter records was no longer available for public inspection. As mentioned earlier, the HAVA computerized statewide voter registration list provisions require states to perform list maintenance to identify duplicate registrations, deceased registrants, and registrants who may be ineligible to vote under state law because of a felony conviction. However, we note that the requirements for matching voter registration lists with certain state records leave some potential gaps that can result in incomplete and inaccurate voter registration lists, because election officials may not have information about registered voters who die out of state or who are imprisoned in another state and ineligible because of a criminal conviction. To determine whether states went beyond HAVA requirements to share voter registration data with other states to identify registrants who died in another state, were incarcerated in another state, or registered in another state, we asked on our survey of state election officials whether they had taken action to electronically exchange voter registration information with at least 1 other state and whether they were sharing registration information routinely with other states. In our state survey, 31 states and the District of Columbia reported that they did not plan to electronically exchange voter registration information with another state. However, 35 states and the District of Columbia reported that they share information with other states when a new registrant indicates he or she previously resided in another state. Other types of information sharing across state lines were less common. For instance, 6 states reported sharing voter registration information with neighboring states, and 1 state reported that it shared information with states where an individual is known to reside part of the year.
In our state survey, 14 states reported they do not currently share voter registration information with other states. We analyzed state and federal voter registration applications to determine whether these applications provided space for applicants to indicate they were registered in other states or in other jurisdictions within the same state to identify duplicate registrations. We obtained state application forms during site visits to local election jurisdictions or from state Web sites; if a form was not available from those sources, we obtained it directly from the state. Registration forms were those on the Web site or obtained from the states as of January 2006. Applications for 46 states and the District of Columbia and both federal applications had a place where applicants could indicate prior registration in another state. Three states (Kentucky, Texas, and Wyoming) did not include a place on their registration forms to identify prior registration information in another state. Forty-five states and the District of Columbia included a space for registration applicants to indicate prior registration in another jurisdiction within their state, or in the case of the District of Columbia, applicants were to indicate the address of their last registration. Four states (Alaska, Hawaii, Kentucky, and Wyoming) did not provide space to indicate prior registration within their state. Figure 11 is an example of a state registration form that provided a space for the voter registration applicant to indicate that he or she had registered in another state. On the basis of our survey of local election jurisdictions, we estimated that 12 percent of local jurisdictions administered their own registration application form in addition to the state registration application.
Of the 12 percent who had their own form, we estimate that 70 percent had space on their voter registration applications so that an applicant could indicate whether he or she was previously registered in another state. However, we estimate that about a third did not capture this information on their forms. Although HAVA's voter registration-related provisions focus primarily on state election management activities for developing, verifying, and maintaining voter lists, we sought information on what other types of registration system upgrades, if any, states planned, and we asked at the sites we visited what additional system capabilities, if any, had been implemented or planned. In our state survey, 15 states reported taking action to upgrade the processing speed or records capacity of their systems as of August 2005; 6 states reported that such actions would be taken by January 2006; and 12 states and the District of Columbia reported they would take such action at a later time. In other recent work, we have also looked at selected states' efforts to enhance their statewide voter list systems. In our February 2006 report on certain states' experiences with implementing HAVA's statewide voter registration lists, we found that 7 of 9 states that reported implementing HAVA provisions for a computerized, statewide voter registration system by January 1, 2004, also reported that they had upgraded or enhanced their systems, or planned to do so, to include additional election management capabilities. For example, Arizona reported plans to upgrade its current system to reflect reciprocity agreements with other states, so that election officials can be alerted when a voter moves from state to state, and to allow election officials to retrieve data on such issues as voter petitions, provisional ballots, poll worker training, and polling locations. Other states reported adding or planning similar enhancements.
Kentucky reported another type of enhancement: It has used its statewide computerized voter registration system to establish voter information centers on the state's Web site, to assist applicants and staff in the voter registration process. During our site visits, we asked local election officials to comment on the election management functions their voter registration systems might perform. While some local election officials noted they were not certain whether their new statewide voter registration systems would include the same array of features as the local county versions, other local election officials in some jurisdictions responded that they expect their statewide systems to be able to perform some or all of the following functions: maintain records confirming mailings to new registrants; generate letters informing rejected applicants of the reasons for rejection; generate forms or mailing labels; note the status or date of absentee applications and ballots sent; identify polling places for use on Election Day; and identify poll workers. In some jurisdictions, other capabilities were mentioned; 2 large jurisdictions noted, for instance, that bar coding would be used to identify registrants, and 2 other large jurisdictions indicated that their systems would track and maintain candidate petition information. Not all jurisdictions expressed equal confidence in the extra (non-HAVA-related) capabilities of their systems. Election officials in a couple of large jurisdictions, for instance, told us they were not certain their statewide voter system would have features comparable to those already in place, and that their vendor or state was taking a one-size-fits-all approach for all jurisdictions regardless of size, rather than taking specific local needs into account.
In some jurisdictions, election officials stated that their statewide systems were still too new to know whether these additional functions would be operational, and some said they were not yet familiar with all the system's capabilities. HAVA imposed new identification requirements for certain mail registrants—such as individuals who register by mail and have not previously voted in a federal election within the state. These individuals (first-time mail registrants) must provide certain specified types of identification either by submitting copies of such identification during the mail registration process or by presenting such identification when voting in person for the first time following their mail registration. Moreover, first-time mail registrants are to be informed on the application that appropriate identifying information must be submitted with the mailed form in order to avoid additional identification requirements upon voting for the first time. An individual who asserts that he or she has registered by mail and desires to vote in person but who does not meet the identification requirements may cast a provisional ballot under HAVA's provisional voting provisions. However, according to election officials in some jurisdictions we visited, casting a provisional ballot requires these voters to provide identification to election officials by a specified time (e.g., by the close of polls on Election Day or within a certain number of days following Election Day) to have their ballots counted. On the basis of our local survey, we estimate that 32 percent of local jurisdictions encountered a problem in counting provisional ballots because voters did not provide identification as specified by HAVA for mail-in registrants and were voting for the first time in the precinct or jurisdiction. Our discussion of provisional voting processes appears in chapter 5.
HAVA, in general, provides states with discretion as to the methods of implementing HAVA's identification requirements for first-time mail registrants, such as ensuring that voters comply with the requirements, and, subject to certain limitations, allows states to establish requirements that are stricter than those required under HAVA. According to our state survey, 7 states reported that such HAVA requirements were already covered by existing state legislation or some type of state executive action (such as orders, directives, regulations, or policies); 44 states and the District of Columbia reported that they enacted new legislation or took some type of state executive action (such as orders, directives, regulations, or policies) to address the identification requirements in HAVA for first-time mail registrants. We analyzed state and federal (NVRA) voter registration application forms to determine whether the applications provided instructions on identification requirements for individuals registering in a jurisdiction for the first time. We obtained some state application forms during site visits to local election jurisdictions and others from state Web sites; if a form was not available from those sources, we obtained it directly from the state. Registration forms were those on the Web site or obtained from the states as of January 2006. Our analysis showed that 39 states and the District of Columbia had information on their application forms and 10 states did not provide this information on their forms. The NVRA voter registration form included this information. Figure 12 is an example of a voter registration form that included instructions for first-time mail registrants. During our site visits, we asked local election officials whether they considered registering by mail to include only cases in which someone mails in a single application or to also include mailed-in applications from voter registration drives.
Five local jurisdictions told us that applications received by mail as a result of voter registration drives are not treated as mail-in applications and therefore are not subject to mail registration identification requirements under HAVA; 3 jurisdictions told us that applications submitted by voter registration drives were treated as mail-in applications subject to HAVA's mail registration identification requirements. Election officials in 1 of these jurisdictions told us that under their state law (Pennsylvania) all voters who are voting for the first time in a district must show a valid form of identification, regardless of how they registered to vote. Also, during our site visits we asked local election officials how they processed voter registration applications from first-time mail registrants for the 2004 general election. Election officials reported taking different approaches, many involving mailed communications from election officials sent back to the applicant, particularly if required information was missing. For example, at least 2 large jurisdictions reported that first-time voters who did not mail in identification with their applications were sent letters instructing them to do so. Similarly, officials in 2 jurisdictions in another state said letters were sent to applicants whose applications were incomplete, advising them of the need to provide photo ID and informing them that if they failed to do so, they might have to use a provisional ballot on Election Day, which would be subject to the voter subsequently providing identification. In other jurisdictions, though local election officials reported taking steps to process incomplete applications from first-time voters, they did not necessarily give the applicant a chance to correct the application prior to Election Day. For example, in a medium jurisdiction we visited, first-time voter applicants who did not submit proper identification were to have been given provisional ballots.
However, the election official told us her office did not inform them about this in advance for the 2004 general election. In addition to contacting applicants to inform them of the need to provide identification, as discussed above, 1 jurisdiction we visited told us that it periodically provided to the state MVA a list of applicants who provided driver's license numbers but did not provide identification at the time of registration, as another means to verify the registrant's identity. In this case, the MVA compared the county clerk office's registration list against its list of licensed drivers to see if the name, date of birth, and driver's license number matched, and returned the results to election officials. If all these data elements matched, the election official certified the records and these prospective voters were not required to show identification at the polling place. If a registrant did not provide identification prior to Election Day, local election officials at all 28 sites we visited reported having a system for recording first-time voters who failed to provide identification and transferring that information to a polling site by annotating the poll book. One large jurisdiction, for example, shaded the voter line in the poll book, while another printed the words "ID required" next to the voter's name. With respect to voters who presented themselves at a polling place and did not have identification, election officials at some local jurisdictions we visited described different ways that the voter's provisional ballot could become verified. For example, a jurisdiction in Georgia said that if a voter did not provide identification at the polls, it allowed the voter to vote a provisional ballot and the voter had until 2 days after the election to provide identification. Another jurisdiction in Kansas told us that the voter had until the day that votes were canvassed to provide identification.
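The MVA verification step described above, in which a registration record is certified only when the name, date of birth, and driver's license number all match the MVA's records exactly, can be sketched as follows. The record layout and sample data are hypothetical illustrations of the matching logic, not any agency's actual schema:

```python
def mva_verify(registrations, licensed_drivers):
    """Return the registration records whose name, date of birth, and
    driver's license number all match an MVA record exactly; only these
    registrants would be certified and excused from showing identification
    at the polling place."""
    # Index MVA records by license number for a single lookup per registration.
    by_license = {d["license_no"]: d for d in licensed_drivers}
    certified = []
    for r in registrations:
        d = by_license.get(r["license_no"])
        if d and d["name"] == r["name"] and d["dob"] == r["dob"]:
            certified.append(r)
    return certified

# Hypothetical county registration list and MVA driver list.
regs = [
    {"name": "Jane Doe", "dob": "1975-03-02", "license_no": "D123"},
    {"name": "John Roe", "dob": "1960-07-15", "license_no": "D456"},
]
drivers = [
    {"name": "Jane Doe", "dob": "1975-03-02", "license_no": "D123"},
    {"name": "John Roe", "dob": "1961-07-15", "license_no": "D456"},  # DOB differs
]
certified = mva_verify(regs, drivers)
```

Because all three data elements must match exactly, a single discrepancy (here, a differing date of birth) leaves the registrant uncertified, which is why the jurisdictions described annotating the poll book so those voters would be asked for identification on Election Day.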
Other jurisdictions told us that voters would have until the close of the polls on Election Day to provide identification to election officials. A local jurisdiction in Washington told us that if the voter did not have identification on Election Day, the voter would vote a provisional ballot and election officials would subsequently have the voter's signature matched against the registration application to verify the voter's identity. Citizens generally have numerous opportunities to apply to register to vote. Figure 13 shows several of these opportunities—such as applying at a local election office, at a motor vehicle agency, or through a voter registration drive—and the processes used to submit an application. Problems with applications submitted to MVAs have been identified as a challenge since 1999. Our October 2001 report on election processes found that 46 percent of local jurisdictions nationwide had problems processing applications submitted at MVAs and other public registration sites designated pursuant to NVRA requirements. In its reports to Congress on the impact of NVRA on federal elections in 1999 through 2002, the Federal Election Commission (FEC) found that several states reported problems with election officials receiving applications from MVA offices in a timely manner, resulting in, the FEC stated, "the effective disenfranchisement" of citizens who had applied to vote but whose applications were not processed by Election Day. FEC recommended in both reports that states develop ongoing training programs for personnel in NVRA agencies, such as MVAs. HAVA includes requirements providing that voters who contend that they registered (at MVAs or through other means) in the jurisdiction in which they desire to vote, but whose names are not on the voter registration list for that polling place, be allowed to cast a provisional ballot. HAVA also requires that voters whom an election official asserts are not eligible to vote be permitted to cast a provisional ballot.
Election officials would determine the voter's eligibility under state law and whether the vote should count as part of the vote counting process. From our local jurisdiction survey, we estimate that for the 2004 general election, 61 percent of local jurisdictions had a problem in counting provisional ballots because of insufficient evidence that individuals had submitted voter registration applications at MVAs. In addition, we estimate that 29 percent of local jurisdictions had a problem in counting provisional ballots because of insufficient evidence that individuals had submitted voter registration applications at NVRA agencies other than MVAs. Also, in our September 2005 report on managing voter registration, 4 of the 12 jurisdictions we surveyed reported that election office staff experienced challenges, either to a great extent or some extent, receiving voter registration applications from motor vehicle agencies. They reported taking steps to address the problem by hiring additional staff to handle the volume of applications received and by contacting applicants to obtain correct information. There is evidence that, at least in 1 jurisdiction, election officials took steps since the 2000 general election to address the MVA voter registration issue, though problems persisted for the November 2004 general election. When we revisited in 2005 the same small jurisdiction that we had visited in 2001, election officials reported they were still experiencing problems receiving registration forms from the MVA for all those who registered to vote there, but noted that the process had improved. For example, they said elections staff now have direct access to the MVA database, so they can verify whether someone who claimed to have registered at the MVA actually did so. In our local jurisdictions survey, we estimate that few jurisdictions provided training to MVA or other NVRA agencies.
Specifically, for the 2004 general election, we estimate that 12 percent of local jurisdictions provided training or guidance to MVA offices and an estimated 3 percent provided training to other NVRA entities regarding procedures for distributing and collecting voter registration applications. Large jurisdictions are statistically different from small or medium jurisdictions, and medium jurisdictions are statistically different from small jurisdictions. Specifically, we estimate that 34 percent of large jurisdictions provided training to MVA offices, an estimated 18 percent of medium jurisdictions did so, and an estimated 9 percent of small jurisdictions did this. In addition, large jurisdictions are statistically different from both medium and small jurisdictions in providing training to other NVRA entities. In our October 2001 comprehensive report on election processes nationwide, we identified measures such as improving the training of MVA staff as a means of addressing challenges related to applications received from MVAs. After the November 2004 general election, the National Task Force on Election Reform—composed almost exclusively of officials who served in voter registration and administration of elections capacities— reported that while the NVRA expanded the number of locations and opportunities where citizens can apply to register to vote, supporting the voter registration application process is a secondary duty for entities that do so under this law. The task force report noted that it is a challenge for these entities to provide this service in a consistent manner and to transfer the registrations collected accurately and efficiently to voter registration offices. In our October 2001 report on election processes, some election officials noted that while extending voter registration deadlines gave voters additional chances to register, it shortened the time for processing applications. 
And a few election officials raised concerns about short time frames for processing applications in relation to the possibility of voter fraud if there was insufficient time to verify an applicant's eligibility. For the 2004 general election, the time frame for processing applications had the potential to pose an even greater challenge given the increase in the number of voter registration applications that election officials reported receiving for the November 2004 general election. The conditions that election officials experienced in processing the volume of voter registration applications, such as long hours and lack of time to fully train temporary workers, could have resulted in data entry errors, leaving eligible voters improperly registered and ineligible voters unidentified. During our site visits to local jurisdictions, election officials told us that for the 2004 general election, entering applications in a timely manner was possible—but challenges did arise, and election officials described actions taken to help ensure that voters were properly registered. Furthermore, on the basis of our survey of local election jurisdictions, we estimate that 81 percent of local jurisdictions were able to process applications received just prior to the registration deadline—though we estimate 19 percent of the jurisdictions received applications just prior to the registration deadline that posed problems in entering them prior to Election Day. As shown in figure 14, we estimate that large jurisdictions experienced problems in entering the number of voter registration applications more than small and medium jurisdictions did. Large jurisdictions are statistically different from both medium and small jurisdictions. This may be attributable to larger jurisdictions having larger populations with more registration activity, among other things.
All jurisdictions we visited reported that they were able to enter all eligible applications into the voter registration lists. Nevertheless, most reported it was a challenge to process the large volume of applications received. For example, 1 large jurisdiction we visited reported that on a daily basis it was 30,000 to 40,000 applications behind in data entry. As a result, election officials reported that they hired 80 full-time temporary workers who worked two full-time shifts to enter all eligible applications into the voter registration list used at the polls on Election Day. Election officials in another large jurisdiction told us that they unexpectedly received about 10,000 last-minute registrants. Another large jurisdiction reported it was "swamped" with registration applications right before the registration deadline and was not prepared for the volume of applications submitted. Several jurisdictions required permanent employees to work extended hours or on weekends. To manage registration workloads, other jurisdictions reported hiring temporary workers and recruiting county employees to handle processing workloads. Figure 15 shows the reported spike in voter registration applications received prior to Election Day in 1 large jurisdiction. Some applications were received after the final week allowed for voter registration; these applicants could not be registered for the 2004 general election but were registered for future elections. In our state survey, a few states reported that since the 2000 general election they had increased the time that voters in their states have to register. Although setting registration deadlines close to Election Day provides citizens increased time to apply to register, reducing the number of days from the registration deadline to Election Day can make it difficult for election officials to ensure that all eligible voters are included on the voter registration list.
Specifically, in our state survey, 3 states (Maryland, Nevada, and Vermont) reported changing their registration deadlines for the November 2004 general election. For the 2000 general election, Maryland's registration deadline had been 25 days before the election, but for the 2004 general election, the deadline for registration was 21 days before the election, extending the time that voters could register by 4 days. Nevada's 2000 registration deadline (9 p.m. on the fifth Saturday preceding any primary or general election) remained the same for mail-in registrations. However, for the 2004 general election, the state extended in-person registration by 10 days. Vermont's voter registration deadline changed from the second Saturday before the election to the second Monday before the election, allowing voters 2 more days to register. Appendix VI provides information on state laws pertaining to registration deadlines. On the basis of our local jurisdiction survey, entering all voter registration applications in the time between the registration deadline and the November 2004 general election posed problems for large jurisdictions more than it did for small and medium jurisdictions. Specifically, we estimate that 41 percent of large jurisdictions experienced problems, compared with 18 percent of medium jurisdictions and 13 percent of small jurisdictions. Large jurisdictions are statistically different from both medium and small jurisdictions. Inasmuch as large jurisdictions have more potential registrants, it is reasonable to expect that they would experience more difficulty entering all voter registration applications by Election Day than smaller ones would. For the 2004 general election, while many states reported having registration deadlines that were 20 to 30 days prior to Election Day, a few states reported having registration deadlines that were 10 days or less prior to Election Day, and some states reported having same-day registration.
Four states (Alabama, Maine, New Hampshire, and Vermont) reported having registration deadlines that were 10 days or less prior to Election Day. Idaho, Maine, Minnesota, New Hampshire, Wisconsin, and Wyoming reported having Election Day registration at the polling place. Having sufficient staff to process the increased number of voter registration applications was an issue for large local election jurisdictions. On the basis of our nationwide survey, most local jurisdictions (an estimated 89 percent) had a sufficient number of election workers (whether full-time, part-time, or temporary) who were able to enter registration applications in a timely manner. However, we estimate that 11 percent had an insufficient workforce for this task. Large jurisdictions experienced problems with insufficient election workers to enter voter registration applications more than small and medium jurisdictions did, as shown in figure 16. The difference between large jurisdictions and both medium and small jurisdictions is statistically significant. This difference could be attributable to larger jurisdictions having a greater need for additional staff. Several jurisdictions we visited reported that there was a price to pay for the large volume of registration applications received, such as the need to hire temporary workers or extend the hours of permanent employees in order to process voter registration applications for the November 2004 general election. Election officials in several jurisdictions we visited commented on the financial impact of the temporary workers hired, overtime hours, and the purchase of needed equipment, such as computers. In our September 2005 report on managing voter registration, we noted that all but 1 of the 14 jurisdictions we surveyed faced challenges receiving and processing voter registration applications during the 2004 general election and took various steps to address them.
For example, election officials in 7 of the 14 jurisdictions reported challenges checking voter registration applications for completeness, accuracy, or duplicates. At that time, as in our more recent site visits, jurisdictions reported hiring extra staff, among other things, to address these challenges. Providing training to data entry staff and tracking applications give election officials ways to manage the flow of applications for processing and can help ensure that voter registration applications are appropriately entered into the voter registration list. As part of our inquiry into the methods jurisdictions used to enter completed registration application data into voter lists, our questionnaire to local election jurisdictions asked how they went about accomplishing this task. On the basis of our survey, we estimate that 76 percent of all local jurisdictions provided training to data entry staff about the processing and inputting of registration applications. Seventy-five percent of small jurisdictions provided this training, as did 73 percent of medium jurisdictions and 94 percent of large jurisdictions. Large jurisdictions are statistically different from both medium and small jurisdictions. Another activity that election officials undertook when entering completed registration applications included tracking incoming registrations. The results of our survey show that over half of local jurisdictions tracked incoming registration applications to ascertain the total number received, the number entered into registration lists, and the number not processed because of omission or application error, and to identify ineligible voters based on age or residence. Again, large jurisdictions are statistically different from both medium and small jurisdictions.
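The tracking activity described above, tallying applications received, entered, not processed because of omission or error, and ineligible based on age or residence, can be sketched as a simple intake tally. The field names, completeness checks, and eligibility thresholds below are illustrative assumptions rather than any jurisdiction's actual rules:

```python
def track_applications(applications, min_age=18, jurisdiction="Springfield"):
    """Tally incoming registration applications the way surveyed
    jurisdictions described: total received, entered into the list,
    not processed for omission/error, and ineligible by age or residence."""
    tally = {"received": 0, "entered": 0, "omission_or_error": 0, "ineligible": 0}
    for app in applications:
        tally["received"] += 1
        if not app.get("name") or not app.get("signature"):
            tally["omission_or_error"] += 1   # incomplete application
        elif app["age"] < min_age or app["residence"] != jurisdiction:
            tally["ineligible"] += 1          # fails age or residence test
        else:
            tally["entered"] += 1             # added to the registration list
    return tally

# Hypothetical day's intake: one complete, one unsigned, one underage.
apps = [
    {"name": "A", "signature": True, "age": 34, "residence": "Springfield"},
    {"name": "B", "signature": None, "age": 40, "residence": "Springfield"},
    {"name": "C", "signature": True, "age": 17, "residence": "Springfield"},
]
tally = track_applications(apps)
```

Keeping these counts in balance (received equals entered plus rejected categories) is what lets an office confirm that no application was silently dropped during a pre-deadline surge.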
Table 2 provides information on the different activities that local election jurisdictions undertake when entering completed registration applications into the official voter registration list. Nongovernmental organizations in many states sponsored voter registration drives for the November 2004 general election in an effort to increase the number of citizens eligible to vote. Voter registration drives pose a dilemma for some election officials. On one hand, voter registration drives provide another means by which persons can apply to register to vote. On the other hand, they pose challenges in assessing the validity of submitted registrations and in processing large numbers of registrations submitted close to the registration deadline. For the November 2004 general election, election officials in some jurisdictions we visited told us they encountered challenges validating and processing the large number of voter registration applications obtained through voter registration drives that employed either paid staff (where workers are paid for each voter registration application completed and submitted to election authorities prior to Election Day) or used volunteers. For example, Wisconsin’s state legislative audit bureau conducted an evaluation of the 2004 general election in its state. It found, among other things, that many registration deputies appointed for the November 2004 general election worked for special interest groups or political parties interested in increasing voter turnout. The evaluation states that investigators found that registration deputies had submitted 65 falsified names for the 2004 general elections and that district attorneys in two counties charged four individuals with submitting fraudulent registration forms. According to the evaluation report, these registration deputies were reportedly paid by their employer on a per registrant basis, which may have encouraged them to submit fraudulent registration forms to increase their compensation. 
Such questions about the integrity of the voter registration process were of particular concern in battleground states such as Florida, Ohio, and Pennsylvania, where margins of victory were slim and accurate tallies of eligible votes were therefore of consequence. In our state survey several states reported that their state election provisions do not address the issue of voter registration drives that involve payment per application, while relatively fewer states reported prohibiting them outright. Specifically, 19 states and the District of Columbia reported that state laws or executive actions are silent about these drives (that is, it is left up to each local jurisdiction to decide). However, 1 of these 19 states further reported that while its state law does not address voter registration drives that involve payment per application, the conduct of such drives is not left up to each local jurisdiction—the local jurisdictions have no authority in regulating such matters. Sixteen states reported that voter registration drives are allowed either by state law or by executive action, 13 states reported that they are prohibited by state law, and 2 states did not respond. In addition, our nationwide survey of local election jurisdictions inquired about their awareness and handling of registration drives, and any actions taken to deter fraudulent applications from being submitted by persons or groups participating in paid registration drives, and we discussed this matter during our site visits to selected jurisdictions as well. In our nationwide survey, we estimate that 91 percent of all local jurisdictions were not aware of such drives, while 9 percent were aware. About a third (an estimated 32 percent) of the large jurisdictions—those with populations greater than 100,000—were aware of such drives. We also queried local election jurisdictions whether any names on voter registration applications appeared to be fraudulent. 
On the basis of our local survey, nearly all jurisdictions—an estimated 95 percent—did not have any names that appeared to be fraudulent. Of the estimated 5 percent of local election jurisdictions that had voter registration applications that appeared to have fraudulent names, an estimated 70 percent identified receiving 10 fraudulent applications or fewer, an estimated 14 percent identified receiving more than 10 fraudulent applications, and an estimated 16 percent did not know the volume of fraudulent applications received. The distribution of the volume of fraudulent applications received is of a smaller subset of our total sample and therefore has larger confidence intervals than other estimates. Figure 17 shows the extent to which local jurisdictions identified experiencing fraudulent voter registration applications. In addition, our prior work raised concerns about the quality of voter registration applications obtained through voter registration drives. In our September 2005 report on managing voter registration, we reported that among 12 of 14 local jurisdictions we surveyed, processing applications received from voter registration drives sponsored by nongovernmental organizations posed a challenge to election officials because applications were incomplete or inaccurate. During our site visits, we sought local officials’ views on a host of issues related to the integrity of the voter registration process, including how or whether voter registration drive applications were tracked, how many registration applications were submitted by volunteer or paid registration drives in calendar year 2004 leading up to the November election, and how their jurisdictions dealt with irregular applications. (We defined irregular applications as those using fictitious names, unusual dates of birth, nonexistent addresses, or fake signatures or party affiliations.)
We also asked election officials whether they had the ability to determine if individuals were using false or fictitious names. Many local jurisdictions that we visited told us that they did not have specific procedures to ensure that voter applications obtained through voter registration drives were collected or tracked. This was because, in some cases, the application forms could simply be downloaded from the Internet. One large jurisdiction that did not track applications coming from various sources told us it planned to begin doing so, using a drop-down menu in its statewide voter registration system that will allow staff to record the information. Overall, at local jurisdictions that we visited where applications from voter registration drives were tracked or at least estimated, the number and proportion of applications submitted through voter registration drives relative to total registrations—and the number and proportion considered irregular—varied widely. For example, in 1 large jurisdiction, election officials reported that approximately 30,000 registrations received in 2004—about 90 percent—were submitted by registration drives. Of these, the election officials estimated that only about 50 applications were irregular—that is, they were unreadable, had questionable signatures, were incomplete, or had invalid addresses. The election official from this jurisdiction noted that it appeared some of the applications had been filled out by individuals who took addresses from the phone book and changed them slightly. In another large jurisdiction in a battleground state, local election officials estimated that 70,000 registration applications were submitted by volunteer or paid registration drives, and here too irregularities were noted—such as fictitious names and fake signatures— but election officials stated that these irregular applications represented a “low” percentage of the total. 
In other large jurisdictions, fewer voter registration applications were received; for example, 1 jurisdiction in another battleground state reported receiving 2,500 such applications and estimated that about 20 percent of them were irregular. Two medium jurisdictions we visited reported receiving a few hundred voter registration applications or fewer, and both reported that there were no irregularities. One small jurisdiction did not report any voter registration drives taking place. When we asked local election officials during our site visits whether they had the ability to determine whether a person actually tried to vote using a false or fictitious name, responses were mixed: Election officials in 3 large jurisdictions we visited told us they did not have the ability to make this determination. An election official in another large jurisdiction stated that “there is no way to know if someone falsely registered has voted.” Others, however, reported that they were able to determine whether false identities had been used. For example, in 1 large jurisdiction, election judges check voter IDs and signatures at the polls to prevent the use of fictitious identities. One large jurisdiction verifies voter registration information against Social Security and driver’s license information and checked voter history internally; election officials in this jurisdiction reported that they believe anyone who attempted to use a false or fictitious name in the November 2004 general election would have been caught. And in another jurisdiction, election officials told us that if an individual attempted to vote using a fictitious name that was not in the poll book, that individual would be issued a provisional ballot—which would not be verified if it was determined that the name was indeed fictitious. Election officials in some jurisdictions we visited said there was no way to know whether the poll book already contained fictitious names.
When asked what steps, if any, local jurisdictions we visited took to notify law enforcement or other legal authorities on irregular registration applications received, most reported taking some actions. For example, 1 large jurisdiction we visited reported providing irregular registration applications to the Federal Bureau of Investigation (FBI) and the district attorney’s office and to the Secretary of State’s office for investigation. Both the FBI and the district attorney declined to pursue the matter on the ground that they were understaffed, the jurisdiction reported. The Secretary of State’s office concluded that while the registration applications were fraudulent or fictitious, a purposeful fraud was not committed and that the people completing the fake applications were not trying to alter an election, but to obtain money by working for the registration drives. Four other jurisdictions that we visited said they contacted appropriate state or federal authorities, such as state law enforcement, a State’s Attorney, a state election enforcement agency, or the FBI, but election officials did not know whether any action had been taken. In addition, in our June 2005 report on maintaining voter registration lists, we reported that election officials in seven locations we visited referred reported instances of voter registration fraud allegations to appropriate agencies, such as the district attorney and the U.S. Attorney for investigation. Also, EAC issued voluntary guidance in July 2005 to help states implement HAVA. EAC’s guidance suggested that when the voter registration verification process indicates the possible commission of an election crime, such as the submission of false registration information, such matters should be forwarded to local, state, and federal law enforcement authorities for investigation. 
When we asked local jurisdictions that we visited whether they had procedures in place for registration groups to follow when submitting applications, election officials in most jurisdictions reported that some type of system was in place to control registration drives. For example, 1 large jurisdiction reported that it had a program to train volunteer field registrars to register citizens on behalf of the county registrar; these field registrars were to comply with all registration rules and laws and were required to be registered voters and noncandidates, provide proof of identity, complete a 2-hour training course, and pass a brief examination before taking an oath. In addition, this same jurisdiction required any group requesting more than 50 voter registration forms to provide a plan to the state elections department for when, where, and how it would distribute the forms—all of which were numbered so that election offices could track them. Some jurisdictions reported, however, that no procedures were in place that registration groups had to follow. One large jurisdiction, for instance, reported that anyone can run a voter registration drive simply by downloading the voter registration form from the election office Web site. On the topic of what actions, if any, local jurisdictions had taken to deter paid registration drives from submitting fraudulent registration applications, from our nationwide survey, we estimate that roughly half of the estimated 9 percent of local jurisdictions that were aware that paid registration drives were occurring provided training or guidance on how to accurately complete an application, and an estimated 41 percent of these jurisdictions notified the persons or groups engaged in paid registration drives that they had submitted incomplete, inaccurate, or fraudulent applications.
In addition, on the basis of our survey, 41 percent of local jurisdictions that were aware of the drives helped prevent submission of incomplete, inaccurate, or fraudulent applications by working with persons and groups engaged in paid registration drives. In a couple of jurisdictions, election officials told us they took other steps, such as meeting with registration drive organizers and contacting the registrant identified on the application, to help prevent fraudulent registrations. A jurisdiction in Colorado reported that numerous complaints had been received from voters who claimed to have completed registrations through a drive but for whom the county had no record of application. The jurisdiction reported that Colorado’s legislature passed a bill pertaining to voter registration drives. Subsequently, Colorado enacted legislation effective in June 2005 that, among other things, requires voter registration organizers to file a statement of intent with the Secretary of State, fulfill training requirements pursuant to rules promulgated by the Secretary of State, and, in general, submit or mail registration applications within 5 business days. In addition, the 2005 state legislation provides that voter registration organizers may not compensate persons circulating voter registration application forms based on the number of applications distributed or collected. The Secretary of State issued rules in November 2005 implementing such requirements, including rules that require registration drive organizers to file a statement of intent with the Secretary of State and require persons circulating such application forms to ensure that the tear-off receipt on the application is completed and given to the applicant. Election officials in 17 jurisdictions we visited told us that they had procedures in place for managing voter registration drives to some extent. 
For example, in 1 medium jurisdiction, election officials stated that groups or persons seeking to run registration drives must be trained and deputized by the registrar’s office. In 43 of the 50 states and the District of Columbia, successfully registering to vote prior to Election Day is a prerequisite for casting a ballot and having that ballot counted. States are still working to fully implement HAVA’s voter registration requirements. As states gain more experience with their statewide voter registration and data matching systems and processes, it is likely their systems and processes will evolve. Given the continuing challenge of maintaining accurate voter registration lists in a highly mobile society, this is to be expected. For election officials, the voter registration process presents a continuing challenge in balancing ease of registration for eligible voters with sufficient internal controls to help ensure that only eligible voters are added to and remain on the voter registration rolls. To maintain accurate voter registration lists, election officials must use and rely upon data from a number of sources, such as state death and criminal records and applications from MVAs. HAVA’s requirements for creating and maintaining statewide voter registration lists and its identification requirements for first-time voters who register by mail were designed to help improve the accuracy of voter registration lists and reduce the potential for voter fraud. Specifically, HAVA’s requirement for creating and maintaining a statewide voter registration list was designed to improve voter registration list accuracy by identifying duplicate registrations within the state and identifying those ineligible to vote because of death, criminal status, or other reasons. HAVA requires states to match the names and other identifying information on their statewide voter registration lists against death and felony records in the state.
States may voluntarily match their voter registration lists with the voter registration lists, death, felony, or other records in other states. In the absence of voluntary cross-state matching, it is possible to fully implement HAVA’s statewide voter registration provisions and still have ineligible persons on the state’s voter registration rolls on Election Day, such as those who died out of state or were convicted in federal courts or other states. Nor would implementing HAVA’s statewide matching requirements identify persons who are registered to vote in more than one state. Although some states report sharing registration and eligibility information among states, the practice was generally limited to neighboring states or dependent upon a registrant indicating that he or she previously resided in another state. HAVA includes a provision that requires certain first-time voters who register by mail to provide identification as proof of their identity and eligibility to vote in the jurisdiction. Which voters must present identification either with their mail application or when they vote for the first time depends upon how states and local jurisdictions define “mail registrations” subject to HAVA’s identification requirement. In our site visits, we found that some local jurisdictions considered registration applications submitted by registration drives to be mail registrations subject to HAVA’s identification requirement for first-time voters, while other jurisdictions did not consider such registrations to be mail registrations subject to the identification requirement. This distinction has importance on Election Day for first-time voters who registered through registration drives. In those jurisdictions that considered mail registrations to include registration drive applications, first-time voters who registered through registration drives would be required to show an acceptable form of identification at the polls on election day. 
If they did not do so, they were to be permitted to cast a provisional ballot, but the ballot would be counted only upon a state determination that the voter was eligible to vote under state law. In contrast, in those jurisdictions that did not consider mail applications to include those submitted through registration drives, first-time voters would not be treated as subject to the HAVA identification requirement and could generally cast a regular ballot that would be counted with all other regular ballots. Election jurisdictions continue to face challenges in obtaining voter registration applications from NVRA entities, including MVAs. Some local jurisdictions have established processes to manage receipt of voter registration applications from these entities, such as training for staffs of these agencies. To the extent that NVRA entities do not track and forward to the appropriate election jurisdiction the voter applications that they have received, voters may be required to cast provisional ballots instead of regular ones because their names do not appear on the voter registration lists. In addition, the provisional ballot will not be counted if the voter’s valid registration cannot be verified. Our survey of local election jurisdictions found that many local jurisdictions encountered problems counting provisional ballots in cases where voters claimed to have registered at an MVA or some other NVRA entity but there was insufficient evidence that the voter had submitted a registration application at the MVA or NVRA entity. A surge of last-minute registrations in many jurisdictions prior to the November 2004 election illustrated the challenge of balancing ease of registration with assurance that only eligible voters are on the registration rolls. Some election jurisdictions reported registration drive groups submitted hundreds or thousands of applications just before the registration deadline.
When the registration deadline is close to Election Day, processing these applications presents a tremendous challenge in checking applications for completeness, having time to contact applicants to obtain missing information, verifying applicants’ eligibility to vote, and adding the name of eligible voters to the registration list. Some jurisdictions reported hiring and training temporary employees to process the applications. The enormous workload and time constraints associated with processing large numbers of last-minute applications can increase the chances that errors will be made in determining voter eligibility, and the names of some eligible voters may not be added to the list in time for Election Day. A growing number of citizens seem to be casting their ballots before Election Day using absentee and early voting options that are offered by states and local jurisdictions. However, circumstances under which these voters vote and the manner in which they cast their ballots before Election Day differ because there are 51 unique election codes. Because of the wide diversity in absentee and early voting requirements, administration, and procedures, citizens face different opportunities for obtaining and successfully casting ballots before Election Day. To collect information about absentee and early voting options, in our state and local surveys we asked questions about each of these voting options separately. We defined absentee voting as casting a ballot, generally by mail, in advance of Election Day (although ballots could be returned through Election Day and dropped off in person). We defined early voting as generally in-person voting in advance of Election Day at specific polling locations, separate from absentee voting. However, there is some measure of overlap between absentee voting and early voting reported by the states, especially where states have reported in-person absentee voting to be, in effect, early voting. 
This may be due, in part, to the fact that the relational statutory framework for early voting and absentee voting varies among the states—with some states, for example, providing early voting within the context of the state’s absentee voting provisions, while others, for example, provide for absentee voting within the context of the state’s early voting provisions. Similarly, local jurisdictions that completed our survey may also have had some measure of overlap in relation to their practices for absentee and early voting. During our interviews with local election officials in jurisdictions that offered early voting, we were able to obtain more detailed information about absentee and early voting procedures and practices for those jurisdictions. On the basis of our site visits to jurisdictions that had early voting, absentee and early voting were similar in some ways and distinct in others. Election officials described to us that when voters cast absentee ballots, they typically followed a specific process including applying for and receiving the ballot and returning their marked ballots before Election Day or, in some cases, returning the ballot up until the close of polls on Election Day. According to the description that election officials gave us, early voting was distinct from in-person absentee voting in that in-person absentee voters usually applied for and received a ballot, and cast it at the registrar’s office, while early voters reported to a voting location where early voting staff verified their eligibility to vote, usually by accessing the jurisdiction’s voter registration list. Also, early voting usually did not require citizens to provide an excuse, as some states required for absentee voting, and it was usually allowed for a shorter period of time than absentee voting. 
For example, in the 14 jurisdictions we visited in 7 states that reported having early voting, the time frame allowed for absentee voting was almost always at least twice as long as that for early voting (e.g., Colorado allowed 30 days for absentee voting and 15 days for early voting). Early voting was similar to Election Day voting in that the voting methods were usually the same. However, according to election officials in jurisdictions we visited that had early voting, voters were not limited to voting in their precinct because all early voting locations had access to a complete list of registered voters for the jurisdiction (not just precinct specific) and had appropriate ballots that included federal, state, and precinct-specific races. Appendix VII provides a description of selected characteristics of the early voting jurisdictions we visited. In this chapter, we will discuss changes since 2000 and challenges related to (1) absentee voting in general, (2) overseas military and civilian absentee voting, and (3) early voting. Some states have increased the opportunities for citizens to vote absentee or early. For the November 2004 general election, 21 states reported that they no longer required voters to provide excuses such as being ill, having a disability, or being away from the precinct on Election Day to vote absentee—an increase of 3 states from the November 2000 general election. Three states reported expanding their provision for permanent absentee status (usually reserved for the elderly or those with disabilities), allowing voters to receive absentee ballots for a state-specified time period, such as 4 years. One state reported eliminating its requirement that mail-in absentee voters provide an attestation from a notary or witness for their signature along with the completed absentee ballot. Eliminating the need for a notary or witness removes a potential barrier to an absentee ballot being counted. 
According to election officials in 2 jurisdictions in 1 state we visited that required a notary or witness signature, an absentee ballot may not be counted if voters neglect to have their ballots witnessed or notarized. Furthermore, HAVA amended the Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA) to, among other things, extend the period of time that can be covered by a single absentee ballot application by absent uniformed service voters and certain other civilian voters residing outside of the United States from the year during which the application was received to a time period covering up to the two next regularly scheduled general elections for federal office. Election officials reported facing some of the same challenges in the November 2004 general election that they had identified to us for the November 2000 general election, and they also reported some new challenges. Continuing absentee voting challenges included (1) receiving late absentee voter applications and ballots; (2) managing general workload, resources, and other administrative constraints; (3) addressing voter error issues such as unsigned or otherwise incomplete absentee applications and ballot materials; and (4) preventing potential fraud. Election officials also told us that they encountered new challenges in the November 2004 general election. Some election officials said that the increased early voter turnout during this election resulted in long lines. In some local jurisdictions we visited, election officials said that factors such as inadequate planning on their part, limitations on types of facilities that could be used for early voting locations, and funding constraints on hiring more staff or acquiring more voting locations affected their management of large early voter turnout. In addition, some election officials reported that they encountered a challenge handling disruptive third parties as they attempted to approach early voters who were in line to vote. 
Another challenge could develop as a result of a 2002 HAVA amendment to UOCAVA. In an effort to help make registration and voting easier for absent uniformed service voters and certain other civilian voters residing outside of the United States, this 2002 amendment, as noted above, extended the period of time that can be covered by a single application from the year during which the application was received to a time period covering up to the next two subsequent general elections for federal office. Election officials in 4 jurisdictions we visited told us that a possible unintended consequence of this amendment could be that when uniformed services personnel are reassigned to other duty posts, absentee ballots may not be sent to the correct address for subsequent general elections. Even with a 2005 revision to the ballot request form whereby voters can indicate that they want ballots for one federal election only, election officials in 3 of these jurisdictions were concerned that many absentee ballots would be returned as undeliverable. Absentee voting allows citizens the opportunity to vote when they are unable to vote at their precinct on Election Day. Although availability, eligibility requirements, administration, and procedures vary across the 50 states and the District of Columbia, absentee voting generally follows a basic process. As figure 18 shows, this process included four basic steps for the November 2004 general election. Jurisdictions we visited typically provided absentee ballot applications that registered voters used to request absentee ballots in a standard state or jurisdiction form, as shown in figure 19. According to our state survey, state election officials reported that registered voters could visit or write their local election office, or in some cases visit a state or local election Web site, to obtain an application or learn what information was required to request an absentee ballot.
State election officials reported registered voters could return a completed absentee ballot application via the U.S. mail or in a variety of other ways as allowed by state absentee ballot provisions. Also, some election officials in jurisdictions we visited told us that voters could complete any part of the absentee voting process in person at their local elections office. Table 3 shows the various options allowed by states for requesting and returning absentee ballot applications. However, it is important to note that particular local jurisdictions might not have offered all of the options described below. According to our state survey results, states reported that applicants could find out the status of their absentee ballot application after it was submitted and offered at least one of several ways, including telephoning a state or local jurisdiction office, telephoning a hotline or toll-free number, or e-mailing a state or local jurisdiction office. For example, in 49 states and in the District of Columbia, applicants could telephone a state or local jurisdiction office, and in 47 states and in the District of Columbia, applicants could e-mail a state or local jurisdiction office to find out their absentee ballot applications’ status. Thirty-nine states and the District of Columbia notified the applicant if the application was rejected. While absentee ballots are generally provided to the voter through the mail, unless voting in person, on the basis of our survey of a representative sample of local jurisdictions nationwide, some jurisdictions provided absentee ballots using fax and e-mail. Specifically, for the November 2004 general election, we estimate that 17 percent of local jurisdictions provided absentee ballots by fax, and 4 percent of local jurisdictions provided absentee ballots by e-mail. On the basis of our discussions with election officials in jurisdictions we visited, absentee ballots are generally returned through the mail.
Election officials in most jurisdictions we visited said that voters used a combination of envelopes for returning completed absentee ballots so that voters’ identities would be kept separate from the ballots they were casting. For example, a voter would place the completed ballot in a secrecy (inner) envelope, which would then be placed in an outer envelope. The secrecy envelope would be to ensure that the voted ballot was not linked to the voter, while the voter’s affidavit information, such as a name, address, and signature, needed to certify that the voter was eligible to vote, would be marked on the outer envelope. Election officials in some jurisdictions provided examples of the envelopes used to return absentee ballots. One of these examples had a separate affidavit envelope, which was to be placed in a pre-addressed return envelope and mailed to the local elections jurisdiction. Other examples allowed the voter to include the affidavit information on the back of the pre-addressed return envelope. Once the local elections jurisdiction certified that the absentee ballots could be counted using the affidavit information, election officials in jurisdictions we visited told us that they removed the secrecy envelope (with the voted ballot sealed inside) and set it aside for counting. Figure 20 shows examples of absentee ballot return envelopes and the inclusion of affidavit information. In our survey of state election officials, we asked whether absentee voters were able to find out the status of their submitted absentee ballots in various ways.
According to our state survey, 44 states and the District of Columbia reported that absentee voters were able to telephone a state or local jurisdiction office, 32 states and the District of Columbia reported that absentee voters were able to e-mail a state or local jurisdiction office, 16 states reported that absentee voters could telephone a hotline or toll-free number, and 5 states reported that absentee voters’ ballot status was available via a Web site. Furthermore, 16 states reported that either state or local jurisdictions would notify the voter if the absentee ballot was not counted. However, 6 states reported that they do not allow voters to check the status of their absentee ballots at all. For example, Vermont reported that state law does not allow voters to find out whether or not the absentee ballot was counted. Kentucky reported that it does not track whether or not an individual voter’s ballot was counted because linking a voted ballot back to a specific voter violates that voter’s right to a secret ballot. A few states reported changes to their requirements with respect to absentee voting by (1) no longer requiring a reason or excuse for voting absentee; (2) eliminating the need for a mail-in absentee voter to have a notary or witness for the voter’s signature to accompany the ballot; and (3) not limiting permanent absentee voting status to individuals with disabilities or the elderly. According to our state survey regarding the November 2004 general election, all 50 states and the District of Columbia had some provisions allowing registered voters to vote before Election Day, but not every registered voter was eligible to do so. Twenty-one states reported allowing voters to vote absentee for the November 2004 general election without first having to provide a reason or excuse.
The other 29 states and the District of Columbia reported requiring voters to meet one of several criteria, or “excuses,” to be eligible to vote before Election Day, such as having a disability, being elderly, or being absent from the jurisdiction on Election Day. The following are examples of excuses that some states required:

- absent from the state or county on Election Day;
- a member of the uniformed services or a dependent;
- a permanent or total disability;
- ill or having a temporary disability;
- over a certain age, such as 65;
- at a school, college, or university;
- employed on Election Day in a job for which the nature or hours prevent the individual from voting at his or her precinct, such as an election worker; and
- involved in emergency circumstances, such as the death of a family member.

In our survey of local jurisdictions, we asked about problems encountered when processing absentee ballot applications. As shown in figure 21, we estimate that 9 percent of local jurisdictions received absentee applications that did not meet the excuse required by law, in states where excuses were required. The issue of applicants not meeting the required excuse was more of a problem for large jurisdictions than for small or medium jurisdictions. According to our state survey, the number of states that allowed absentee voting without an excuse increased from 18 in 2000 to 21 in 2004. Since November 2004, 2 more states reported that they have eliminated their excuse requirement. Specifically, during visits to local jurisdictions in New Jersey, election officials told us that state law had changed since the November 2004 general election. According to these officials, no-excuse absentee voting was adopted by the New Jersey legislature and became effective in July 2005. Ohio also amended its absentee voter provisions, effective January 2006, to provide for no-excuse absentee voting.
Election officials in 2 jurisdictions in 1 state we visited told us that if voters returned a completed (voted) ballot without having the signature notarized or affirmed by a witness, the vote would be disqualified and not counted. For the November 2004 general election, according to our state survey, 12 states reported requiring that mail-in absentee ballots contain attestation by a notary or witness for a voter’s signature to accompany the absentee ballot. From the November 2000 election to the November 2004 election, Florida was the only state that reported in our state survey that it had dropped the requirement that mail-in absentee ballots contain attestation by a notary or witness for a voter’s signature. Permanent absentee voting, which typically was available to individuals with disabilities or the elderly, was another way some states sought to help enfranchise certain categories of voters. Permanent absentee status, where offered, generally allowed the voter to apply for mail-in absentee ballots once (rather than for each separate election) over a specified time period. State requirements dictated when and how often a voter must apply for permanent absentee status. For example, for the November 2004 general election, in a New Jersey jurisdiction that we visited, election officials told us that state law required those eligible for permanent absentee status to apply at the beginning of the calendar year to receive absentee ballots for that year. According to the absentee ballot application provided by this jurisdiction, a voter’s permanent absentee status remains in effect throughout that year unless the voter notifies the election office otherwise. An election official in a Pennsylvania jurisdiction we visited said that his state allowed permanent absentee voters to apply once every 4 years. In this state, permanent absentee voters were to receive absentee ballots for all elections during the 4-year period, according to the election official. 
In 2 Washington jurisdictions we visited, election officials told us that any voter could qualify for permanent absentee status for all future elections (e.g., no time period specified). In one of these Washington jurisdictions, election officials provided a copy of the permanent absentee application instructing voters that their permanent absentee status would be terminated upon the (1) voter’s written request, (2) cancellation of the voter’s registration record, (3) death or disqualification, or (4) return of an ongoing absentee ballot as undeliverable. Our state survey results showed that since the November 2000 general election, 3 states (California, Rhode Island, and Utah) reported state changes that expanded, in some manner, the use of permanent absentee voting. For example, California reported changes for the November 2004 election that allowed any voter to apply for and receive permanent absentee status. For the November 2000 general election, California previously reported that only certain categories of voters with disabilities (e.g., blind voters) were eligible for permanent absentee status. Overall, the results of our state survey showed that at the time of the November 2004 general election, 17 states reported having some provision for permanent absentee status, 32 states and the District of Columbia reported that they did not provide for permanent absentee status, and Oregon reported conducting its election entirely by mail, making permanent absentee status unnecessary in this state. Appendix VIII provides information on states’ requirements for no-excuse absentee voting and witness or notary signature provisions for the November 2000 and 2004 general elections and shows where changes occurred. No other states reported changes to their permanent absentee requirements since the November 2000 general election.
The results from our state survey show that deadlines for voters both to apply for absentee ballots and to return them to local jurisdictions to be counted differed among states. According to our state survey for the November 2004 general election, 47 states and the District of Columbia reported that they had absentee ballot application deadlines that ranged from Election Day (5 states: Connecticut, Maine, New Jersey, Ohio, and South Dakota) to 21 days before Election Day (Rhode Island). Three states (Florida, New Hampshire, and Oregon) reported having no absentee ballot application deadline, although ballots in these states had to be returned by the close of polls on Election Day. With respect to state deadlines for returning absentee ballots, many states reported having more than one deadline to correspond with differing methods of returning such ballots to election officials. In our state survey, 44 states reported having provisions requiring that absentee ballots be returned by or on Election Day; 7 states reported having provisions requiring that absentee ballots be returned a certain number of days before Election Day; and 8 states and the District of Columbia reported having provisions allowing mailed absentee ballots to be returned a certain number of days after Election Day, if such ballots were postmarked by a specified date. For example, for the November 2004 general election, Alaska reported two deadlines: (1) mail-in absentee ballots were to be received by close of business on the 10th day after the election when postmarked on or before Election Day, and (2) in-person absentee ballots were to be delivered by 8:00 p.m. on Election Day. Also, according to our state survey, Nebraska reported that for absentee ballots returned by mail, the deadline changed from no later than 2 days after Election Day for the November 2000 general election to the close of polls on Election Day for the November 2004 general election.
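The Alaska deadlines reported above reduce to two simple date-arithmetic rules. The sketch below is a minimal illustration of those two rules only, with the November 2004 Election Day hard-coded; it is not a general model of any state's deadline law.

```python
from datetime import date, datetime, timedelta

ELECTION_DAY = date(2004, 11, 2)  # November 2004 general election

def mail_ballot_timely(postmark: date, received: date) -> bool:
    """Alaska mail-in rule as reported: postmarked on or before Election Day
    and received by close of business on the 10th day after the election."""
    return postmark <= ELECTION_DAY and received <= ELECTION_DAY + timedelta(days=10)

def in_person_ballot_timely(delivered: datetime) -> bool:
    """Alaska in-person rule as reported: delivered by 8:00 p.m. on Election Day."""
    deadline = datetime(2004, 11, 2, 20, 0)
    return delivered <= deadline

# A ballot postmarked on Election Day but received a week later is still timely;
# one postmarked the day after Election Day is not.
print(mail_ballot_timely(date(2004, 11, 2), date(2004, 11, 9)))   # True
print(mail_ballot_timely(date(2004, 11, 3), date(2004, 11, 9)))   # False
```

Encoding both rules separately mirrors how states reported multiple deadlines corresponding to different return methods.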
According to our state survey, these deadlines may be different for absent uniformed service voters and certain other civilian voters residing outside the United States, a subject that will be discussed later in this chapter. In our October 2001 comprehensive report on election processes, we reported that election officials for the 2000 general election identified receiving applications and ballots after state statutory deadlines as a challenge. According to our nationwide survey, local jurisdictions encountered similar problems with processing absentee ballot applications and absentee ballots for the November 2004 general election. More specifically, on the basis of our survey, we estimate that 55 percent of local jurisdictions received absentee ballot applications too late to process. We also estimate that 77 percent of local jurisdictions encountered problems in processing absentee ballots because ballots were received too late. Furthermore, we asked jurisdictions about which problems were encountered most frequently. An estimated 25 percent of local jurisdictions encountered the ballot lateness problem most frequently. Figure 22 shows that medium and large jurisdictions encountered lateness with absentee ballots more than small jurisdictions did. Appendix VIII summarizes states’ deadlines for receiving domestic mail-in absentee ballot applications and absentee ballots. Election officials in the local jurisdictions we visited told us that they tried to approve applications and mail absentee ballots to voters as quickly as possible, assuming that the ballots had been finalized and printed. In 8 jurisdictions we visited in 5 states (Colorado, Kansas, New Mexico, Pennsylvania, and Washington), election officials said that their states mandated that local election jurisdictions process absentee ballot applications within a specified time period, such as within 24, 48, or 72 hours of receipt of the application.
In 2 Pennsylvania jurisdictions we visited, election officials stated that they established a local policy encouraging election staff to process absentee ballot applications faster (such as on the day of receipt) than the time period specified in state law (which was 48 hours). In 1 Illinois and 1 Nevada jurisdiction we visited, election officials said that while a 24- or 48-hour turnaround time for absentee ballot applications was not mandated in state law, local office policy was to process them as quickly as possible, such as within 24 hours of receipt of the application. During our site visits, election officials in 9 jurisdictions stated that they received large numbers of mail-in absentee ballot applications just prior to the deadlines prescribed by state law. Most of these election officials said they were able to meet their state-mandated or office policy application-processing time, although they had to work long hours and hire additional staff to process the absentee ballot applications by the deadline. In 1 Florida jurisdiction we visited, local election officials said that even though they had no absentee ballot application deadline, they processed applications using “long hours and extra people” and tried to send out absentee ballots within 24 hours of receiving a complete application. In jurisdictions we visited in Pennsylvania and Colorado, election officials said that sometimes the 24- or 48-hour turnaround was impossible to meet because the state did not finalize the ballots for printing until the days immediately preceding Election Day for the November 2004 election. For example, an election official in a Pennsylvania jurisdiction we visited told us that determining whether or not an independent presidential candidate’s name was to be included on the November 2004 general election ballot proved to be a challenge.
In this jurisdiction, the validity of petition signatures supporting the independent candidate’s request to be included on the ballot was challenged in state court about 10 weeks before the election. As a result, according to the election official, election officials were required to participate in a court-mandated process of verifying the signatures. According to the election official, it took about 10 days in court to resolve the situation, which delayed the printing of the ballots. In 6 jurisdictions we visited, election officials told us that slowness in the delivery of the mail added to the processing time crunch during the week before Election Day—a problem that is out of election officials’ control and may contribute to the local election officials’ receipt of absentee voting materials after state-mandated deadlines. Although envelopes can use an “official election mail” designation, election officials in these 6 jurisdictions we visited said that the U.S. Postal Service did not always process absentee voting materials in a timely manner. For example, in one New Mexico jurisdiction we visited, election officials stated that they experienced serious problems with the U.S. Postal Service delivering absentee ballot applications. These officials felt that the post office ignored the envelopes’ official election mail designation and did not process and deliver them quickly. Election officials in this jurisdiction said that their telephone system crashed numerous times leading up to Election Day in November 2004, given the heavy volume of incoming calls from voters checking on the status of their absentee ballot applications. In one Pennsylvania jurisdiction that we visited, election officials said that postal concerns were raised when some college students’ absentee ballot applications were received after Election Day. 
These officials could not definitively say at what point these applications might have been delayed and explained that the mail delivery delay could have been attributable to either the U.S. Postal Service or the university’s mailing center. Figure 23 illustrates the use of special postal markings for absentee ballot materials. While election officials in 6 jurisdictions we visited told us about challenges with mail delivery, election officials in 7 jurisdictions we visited told us that they did not have problems with mail delivery or coordinating with the U.S. Postal Service. In an Illinois jurisdiction we visited, an election official told us that prior to the election, staff from his office met with the postmaster to establish a good working relationship. Election officials in New Hampshire and Ohio jurisdictions we visited stated that the post office was very helpful. In a Nevada jurisdiction we visited, election officials said that they received excellent service from the postal service. When an absentee application was received after the state-mandated deadline, election officials in 13 jurisdictions we visited told us that they often sent these applicants a letter explaining that their application was received too late. In 5 of these same jurisdictions, election officials said they also provided an alternative to absentee voting, such as early voting, voting on Election Day, or in-person absentee voting, where the voter could visit the election office and complete the absentee voting process in person. In our October 2001 report on election processes, we reported that election officials for the 2000 general election identified voters’ failure to provide critical information, with respect to signatures and addresses, as challenges to successfully processing mail-in absentee applications and verifying ballots for counting.
According to our nationwide survey for the November 2004 election, local jurisdictions encountered similar voter errors that could affect the jurisdictions’ ability to establish voter eligibility or approve the ballot for counting when processing absentee ballot applications and absentee ballots. In our nationwide survey, we asked local jurisdictions what problems they encountered in processing absentee ballot applications. We estimate that 48 percent of them identified problems receiving absentee ballot applications that contained a missing or illegible voter signature. Furthermore, we asked about which problems were encountered most frequently. An estimated 20 percent of local jurisdictions encountered the problem of receiving absentee ballot applications that contained a missing or illegible voter signature most frequently. Table 4 shows our estimates of the types of voter errors local jurisdictions encountered with absentee ballot applications submitted for the November 2004 general election. On the basis of our nationwide survey, large jurisdictions had more of a problem than small or medium jurisdictions concerning missing or illegible signatures. Specifically, we estimate that 73 percent of large jurisdictions encountered this problem, while 44 percent of small jurisdictions and 55 percent of medium jurisdictions encountered it. Large jurisdictions are statistically different from medium and small jurisdictions. When elections officials were unable to process absentee ballot applications, our nationwide survey showed that some local jurisdictions contacted applicants to inform them of the status of their application using the methods listed in table 5. Specifically, on the basis of our survey of local jurisdictions, we estimate that 72 percent of all jurisdictions telephoned applicants when their absentee applications could not be processed.
We found no significant difference based on the size of the jurisdiction with regard to this contact method. However, we estimate that 84 percent of medium jurisdictions and 90 percent of large jurisdictions contacted absentee applicants by U.S. mail. In contrast, 63 percent of small jurisdictions contacted absentee applicants with problem applications via U.S. mail. Small jurisdictions are statistically different from medium and large jurisdictions. We also estimate that 10 percent of local jurisdictions did not inform any applicants about the status of their application. In an Illinois jurisdiction that we visited, election officials told us that they would do everything possible to obtain complete absentee applications from voters. If an absentee ballot application was incomplete, election office staff said they contacted the voter and attempted to resolve the problem in the most practical way. For example, if the application was missing the voter’s signature and there was enough time, the staff mailed the application back to the voter for signature. If time was limited, the staff called the voter and asked him or her to visit the election office to sign the application. An election official in a Pennsylvania jurisdiction we visited told us that if applicants forgot to include one part of an address, such as a ZIP code, but election staff could match the rest of the address and voters’ identifying information with their registration information, the application was approved. Election officials in another Pennsylvania jurisdiction and a Nevada jurisdiction told us that the voter registration system automatically generated letters to voters when the application could not be processed for any reason. In our nationwide survey, we asked local jurisdictions what problems they encountered in processing submitted absentee ballots.
We estimate that 61 percent of all jurisdictions reported that absentee ballots were received without the voter’s signature on the envelope. We estimate that 54 percent of small jurisdictions, 76 percent of medium jurisdictions, and 90 percent of large jurisdictions encountered this problem. Jurisdictions of all sizes are statistically different from one another. Table 6 shows our estimates of the types of problems election officials encountered on absentee ballots. We estimate that 81 percent of local jurisdictions encountered at least one of the problems listed. If the ballot could not be verified, election officials in some jurisdictions we visited told us that they attempted to contact the voter, time permitting, so that the affidavit envelope could be corrected and approved for counting. In 10 jurisdictions we visited, election officials said that they reviewed the affidavit envelope information to approve the ballots as they received them rather than waiting until Election Day. On the basis of our nationwide survey, we estimate that 40 percent of local jurisdictions contacted the voter by mail in an attempt to address a problem with the affidavit envelope, and 39 percent contacted the voter via telephone. Table 7 shows our estimates of the contact methods used by local jurisdictions when absentee ballots had problems that could prevent them from being approved for counting if not corrected. Differences in whether voters were contacted by mail when there were problems with their absentee ballots were based on the size of the local elections jurisdiction. Specifically, we estimate that 31 percent of small, 61 percent of medium, and 66 percent of large jurisdictions contacted voters by mail. Small jurisdictions are statistically different from medium and large jurisdictions.
While election officials in 10 jurisdictions we visited told us that they qualified absentee ballots prior to Election Day, allowing them time to follow up with voters, election officials in 6 local jurisdictions we visited told us that they qualified or approved absentee ballots for counting on Election Day. According to election officials in these jurisdictions, contacting the voter for corrected or complete ballot information was not a viable option because there was not enough time. These election officials stated that absentee ballots with incomplete or inaccurate information on the affidavit envelope would not be qualified or counted. Some election officials in jurisdictions we visited told us that voters can visit local election offices and complete all or part of the absentee process in person. Some election officials told us that when voters vote in-person absentee, officials are well situated to help ensure that the application and ballot are complete and accurate before accepting them. For example, in one Connecticut jurisdiction we visited, election officials told us that they did not have incomplete absentee ballot applications from voters who visited the office in person because they reviewed the application and required the person to correct any errors before leaving. In our October 2001 report on election processes, we reported that election officials for the 2000 general election had concerns with mail-in absentee voting fraud, particularly regarding absentee voters being unduly influenced or intimidated while voting. However, we also reported that election officials had established procedures to address certain potential for fraud, such as someone other than the registered voter completing the ballot or voters casting more than one ballot in the same election.
Once the voters received and voted absentee ballots in accordance with any state or local requirements (such as providing a signature or other information on the affidavit envelope), such ballots were to be returned to specified election officials. In general, local election officials or poll workers were to review the information on the affidavit envelope and subsequently verify or disqualify the ballot for counting based on compliance with these administrative requirements, according to election officials in some local jurisdictions we visited. In our state survey, we asked states whether they specified how local jurisdictions were to determine eligibility of absentee ballots. According to our survey, 44 states and the District of Columbia reported that at the time of our survey, they specified how to determine absentee ballot eligibility, while 6 states reported that they did not. Colorado, for example, specified that the poll worker is to compare the signature of the voter on a self-affirmation envelope with a signature on file with the county clerk and recorder. Wisconsin specified, among other things, that inspectors ascertain that a certification has been properly executed, that the applicant is a qualified elector of the ward or election district, and that the voter has not already voted in the election. Our survey of local elections jurisdictions asked election officials if they used any of the procedures described in table 8 to ensure that the absentee voter did not vote more than once for the November 2004 general election. These procedures could have been conducted either manually by elections officials or through system edit checks.
On the basis of our survey of local jurisdictions, we estimate that 69 percent of jurisdictions checked the Election Day poll book to determine whether the voter had been sent an absentee ballot, and 68 percent of jurisdictions checked the Election Day poll book to determine whether the voter had completed an absentee ballot. In our survey of local jurisdictions, we also asked if any of the procedures listed in table 9 were in place to ensure that the absentee ballots were actually completed by the person requesting the ballot. On the basis of our survey of local jurisdictions, we estimate that 70 percent of jurisdictions compared the absentee ballot signature with the absentee application signature. With respect to comparing the absentee ballot application signature with the absentee ballot signature, there were differences based on the size of the jurisdiction. On the basis of our survey of local jurisdictions, we estimate that 72 percent of small, 69 percent of medium, and 40 percent of large jurisdictions compared these signatures. Large jurisdictions are significantly different from small and medium jurisdictions. One reason that large jurisdictions may differ is that they have a large volume of absentee ballots to process, and it may be too resource-intensive to compare signatures, among other things. During our site visits, elections officials provided examples of the procedures they used to guard against fraud. For example, in 20 local jurisdictions that we visited, election officials said that when the ballot signature was compared with the absentee application signature, voter registration signature, or some other signature on file, the signatures had to match for the ballot to be approved and counted.
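The safeguards described above, poll-book checks and signature comparison, amount to straightforward record lookups. The sketch below is a hypothetical illustration: the field names are invented, and the exact-match comparison stands in for what is, in practice, a judgment made by election workers.

```python
def poll_book_check(voter_record: dict) -> tuple:
    """Election Day poll-book check: before issuing a regular ballot, see
    whether this voter was sent, or has already completed, an absentee ballot.
    (Hypothetical fields; actual procedures vary by jurisdiction.)"""
    if voter_record.get("absentee_completed"):
        return (False, "absentee ballot already cast")
    if voter_record.get("absentee_sent"):
        return (False, "absentee ballot outstanding; refer to election officials")
    return (True, "issue regular ballot")

def signatures_match(ballot_signature: str, on_file_signature: str) -> bool:
    """Stand-in for the manual comparison of the ballot signature against the
    application, registration, or other signature on file."""
    return ballot_signature == on_file_signature
```

Separating the two checks mirrors the survey questions: one procedure guards against double voting, the other verifies that the ballot was completed by the person who requested it.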
In addition to matching signatures, election officials in 2 Illinois jurisdictions and 1 New Jersey jurisdiction we visited told us that during the Election Day absentee ballot qualification process, poll workers were instructed to check the poll book to determine if the voter had cast an Election Day ballot. In 1 of these Illinois jurisdictions, if poll workers found that both an Election Day ballot and an absentee ballot had been cast, they were instructed to void the absentee ballot so that it would not be counted. Election officials in a Nevada jurisdiction we visited said that they used an electronic poll book to manage absentee, early, and Election Day voting to ensure that voters cast only one ballot. Once a ballot was cast in this jurisdiction, the electronic poll book was annotated and the voter was not allowed to cast another ballot. Although election officials in the 20 jurisdictions mentioned above told us that they had procedures in place designed to help prevent fraud during the absentee voting process, election officials told us that they still suspected instances of fraud. For example, in a Colorado jurisdiction we visited, election officials told us that they referred 44 individuals who allegedly cast absentee ballots with invalid signatures to the district attorney for investigation. In a New Mexico jurisdiction that we visited, election officials told us that organized third parties went door to door and encouraged voters to apply for absentee ballots. Once these voters received their ballots, according to election officials, the third parties obtained the voters’ names (in New Mexico this is public information, according to such officials) and went to the voters’ homes and offered to assist them in voting the ballots. These election officials said that they were concerned that the latter part of this activity might be intimidating to voters and could result in voter fraud.
In general, the Uniformed and Overseas Citizens Absentee Voting Act requires, among other things, that states permit absent uniformed services members and U.S. citizen voters residing outside the country to register and vote absentee in elections for federal office. In addition, states also generally offer some measure of absentee voting for registered voters in their states not covered under UOCAVA. The basic process for absentee voting under UOCAVA is generally similar to that described in figure 18 for absentee voters not covered under UOCAVA in that UOCAVA voters also must establish their eligibility to vote on their absentee ballot application, and the ballot must be received by the voter’s local jurisdiction to verify it for counting. Election officials in some jurisdictions we visited told us that they allow UOCAVA voters to submit a voted ballot via facsimile—a method that might not be allowed for absentee voters not covered under UOCAVA because of concerns about maintaining ballot secrecy. In 6 jurisdictions we visited, election officials told us that they require voters under UOCAVA to submit a form acknowledging that ballot secrecy could be compromised when ballots are faxed. One mechanism used to simplify the process for persons covered by UOCAVA to apply for an absentee ballot is the Federal Post Card Application (FPCA), which states are to use to allow such absentee voters to simultaneously register to vote and request an absentee ballot. On our survey of local jurisdictions, we asked if any problems were encountered in processing absentee applications when the applicant used the FPCA. We estimate that 39 percent of local jurisdictions received the FPCA too late to process—a problem also encountered with other state-provided absentee ballot applications. Table 10 shows our estimates of problems local jurisdictions encountered when processing Federal Post Card Applications. 
In addition, we asked about which problems were encountered most frequently when the FPCA was used, and an estimated 19 percent of local jurisdictions encountered the problem of receiving the FPCA too late to process more frequently than other problems. Also, uniformed services voters and U.S. citizen voters residing outside of the country are allowed to use the Federal Write-In Absentee Ballot to vote for federal offices in general elections. This ballot may be used when such voters submit a timely application for an absentee ballot (i.e., the application must have been received by the state before the state deadline or at least 30 days prior to the general election, whichever is later) but do not receive a state absentee ballot. Some states’ absentee ballot application forms included serving in a uniformed service or residing outside the country as excuses for voting absentee. According to our state survey, 4 states (Minnesota, Missouri, Oklahoma, and Rhode Island) reported that they require attestation by a notary or witness for a voter’s signature on voted mail-in absentee ballots but do not require uniformed service voters and U.S. citizen voters outside the country to provide this on their voted ballots. For the November 2004 general election, according to our state survey, 9 states reported having absentee ballot deadlines for voters outside the United States that were more lenient than the ballot deadlines for voters inside the United States. Table 11 lists these 9 states and the difference between the mail-in absentee ballot deadline from inside the United States and the mail-in absentee ballot deadline from outside the United States.
HAVA amended the UOCAVA to, among other things, extend the period of time that can be covered by a single absentee ballot application (the Federal Post Card Application) submitted by absent uniformed service voters and citizen voters residing outside the United States from the year during which the application was received to a time period covering up to the two next regularly scheduled general elections for federal office. To illustrate, if uniformed service voters or civilian voters residing outside the United States submitted a completed FPCA in July 2004, they would have been allowed to automatically receive ballots for the next two federal general elections, including those held in 2004 and 2006. (See fig. 24 for an example of the FPCA used in 2004.) In 4 local jurisdictions we visited, election officials told us that the amendment described above may present a challenge for successfully delivering absentee ballots to uniformed services members because they tend to move frequently. For example, in a North Carolina jurisdiction that we visited, election officials stated that addresses on file for such voters at the time of the November 2004 general election may no longer be correct and that mail sent to these voters could be returned as undeliverable. Also, in 1 jurisdiction in Georgia that we visited, election officials told us that they were concerned that many of the absentee ballots sent in subsequent general elections would be returned as undeliverable. In an Illinois jurisdiction we visited, election officials expressed concerns that paying the postage for mail that may be undeliverable will be a challenge in future years. Also, we noted in our March 2006 report on election assistance provided to uniformed service personnel that one of the top two reasons for disqualifying absentee ballots for UOCAVA voters was that the ballots were undeliverable.
The Federal Post Card Application was revised in October 2005, after the November 2004 general election, and now allows overseas military and civilians to designate the time period for which they want to receive absentee ballots. (See figure 24 for the revised FPCA.) Those who do not wish to receive ballots for two regularly scheduled general elections can designate that they want an absentee ballot for the next federal election only and then complete the form and request a ballot for each subsequent federal election separately. The FPCA used at the time of the November 2004 election did not allow overseas military and civilian voters to make this designation. Even with the revised FPCA, some applications might not have this box checked, and jurisdictions could continue to have absentee ballots returned as undeliverable. In an attempt to mitigate these problems, election officials in 3 local jurisdictions we visited told us that they planned several activities to maintain and update the addresses of uniformed services voters and civilian voters residing outside the country. In a Washington jurisdiction we visited, election officials told us that they began requesting e-mail addresses from such voters so that any problems with these applications or ballots could be corrected more efficiently. In previous elections, when e-mail addresses were not available, election officials in this jurisdiction told us that many of the absentee applications and ballots sent to uniformed services members and civilian voters residing outside the United States were returned as undeliverable. In a Georgia jurisdiction that we visited, election officials said that they planned to create a subsystem within their voter registration system. This subsystem will, according to the election officials, allow staff in the election office to produce a form letter for each uniformed services voter that will verify the voter’s current address. 
The election officials also told us letters will be mailed in January asking the voter to contact the jurisdiction to confirm that he or she continues to reside at the address on the letter. If the jurisdiction does not receive confirmation from the uniformed services voter, the election officials told us that they will contact the Federal Voting Assistance Program (FVAP) for assistance in locating the voter. In an Illinois jurisdiction we visited, election officials stated that they plan to canvass all uniformed services members and civilians residing outside the United States who are registered in the state in 2006. Election officials in this jurisdiction told us that they had approximately 7,400 such registered voters who completed the FPCA and that the jurisdiction planned to canvass these voters to confirm that they continued to reside at the address on the FPCA. This jurisdiction expects that as many as half of these canvass cards will be returned as undeliverable. Once the cards are returned, state law allows those voters whose canvass cards are returned to be deleted from the voter registration list, according to the election officials. Early voting is another way to provide registered voters with the opportunity to cast ballots prior to Election Day. However, conducting early voting is generally more complicated for election officials than conducting Election Day voting. In the jurisdictions we visited in 7 states with early voting, election officials described early voting as generally in- person voting at one or more designated polling locations usually different from polling locations used at the precinct level on Election Day. The voting may or may not be at the election registrar’s office. Early voting is distinct from in-person absentee voting in that in-person absentee voters usually apply for an absentee ballot at the registrar’s office and vote at the registrar’s office at that time. 
Also, early voting usually does not require an excuse to vote, which some states require for absentee voting, and in the jurisdictions we visited in 7 states with early voting, it was usually offered for a shorter period of time than absentee voting. The time frame allowed for absentee voting was almost always at least twice as long as for early voting. For example, election officials in the Colorado jurisdictions we visited said that they allow 30 days for absentee voting and 15 days for early voting. In the jurisdictions we visited in 7 states with early voting, election officials said early voting is similar to Election Day voting in that the voter generally votes using the same voting method as on Election Day. However, they added that it differs from Election Day voting in that voters can vote at any early voting polling location because all early voting locations have access to a list of all registered voters for the jurisdiction (not just precinct specific) and can provide voters with appropriate ballots that include federal, state, and precinct-specific races. Proponents argue that early voting is convenient for voters, saves jurisdictions money by reducing the number of polling places and poll workers needed on Election Day, and provides voters with more opportunity to vote. Opponents counter that those who vote early do so with less information than Election Day voters and that there is no proof that early voting increases voter turnout. Statistics on voter turnout for early voting can be difficult to come by, partly because some states and localities combine early and absentee voting numbers. Nevertheless, early voting in certain jurisdictions appears to be popular with voters and on the rise. In a New Mexico jurisdiction, election officials told us that early voting accounted for about 34 percent of the ballots cast in that jurisdiction. 
In North Carolina and Colorado elections jurisdictions we visited, election officials said that early voters cast about 35 and 38 percent of the jurisdictions’ total votes in the November 2004 election, respectively. In a Nevada jurisdiction we visited, election officials told us that the percentage of voters who voted early steadily increased over time. The officials said that in 1996, about 17 percent of voters voted early; in 2000, 43 percent voted early; and in the November 2004 general election, about 50 percent (271,500) of their voters voted early. Our prior work on the 2000 general election did not identify states that offered early voting as we have defined it. Rather, we reported on absentee and early voting together. Thus, we are unable to identify the change in the number of states that offered early voting for the November 2000 general election and the November 2004 general election. We describe the availability of early voting throughout the nation and the challenges and issues that election officials encountered in the November 2004 general election as they conducted early voting in selected jurisdictions. Many early polling locations in Florida and elsewhere received media publicity about voters standing in long lines and waiting for long periods of time to vote early. In half of the local election jurisdictions we visited, election officials described encountering challenges that included long lines, and some identified challenges dealing with disruptive third-party activities at the polls. For the November 2004 general election, in our state survey, 24 states and the District of Columbia reported offering early voting. In addition, 2 states—Illinois and Maine—reported, in our state survey, that they had enacted legislation or taken executive action since November 2004 to provide for early voting in their states. 
Another 7 states reported that with respect to early voting, they (1) had legislation pending, (2) considered legislation in legislative session that was not enacted, or (3) had an executive action that was pending or was considered. Figure 25 shows where early voting was provided for the November 2004 general election. On the basis of our survey of local jurisdictions, we estimate 23 percent of jurisdictions were in states that offered early voting. Furthermore, we estimate that 16 percent of small jurisdictions, 40 percent of medium jurisdictions, and 52 percent of large jurisdictions were in states that offered early voting. Small jurisdictions are statistically different from both medium and large jurisdictions. The number of days that early voting was available in these 24 states and the District of Columbia varied. In some cases, early voting was allowed no sooner than a day or a few days prior to Election Day, while in other cases voters had nearly a month or longer to cast an early ballot. Table 12 shows the range of days for early voting among the states and the District of Columbia that reported providing early voting for the November 2004 election. On the basis of our survey of local jurisdictions, we estimate that 75 percent of the jurisdictions that offered early voting offered it for 2 or more weeks prior to Election Day. Figure 26 shows the estimated percentage of local jurisdictions that offered early voting for various time periods. Among the local jurisdictions that we visited in the 7 states that provided early voting, we found that the shortest time frame allowed for early voting was in Georgia, which had 5 days, and the longest time frame allowed for early voting was in New Mexico, with 28 days. Furthermore, in the local jurisdictions we visited in the 7 states that provided early voting, election officials supplied information on early voting hours that ranged from weekday business hours to those that included weekends and evenings. 
For more details on the characteristics of early voting sites we visited, see appendix VII. During the course of our work, a limited review of state statutes showed, for example, that Nevada statute requires early voting polling places to be open Monday through Friday, 8 a.m. to 6 p.m., during the first week of early voting and possibly to 8 p.m. during the second week, at the county clerk’s discretion. In addition, under the Nevada provision, polling places must be open on any Saturdays within the early voting period from 10 a.m. to 6 p.m. and may be open on Sundays within the early voting period at the county clerk’s discretion. Under these provisions, the early voting period is to begin the third Saturday prior to an election and end the Friday before Election Day. Similarly, Oklahoma statute provides that voters be able to cast early ballots from 8 a.m. to 6 p.m. on the Friday and Monday immediately before Election Day, and from 8 a.m. to 1 p.m. on the Saturday immediately before Election Day. Some states’ statutes are less prescriptive, such as those of Florida, where the statute specifies that early voting should be provided for at least 8 hours per weekday during the early voting period and at least 8 hours in the aggregate for each weekend during the early voting period, without specifying the hours such voting is to be offered. Other states, such as Kansas, however, do not specify in statute the hours for voting early. Kansas statute, in general, leaves it to county election officials to establish the times for voting early. Officials at some local jurisdictions we visited said that their hours of operation were set based on the hours of the election office or the hours of the facility that was hosting early voting, such as a shopping mall or a library. 
According to our survey of local jurisdictions, an estimated 34 percent of local jurisdictions that provided early voting for the November 2004 general election offered early voting during regular business hours (e.g., from 8 a.m. until 4 p.m.) on weekdays, and 16 percent offered early voting during regular business hours on weekdays and during other hours. Other hours included weekday evenings (after 4 p.m. or 5 p.m. until 7 p.m. or 9 p.m.), Saturdays (all day), and Sundays (any hours), each of which was offered by about 2 percent of the jurisdictions. As with early voting time frames, some states reported having requirements for local election jurisdictions regarding the number of early voting locations. In our state survey, 17 of the 25 entities (including 24 states and the District of Columbia) that reported offering early voting for the November 2004 general election also reported having requirements for local jurisdictions regarding the number or distribution of early voting locations. Kansas election standards, for example, provide for one such voting location per county unless a county’s population exceeds 250,000, in which case the election officer may designate additional sites as needed to accommodate voters. Election officials in 1 jurisdiction we visited said that state statute determined the number of locations, while election officials in 13 other jurisdictions told us they decided the number of locations. For example, New Mexico’s early voting statutory provisions specifically require that certain counties with more than 200,000 registered voters establish not fewer than 12 voting locations each. During our site visits, we asked jurisdictions how they determined the number of early voting locations. In a Nevada jurisdiction that we visited, election officials said that the number of locations was determined by the availability of resources such as funds and staff. 
In a Colorado jurisdiction we visited, an election official said he would like to have had more early voting locations but could not because the jurisdiction did not have the funds to pay for additional costs associated with additional sites, such as the cost for computer connections needed for electronic voter registration list capability. In a North Carolina jurisdiction we visited, election officials said that they had only one early voting location because they did not have election staff that would be needed to manage another site. In many ways, early voting is conducted in a manner substantially similar to Election Day voting in that polling locations are obtained, workers are recruited to staff the sites for each day polling locations are to be open, and voting machines and supplies are delivered to the polling locations. However, as described by election officials in jurisdictions we visited that had early voting, early voting differs from Election Day voting in that staff are generally required to perform their voting day-related duties for more than 1 day, and staff generally do not use poll books to identify eligible voters and check them in. Instead, as described by some of these jurisdictions, early voting staff usually access the jurisdiction’s voter registration list to identify eligible voters and to indicate the voter voted early to preclude voting on Election Day or by absentee ballot. Also, election officials told us that, generally, staff must possess some computer skills and need to be trained in using the jurisdiction’s voter registration system. Furthermore, staff must be aware that ballots are specific to the voter’s precinct. In our nationwide survey of local election jurisdictions, we asked about the type of staff who worked at early voting polling places. 
According to our survey for the November 2004 general election, local election jurisdictions relied on permanent election jurisdiction staff most often to work at early voting polling locations. As table 13 shows, we estimate 30 percent of local jurisdictions offered early voting using only permanent election jurisdiction staff to work at the early voting polling places; we estimate that 14 percent of local jurisdictions used poll workers exclusively; and we estimate 14 percent used other staff (e.g., county or city employees). Election officials at 11 jurisdictions we visited emphasized the importance of staffing early voting locations with experienced staff, such as election office staff or experienced and seasoned poll workers. Even with experienced staff working early voting locations, election officials at local jurisdictions we visited mentioned that staff were required to take training and were provided tools to help them perform their duties. In our nationwide survey, we asked local jurisdictions that provided early voting about the ways that staff were trained for early voting. As shown in table 14, the majority of jurisdictions used methods such as providing a checklist of procedures, written guidance for self-study or reference, and quick reference materials for troubleshooting to train early voting staff. Local jurisdictions could use more than one of these methods to train early voting staff. On the basis of our local survey, we estimate that 14 percent of local jurisdictions used classroom training, written guidance for self-study or reference, a checklist of procedures, and quick reference materials for troubleshooting to train early voting staff. When asked about what worked particularly well during early voting, election officials in 1 jurisdiction we visited in Colorado said that they provided 8 hours of training and had on-site supervision, which they thought contributed to a successful early voting experience. 
The election officials also said they used a feature in their electronic poll book system to track the number of ballots used at each site to determine whether sites had adequate inventories of ballots. The program for the poll book system had an alarm that went off if any site was running low on ballots, according to these election officials. Two other jurisdictions we visited, in Kansas and Florida, noted the importance of having experienced staff for early voting. Election officials in the Kansas jurisdiction said that designating a group of workers to work on early voting helped the process run effectively, and election officials in the Florida jurisdiction said that having supervisor of elections office staff on site to support early voting helped make the process work well. When asked about challenges with early voting faced during the November 2004 general election, election officials in half of the local jurisdictions we visited that offered early voting identified long lines as a major challenge. Election officials at 5 local jurisdictions we visited said that they had not anticipated the large number of voters who turned out to vote early. Officials attributed challenges handling the large number of voters and resulting long lines to problems with technology, people, and processes. Election officials at local jurisdictions we visited made the following comments: Election officials in one Florida jurisdiction we visited said that their jurisdiction faced more early voters than anticipated and that this, coupled with slowness in determining voter eligibility, resulted in long lines. They said that on the first day of early voting, staff were unable to access the voter registration list because laptops were not functioning properly. 
To address the problem, a worker at the early voting location paired with another worker, who called the supervisor of elections office to obtain voter registration information and provide information on the voter seeking to vote early. An election official in another Florida jurisdiction said that while state law provides for early voting in the main office of the supervisor of elections, other locations may be used only under certain conditions. For example, in order for a branch office to be used, it must be a full-service facility of the supervisor and must have been designated as such at least 1 year prior to the election. In addition, a city hall or public library may be designated as an additional early voting location, but only if the sites are located so as to provide all voters in the county an equal opportunity to cast a ballot, insofar as is practicable. The official thought more flexibility was needed to allow him to either have more early voting locations or use other types of facilities, such as a local community center, that could accommodate more voters. An election official in a Nevada jurisdiction we visited said that the jurisdiction’s process flow was inadequate to handle the large turnout for early voting. The election official said that the jurisdiction had not planned sufficiently to manage the large turnout for early voting and did not have enough staff to process voters. The election official said that in the future, he will hire temporary workers and will assign one person to be in charge of each process (e.g., checking in voters or activating the DRE machine). In addition, the election official said that, in hindsight, he made a questionable decision to close all but two early voting locations for the last day of early voting. The closing of all but two locations on the last day of early voting coincided with a state holiday, so children were out of school. 
The decision to close all but two locations caused 3 to 3½ hours of wait time, with parents waiting in line with children. The election official said he has set a goal for the future that no wait time should be longer than half an hour. To address challenges related to heavy early voter turnout, election officials in 1 Nevada jurisdiction said they have gradually added new early voting locations each year to keep up with the increasing number of people who vote early. In a New Mexico jurisdiction we visited, election officials said that they used a smaller ratio of voters to machines than required by state statute. According to these election officials, the state required at least one machine for every 600 voters, and during early voting, they used one machine for every 400 voters registered in the jurisdiction. In 1 Colorado jurisdiction we visited, election officials said that they addressed the challenge of long lines by having greeters inform voters about the line and make sure the voters had the required identification with them. They said they provided equipment demonstrations and passed out sample ballots so people in line could consider their choices, if they had not already done so. They also said they offered absentee ballot applications to people in line. In 3 jurisdictions we visited, election officials stated that they encountered challenges dealing with disruptive third-party activities at early voting sites. In particular, concerns were raised about various groups attempting to campaign or influence voters while the early voters waited in line. 
State restrictions on various activities in or around polling places on Election Day include prohibitions relating to, for example, the circulation of petitions within a certain distance of a polling place, the distribution of campaign literature within a certain distance of the polls, the conduct of an exit or public opinion poll within a certain distance of the polls, and disorderly conduct or violence or threats of violence that impede or interfere with an election. Election officials in 1 jurisdiction we visited stated that campaign activities too close to people waiting in line were a concern to the extent that police were called in to monitor the situation at one early voting location. Election officials in a Florida jurisdiction we visited said that they were concerned about solicitors, both candidates and poll watchers, approaching people waiting in line to vote early and offering them water or assistance in voting. While Florida’s statutory provisions in place for the November 2004 general election contained restrictions on various activities in or around polling places on Election Day, such provisions did not explicitly address early voting sites. Amendments to these provisions, effective January 2006, among other things, explicitly applied certain restrictions on activities in or around polling places to early voting areas. With respect to poll watchers, these amendments also prohibit poll watchers from interacting with voters, in addition to the preexisting prohibition on poll watchers obstructing the orderly conduct of any election. Making voting easier prior to Election Day has advantages for voters and election officials but also presents challenges for election officials. Many states and local jurisdictions appear to be moving in the direction of enabling voters to vote before Election Day by eliminating restrictions on who can vote absentee and providing for early voting. 
Many states allowed voters to use e-mail and facsimiles to request an absentee ballot application and, in some cases, to return applications. To the extent that large numbers of voters do vote absentee or early, it can reduce lines at the polling place on Election Day and, where permitted by state law, ease the time pressures of vote counting by allowing election officials to count absentee and early votes prior to Election Day. However, there are also challenges for election officials. An estimated 55 percent of jurisdictions received absentee ballot applications too late to process, and an estimated 77 percent received ballots too late. Although we do not know the extent of these problems in terms of the number of applications and ballots that could not be processed, the estimated number of jurisdictions encountering these problems may be of some concern to state and local election officials. Absentee application deadlines close to Election Day provide citizens increased time to apply to vote absentee. However, the short time period between when applications are received and Election Day may make it difficult for election officials to ensure that eligible voters receive absentee ballots in time to vote and return them before the deadline for receipt at election offices. Voter errors on their absentee applications and ballots also create processing problems for election officials. These include missing or illegible signatures, missing or inadequate voting residence addresses, and missing or incomplete witness information for a voter’s signature or other information. In addition, mail-in absentee ballots are considered by some to be particularly susceptible to fraud. This could include such activities as casting more than one ballot in the same election or someone other than the registered voter completing the ballot. 
Despite efforts to guard against such activities, election officials in some of the jurisdictions we visited expressed some concerns, particularly regarding absentee voters being unduly influenced or intimidated by third parties who went to voters’ homes and offered to assist them in voting their ballots. Some election officials expressed similar concerns about the influence of third parties on early voters waiting in line who were approached by candidates and poll watchers. After this happened in Florida in November 2004, the state amended its election provisions to prohibit such activity with respect to early voters. Getting absentee ballots to uniformed service personnel and overseas citizens is a continuing challenge. UOCAVA permitted such voters to request an absentee ballot for the upcoming election, and HAVA extended the covered period to include up to two subsequent general elections for federal office. Because the duty station of uniformed service personnel may change during the period covered by the absentee ballot requests, election officials in jurisdictions we visited were concerned about having some means of knowing these voters’ current mailing addresses. Some jurisdictions are taking action to ensure that they have the correct address for sending absentee ballots for the November 2006 election, such as requesting e-mail addresses that can be used to obtain the most current address information prior to mailing the absentee ballot. To the extent there are problems identifying the correct address, uniformed service personnel and overseas civilians may either not receive an absentee ballot or receive it too late to return it by the deadline required for it to be counted. Whether voters are able to successfully vote on Election Day depends a great deal on the planning and preparation that occur prior to the election. 
Election officials carry out numerous activities—including recruiting and training poll workers; selecting and setting up polling places; designing and producing ballots; educating voters; and allocating voting equipment, ballots, and other supplies to polling places—to help ensure that all eligible voters are able to cast a ballot on Election Day with minimal problems. In our October 2001 comprehensive report on election processes nationwide, we described these activities as well as problems encountered in administering the November 2000 general election. Since then, federal and state actions have been taken to help address many of the challenges encountered in conducting the November 2000 general election. However, reports after the November 2004 general election highlighted instances of unprepared poll workers, confusion about identification requirements, long lines at the polls, and shortages of voting equipment and ballots that voters reportedly encountered on Election Day. This chapter describes changes and challenges—both continuing and new—that election officials encountered in preparing for and conducting the November 2004 general election. States and local jurisdictions have reported making changes since the November 2000 general election as a result of HAVA provisions and other state actions to improve the administration of elections in the United States. In addition to establishing a commission—the U.S. Election Assistance Commission—with wide-ranging duties that include providing information and assistance to states and local jurisdictions, HAVA also established requirements with respect to elections for federal office that, among other things, certain voters who register by mail provide identification prior to voting; mandated that voting systems accessible to individuals with disabilities be located at each polling place; and required voter information to be posted at polling places on Election Day. 
HAVA also authorized the appropriation of federal funds for payments to states to implement these provisions and make other improvements to election administration. Since the November 2000 general election, some states have also reported making changes to their identification requirements for all voters. Election officials reported encountering many of the same challenges preparing for and conducting the November 2004 general election as they did in 2000, including recruiting and training an adequate supply of skilled poll workers, locating a sufficient number of polling places that met requirements, designing ballots that were clear to voters when there were many candidates or issues (e.g., propositions, questions, or referenda), having long lines at polling places, and handling the large volume of telephone calls received from voters and poll workers on Election Day. Election officials in some of the jurisdictions we visited also reported encountering new challenges, not identified to us in the 2000 general election, involving third-party activities (e.g., by poll watchers, observers, or electioneers) at polling places on Election Day. On the basis of our survey of a representative sample of local election jurisdictions nationwide and our visits to 28 local jurisdictions, the extent to which jurisdictions encountered many of these continuing challenges varied by the size of election jurisdiction. Large and medium jurisdictions—those jurisdictions with over 10,000 people—generally encountered more challenges than small jurisdictions. In most results from our nationwide survey where there are statistical differences between the size categories of jurisdictions, large jurisdictions are statistically different from small jurisdictions. 
HAVA established EAC to provide voluntary guidance and assistance with election administration, for example, by providing information on election practices to states and local jurisdictions and administering programs that provide federal funds for states to make improvements to some aspects of election administration. HAVA also added a new requirement that states, in turn, require certain first-time voters who register by mail and have not previously voted in a federal election in the state to provide identification prior to voting, and jurisdictions reported taking steps to implement this requirement and inform voters about it. In addition, HAVA includes provisions to facilitate voting for individuals with disabilities, such as requirements for accessible voting systems in elections for federal office. HAVA established voter information requirements at polling places on the day of an election for federal office and authorized the appropriation of funding for payments to states to expand voter education efforts. HAVA established EAC, in part, to assist in the administration of federal elections by serving as a national clearinghouse for information and providing guidance and outreach to states and local officials. In our October 2001 report on election processes, we estimated, on the basis of our survey of local election jurisdictions in 2001, that 40 percent of local election jurisdictions nationwide were supportive of federal development of voluntary or mandatory standards for election administration similar to the voluntary standards available for election equipment. We also reported in 2001 that some election officials believed that greater sharing of information on best practices and systematic collection of information could help improve election administration across and within states. To assist election officials, since its establishment, EAC has produced two clearinghouse reports, one of which covers election administration. 
EAC released a Best Practices Toolkit on Election Administration on August 9, 2004, to offer guidance to election officials before the November 2004 general election. The document is a compilation of practices used by election officials that covers topics such as voter outreach, poll workers, polling places, and election operations. Of note, this compilation provided election officials with a checklist for HAVA implementation that covers identification for new voters, provisional voting, complaint procedures, and access for individuals with disabilities. EAC has made this guidance available to states and local jurisdictions via its Web site and engaged in public hearings and outreach efforts to inform the election community about the resource tool. EAC also administers programs that provide federal funds for states under HAVA to make improvements to aspects of election administration, such as implementing certain programs to encourage youth to become involved in elections; training election officials and poll workers; and establishing toll- free telephone hotlines that voters may use to, among other things, obtain general election information. The results of our state survey of election officials show that as of August 1, 2005, most states reported spending or obligating HAVA funding for a variety of activities related to improving election administration. For example, 45 states and the District of Columbia reported spending or obligating HAVA funding for training election officials, and 32 states and the District of Columbia reported spending or obligating funding to establish toll-free telephone hotlines. As discussed in chapter 2, under HAVA, states are to require certain first- time voters who registered to vote by mail to provide identification prior to voting. 
Voters who are subject to this provision are those individuals who registered to vote in a jurisdiction by mail and have not previously voted in a federal election in the state, or those who have not voted in a federal election in a jurisdiction that is located in a state that has not yet established the computerized voter registration list required by HAVA. When voting in person, these individuals must (if they did not already provide identification with their mailed registration application) present a current and valid photo identification, or a copy of a current utility bill, bank statement, government check, paycheck, or other government document that shows the name and address of the voter. Under HAVA, voters at the polls who have not met this identification requirement may cast a vote under HAVA’s provisional voting provisions. Additional information on provisional voting processes and challenges is presented in chapter 5. Election officials in 21 of the 28 jurisdictions we visited reported encountering no problems implementing the HAVA first-time voter ID requirement, and officials in some of these jurisdictions provided reasons why there were no problems. For example, election officials in 2 jurisdictions in Colorado told us that they did not encounter implementation problems because all voters, under state requirements, were required to show identification. Election officials in some other jurisdictions we visited reported that they took steps to inform voters of the new HAVA ID requirement for such voters registering by mail. For example, election officials in a jurisdiction in Ohio reported that they contacted about 300 prospective voters twice, either by phone or by letter, prior to the election to inform them that they needed to show identification. Figure 27 illustrates a poster used in a jurisdiction we visited to inform prospective voters about the new identification requirements.
HAVA contains provisions to help facilitate voting for individuals with disabilities, including requirements for the accessibility of voting systems used in elections for federal office, effective January 1, 2006. HAVA also authorized the appropriation of funding for payments to states to improve the accessibility of polling places. In October 2001, we issued a report that examined state and local provisions and practices for voting accessibility, both at polling places and with respect to alternative voting methods and accommodations. We reported in 2001 that all states and the District of Columbia had laws or other provisions concerning voting access for individuals with disabilities, but the extent and manner in which these provisions addressed accessibility varied from state to state. In addition, in our 2001 report we noted that various features of the polling places we visited had the potential to prove challenging for voters with certain types of disabilities. On the basis of our observations on Election Day 2000, we also estimated that most polling places in the contiguous United States had one or more physical features, such as a lack of accessible parking or barriers en route to the voting room, that had the potential to pose challenges for voters with disabilities. Results from our 2005 surveys show that at the time of the November 2004 general election, many states and local jurisdictions had taken steps to meet HAVA’s requirement for accessible voting systems and had made other changes to help improve the accessibility of voting for individuals with disabilities. HAVA requires that, effective January 1, 2006, each voting system used in a federal election must meet certain accessibility requirements. These voting systems are required to provide individuals with disabilities with the same opportunity for access and participation (including independence and privacy) as for other voters.
These HAVA requirements specify that such accessibility include nonvisual accessibility for voters who are blind or visually impaired. HAVA provides for the use of at least one DRE or other voting system equipped for voters with disabilities at each polling place. The results of our state survey show that as of August 1, 2005, 41 states and the District of Columbia reported having laws (or executive action) in place to provide each polling location with at least one DRE voting system or other voting system equipped for individuals with disabilities by January 1, 2006. Of the remaining 9 states, 5 reported having plans to promulgate laws or executive action to provide each polling location with at least one DRE voting system or other voting system equipped for individuals with disabilities, and 4 reported that they did not plan to provide such equipment or were uncertain about their plans. Some local election jurisdictions provided accessible voting machines at polling places for the November 2004 general election. On the basis of our survey of a representative sample of local election jurisdictions nationwide, we estimate that 29 percent of all jurisdictions provided accessible voting machines at each polling place in the November 2004 general election. Further, more large and medium jurisdictions provided accessible voting machines than small jurisdictions. We estimate that 39 percent of large jurisdictions, 38 percent of medium jurisdictions, and 25 percent of small jurisdictions provided accessible voting machines at each polling place. The differences between both large and medium jurisdictions and small jurisdictions are statistically significant. Election officials from some small jurisdictions who provided written comments on our survey questionnaire expressed concerns about how this requirement would be implemented in their jurisdictions and whether electronic voting machines were the best alternative. 
For example, one respondent wrote: “We in a small town … and use paper ballots and that has worked very well in the past and I believe will work very well in the future. Voting machines should be decided on for much larger areas with a lot more than our 367 population with 150 voters.” Another wrote: “We are a small rural township with about 160 voters. Our 2004 election went well; as usual, we had no problems. We use paper ballots. We have some concerns with the implementation of HAVA. We are being forced to use expensive voting machines that will require expensive programming for every election. We are concerned about these costs.… If our limited budget can’t afford those expensive machines and programming, we may need to combine our township polling place with another township—maybe several townships. The additional driving to a different polling place miles away will discourage voters from voting—particularly our elderly residents. So these efforts (HAVA) to help voters will actually hinder voters.” In an effort to address these issues, Vermont, which has about 250 small and medium election jurisdictions that use paper and optical scan ballots, took an alternative approach to meeting the HAVA requirement, according to an election official. Instead of providing one DRE machine for each of its 280 polling places, Vermont plans to implement a secure vote-by-phone system that allows voters to mark a paper ballot, in private, using a regular telephone at the polling place. According to the Vermont Secretary of State’s Office, a poll worker uses a designated phone at the polling place to call a computer system located at a secure location and access the appropriate ballot for the voter. The computer will only permit access to the system from phone numbers that have been entered into the system prior to the election, and only after the proper poll worker and ballot access numbers have been entered.
The phone system reads the ballot to the voter, and after the voter makes selections using the telephone key pad, the system prints out a paper ballot that is automatically scanned by the system and played back to the voter for verification. The voter may then decide to cast the ballot or discard it and revote. The system does not use the Internet or other data network, and it produces a voter-verified paper ballot for every vote cast. In addition, according to an election official, voters are able to dial into a toll-free telephone number for at least 15 days prior to an election to listen to, preview, and practice with the actual ballot they will vote on Election Day. This is a way of providing a sample ballot to voters, as well as providing an opportunity for voters to become familiar with using the telephone system. For our October 2001 report on voters with disabilities, our analysis included a review of state statutes, regulations, and written policies pertaining to voting accessibility for all 50 states and the District of Columbia, as well as policies and guidelines for a statistical sample of 100 counties. As part of our 2005 surveys, we asked states to report on provisions concerning accessibility and asked local jurisdictions whether they provided accommodations or alternative voting methods for individuals with disabilities in the November 2004 general election. While the methodologies in the 2001 report and this report differ, results of our 2005 surveys show that states and local jurisdictions have taken actions to help improve voting for individuals with disabilities by, for example, using HAVA funds, taking steps to help ensure accessibility of polling places, and providing alternative voting methods or accommodations. Most states reported that they had spent or obligated HAVA funding to improve the accessibility of polling places, including providing physical or nonvisual access.
The results of our state survey of election officials show that as of August 1, 2005, 46 states and the District of Columbia reported spending or obligating HAVA funding for this purpose. For instance, election officials in a local jurisdiction we visited in Colorado told us they had used HAVA funds to improve the accessibility of polling places by obtaining input from the disability community, surveying the accessibility of their polling places, and reviewing the DRE audio ballot with representatives of the blind community. States and local jurisdictions reported taking a variety of actions designed to help ensure that polling places are accessible for voters with disabilities, including specifying guidelines or requirements, inspecting polling places to assess accessibility, and reporting by local jurisdictions on polling place accessibility to the state. In our October 2001 report on voters with disabilities, we noted that state involvement in ensuring polling places are accessible and the amount of assistance provided to local jurisdictions could vary widely. For example, in 2001 we reported that 29 states had provisions requiring inspections of polling places, and 20 states had provisions requiring reporting by local jurisdictions. According to our 2005 state survey, 43 states and the District of Columbia reported requiring or allowing inspections of polling places, and 39 states and the District of Columbia reported that they required or allowed reporting by local jurisdictions. From our local jurisdiction survey, we estimate that 83 percent of jurisdictions nationwide used state provisions to determine the accessibility requirements for polling places. During our site visits to local jurisdictions, we asked election officials to describe the steps they took to ensure that polling places were accessible. 
Election officials in many of the jurisdictions we visited told us that either local or state officials inspected each polling location in their jurisdiction using a checklist based on state or federal guidelines. For example, election officials in the 4 jurisdictions we visited in Georgia and New Hampshire told us that state inspectors conducted a survey of all polling locations. Election officials in the 2 jurisdictions we visited in Florida told us that they inspected all polling places using a survey developed by the state. Appendix IX presents additional information about state provisions for alternative voting methods and accommodations for the November 2000 and 2004 general elections. In addition to making efforts to ensure that polling places are accessible, some local jurisdictions provided alternative voting methods pursuant to state provisions (such as absentee voting) or accommodations at polling places (such as audio or visual aids) that could facilitate voting for individuals with disabilities. Table 15 presents results from our survey of local election jurisdictions about the estimated percentages of jurisdictions that provided alternative voting methods or accommodations to voters for the November 2004 general election. Election officials’ efforts to educate citizens can help minimize problems that could affect citizens’ ability to successfully vote on Election Day. Informing the public about key aspects of elections includes communicating how to register, what opportunities exist to vote prior to Election Day, where to vote on Election Day, and how to cast a ballot. This information can be distributed through a number of different media, including signs or posters, television, radio, publications, in-person demonstrations, and the Internet. In our October 2001 report on election processes, we stated that lack of funds was the primary challenge cited by election officials in expanding voter education efforts. 
From our 2001 survey of local election jurisdictions, we estimated that over a third of jurisdictions nationwide believed that the federal government should provide monetary assistance for voter education programs. Since the November 2000 election, voter education efforts have changed in several ways: HAVA requires certain information to be posted at polling places and authorizes the payment of federal funds to states for educating voters, and states and local jurisdictions reported expanding their voter education efforts. To help improve voters’ knowledge about voting rights and procedures, HAVA required election officials to post voting information at each polling place on the day of each election for federal office and authorized the payment of funding to states for such purposes. This required voting information includes a sample ballot, polling place hours, instructions on how to vote, first-time mail-in instructions, and general information on federal and state voting rights laws and laws prohibiting fraud and misrepresentation. Results of our state survey of election officials show that as of August 1, 2005, 40 states and the District of Columbia reported spending or obligating HAVA funding for voting information, such as sample ballots and voter instructions, to be posted at polling places. Election officials in all 28 jurisdictions we visited told us they posted a variety of voter information signs at polling places on Election Day 2004. Figure 28 illustrates examples of some of these signs. HAVA also authorized the payment of funding for voter education programs in general, and according to our state survey, as of August 1, 2005, 44 states and the District of Columbia reported spending or obligating HAVA funding for these programs.
For example, according to its HAVA plan, Florida required local election officials to provide descriptions of proposed voter education efforts, such as using print, radio, or television to advertise to voters, in order to receive state HAVA funds in fiscal years 2003 and 2004. Election officials in 2 jurisdictions we visited in Florida provided us information about voter education campaigns that they implemented. Election officials in 1 of these jurisdictions reported designing election advertisements to be shown on movie theater screens in the beginning of the summer season; election officials in the other jurisdiction told us they implemented a “Get Out the Vote” television advertising campaign with a cable company intended to reach hundreds of thousands of households during the weeks prior to the November 2004 general election. More local election jurisdictions appear to have taken steps to educate prospective voters prior to Election Day in 2004 than in 2000, and on the basis of our 2005 survey of local jurisdictions, more large and medium jurisdictions took these steps than small jurisdictions. In our October 2001 report on election processes, we noted that local election jurisdictions provided a range of information to prospective voters through multiple media. For example, on the basis of our 2001 survey of local jurisdictions, we reported that between 18 and 20 percent of local jurisdictions nationwide indicated they placed public service ads on local media, performed community outreach programs, or put some voter information on the Internet. On the basis of our 2005 survey, we estimate that more jurisdictions provided these measures. For instance, we estimate that 49 percent of all jurisdictions placed public service ads on local media, and 43 percent of all jurisdictions listed polling places on the Internet. 
However, increases in the overall estimates from the 2001 and 2005 surveys are, in part, likely due to differences in the sample designs of the two surveys and how local election jurisdictions that were minor civil divisions (i.e., subcounty units of government) were selected. Because of these sample design differences, comparing only election jurisdictions that are counties provides a stronger basis for making direct comparisons between the two surveys’ results. These county comparisons show increases as well. For instance, for the November 2000 election, we estimate that 21 percent of county election jurisdictions placed public service ads on local media, while for the November 2004 election, we estimate that 61 percent of county election jurisdictions placed such ads. In our 2005 survey, we also looked at whether there were differences between the size categories of jurisdictions, and generally, more large jurisdictions provided voter education prior to Election Day than medium and small jurisdictions. For instance, we estimate that 88 percent of large jurisdictions, 46 percent of medium jurisdictions, and 38 percent of small jurisdictions listed polling place locations on Internet Web sites. Table 16 presents estimated percentages of jurisdictions that provided various voter education steps prior to the November 2004 general election. Large jurisdictions may have provided voter education through multiple media in order to reach a broader audience of prospective voters. For instance, Web sites were used to provide information to voters by nearly all large jurisdictions. On the basis of our 2005 survey of local jurisdictions, we estimate that 93 percent of large jurisdictions, 60 percent of medium jurisdictions, and 39 percent of small jurisdictions had a Web site. The differences between all size categories are statistically significant. 
During our site visits, election officials in large jurisdictions described a variety of voter education mechanisms used to reach a number of prospective voters. For example, election officials in a large Nevada jurisdiction we visited told us that their office partnered with power, water, and cable companies to provide voter registration information in subscribers’ billing statements. Election officials in other jurisdictions we visited reported using a variety of other media to encourage participation or provide information to a broad audience of prospective voters. For example, figure 29 illustrates a billboard, cab-top sign, and milk carton used in local jurisdictions we visited. Whether or not all voters should be required to show identification prior to voting is an issue that has received attention in the media and reports since the November 2000 general election. Recent state initiatives that in general require voters to provide photo identification, such as Georgia’s, exemplify the challenge that exists throughout the election process in maintaining balance between ensuring access to all prospective voters and ensuring that only eligible citizens are permitted to cast a ballot on Election Day. Results of our state and local jurisdiction surveys show that while providing identification could be one of several methods used to verify identity, it was not required by the majority of states, nor was it the only way used to verify voters’ identities in the majority of local jurisdictions for the November 2004 election. Voter identification requirements vary in flexibility, in the number and type of acceptable identification allowed, and in the alternatives available for verifying identity if a voter does not have an acceptable form of identification.
Results of our state survey of election officials show that, for the November 2004 general election, 28 states reported that they did not require all prospective voters to provide identification prior to voting in person. Twenty-one states reported that they required all voters to provide identification prior to voting on Election Day 2004. However, 14 of these states reported allowing prospective voters without the required identification an alternative. In 9 of these 14 states, the alternative allowed the prospective voter to cast a regular ballot in conjunction with, for example, providing some type of affirmation of his or her identity. For example, Connecticut, in general, allowed voters who were unable to provide required identification to swear on a form provided by the Secretary of State’s Office that they are the elector whose name appears on the official registration list. Kentucky allowed an election officer to confirm the identity of a prospective voter by personal acquaintance or by certain types of documents if the prospective voter did not have the required identification. The other 5 states reporting that they offered an alternative did so through the use of a provisional ballot if a prospective voter did not have the required identification. For the November 2004 election, 5 of the 21 states that reported having identification requirements also had statutory provisions requiring, in general, that such identification include a photograph of the prospective voter. For the other 16 states that reported requiring identification, there was a range of acceptable forms of identification, including photo identification, such as a driver’s license, and other documentation, such as a copy of a government check or current utility bill with a voter’s name and address. Figure 30 presents information on the identification requirements for prospective voters for the November 2004 general election for all 50 states and the District of Columbia.
In our nationwide survey, we asked local jurisdictions about how they checked voters’ identities, such as by asking voters to state their name and address, verifying voters’ signatures, or asking voters to provide a form of identification or documentation. On the basis of this survey, we estimate that 65 percent of all local jurisdictions checked voters’ identification as one way to verify their identities on Election Day. However, in an estimated 9 percent of all jurisdictions, providing identification was the only way voters could verify their identities. Since the November 2004 general election, several states have reported that they have considered establishing identification requirements for all prospective voters, and some reported that they have implemented requirements. Results of our state survey show that at the time of our survey, 9 states reported having either considered legislation (or executive action) or legislation (or executive action) was pending to require voters to show identification prior to voting on Election Day. Four states, at the time of our survey, reported having taken action since November 2004 to require that voters show identification for in-person Election Day voting. For example, changes in Arizona law and procedure emanating from a November 2004 ballot initiative were finalized in 2005 after receiving approval from the Department of Justice. These Arizona changes require voters to present, prior to voting, one form of identification with the voter’s name, address, and photo, or two different forms of identification that have the name and address of the voter. 
Indiana enacted legislation in 2005 requiring, in general, that voters provide an identification document issued by the federal government or the state of Indiana with the voter’s name and photo prior to voting, whereas 2005 legislation in New Mexico and Washington imposed identification requirements but allowed prospective voters to provide one of several photo or nonphoto forms of identification. In all four states, if voters are not able to provide a required form of identification, they are allowed to cast a provisional, rather than a regular, ballot. Finally, a state that had identification requirements in place for the November 2004 general election may have taken additional actions to amend such requirements. Georgia, for instance, required voters in the November 2004 general election to provide 1 of 17 types of photo or nonphoto identification. In 2005 Georgia enacted legislation that, in general, amended and reduced the various forms of acceptable identification and made the presentation of a form of photo identification, such as a driver’s license, a requirement to vote. Having enough qualified poll workers to set up, open, and work at the polls on Election Day is a crucial step in ensuring that voters are able to successfully vote on Election Day. The number of poll workers needed varies across jurisdictions, and election officials recruit poll workers in a variety of ways using different sources and strategies. Some poll workers are elected, some are appointed by political parties, and some are volunteers. Election officials in jurisdictions we visited reported considering several different factors—such as state requirements, registered voters per precinct, historical turnout, or poll worker functions at polling places—to determine the total number of poll workers needed. On the basis of our survey of local jurisdictions, we estimate that recruiting enough poll workers for the November 2004 general election was not difficult for the majority of jurisdictions.
However, large and medium jurisdictions encountered difficulties to a greater extent than small jurisdictions. To meet this need, election officials recruited poll workers from numerous sources, including in some cases, high schools and local government agencies, to help ensure that they were able to obtain enough poll workers for Election Day. Poll workers with specialized characteristics or skills were also difficult for some large and medium jurisdictions to find. Election officials in some jurisdictions we visited reported that finding qualified poll workers could be complicated by having a limited pool of volunteers willing to work long hours for low pay. Poll worker reliability continued to be a challenge for some jurisdictions—especially large jurisdictions—that depend on poll workers to arrive at polling places on time on Election Day. We estimate that recruiting enough poll workers for the November 2004 general election was not difficult for the majority of jurisdictions, and may have been less of a challenge for the November 2004 election than it was for the November 2000 election. For example, on the basis of our 2001 survey of local jurisdictions, we estimate that 51 percent of county election jurisdictions found it somewhat or very difficult to find a sufficient number of poll workers for the November 2000 election. In contrast, from our 2005 survey, we estimate that 36 percent of county election jurisdictions had difficulties obtaining enough poll workers for the November 2004 election. In our 2005 survey, there are differences between size categories of election jurisdictions in the difficulties encountered obtaining a sufficient number of poll workers, with more large and medium jurisdictions encountering difficulties than small jurisdictions.
As shown in figure 31, we estimate that 47 percent of large jurisdictions, 32 percent of medium jurisdictions, and 14 percent of small jurisdictions found it difficult or very difficult to obtain a sufficient number of poll workers. Election officials in large and medium jurisdictions, with typically more polling places to staff, are generally responsible for obtaining more poll workers than officials in small jurisdictions. For example, election officials in a large jurisdiction we visited in Illinois told us that recruiting enough poll workers for Election Day was always a challenge and November 2004 was no different. They said that state law specifies a minimum of 5 poll workers per precinct, and there were 2,709 precincts in their jurisdiction for the November 2004 general election, requiring at least 13,545 poll workers. In contrast, election officials in a small jurisdiction we visited in New Hampshire told us that they never had difficulties finding poll workers because they were able to use a pool of volunteers to staff the 9 poll worker positions at their one polling place. While election officials in 10 of the 27 large and medium jurisdictions we visited told us they had difficulties recruiting the needed number of poll workers, election officials in the other 17 jurisdictions did not report difficulties. These officials provided a variety of reasons why they did not encounter difficulties, including having a set number of appointed or elected poll workers for each precinct, having a general public interest in being involved in a presidential election, and using a variety of strategies and sources to recruit poll workers. 
For example, election officials in a large jurisdiction in New Mexico told us that their lack of problems with recruitment was due to the fact that they had a full-time poll worker coordinator who began the search for poll workers very early and, as a result, was able to fill all of the positions needed (about 2,400) for the November 2004 election. Election officials in other large jurisdictions reported that they were able to obtain enough poll workers by relying on multiple sources. For example, election officials in a large jurisdiction in Kansas told us that they made an exhaustive effort to recruit about 1,800 poll workers for the November 2004 general election that included soliciting from an existing list of poll workers, working with organizations, using a high school student program to obtain about 300 student poll workers, recruiting from a community college, using county employees, and coordinating with the political parties. On our nationwide survey we asked local jurisdictions about the sources they used to recruit poll workers for the November 2004 general election, and table 17 presents estimates from this survey on a variety of sources that jurisdictions used. In our October 2001 report on election processes, we identified several recruiting strategies that election officials reported helped in their efforts to obtain enough poll workers. On the basis of our local jurisdictions survey, student poll workers and county or city employees were used as sources for poll workers by many medium and large jurisdictions in the November 2004 general election, as shown in table 17. These two sources were also cited as having worked well by election officials in several of the jurisdictions we visited. 
For example, election officials in a jurisdiction in Colorado told us that their high school student poll worker programs helped them to obtain a sufficient number of skilled poll workers and reported that 200 of their approximately 600 poll workers were high school students. Election officials in other jurisdictions we visited reported that high school students often helped them in obtaining enough poll workers with specialized skills or characteristics, such as needed language skills. According to our state survey, 38 states and the District of Columbia reported allowing poll workers to be under the age of 18. Local government offices were another source of poll workers for the November 2004 general election. As shown in table 17, we estimate that 65 percent of large jurisdictions, 25 percent of medium jurisdictions, and 12 percent of small jurisdictions recruited poll workers from city or county government offices. For example, election officials in a large jurisdiction in Nevada told us that the chief poll worker at most of the jurisdiction’s 329 polling places is a county employee, and described benefits of recruiting local government employees as poll workers, including their experience in dealing with the public. The specific skills and requirements needed for poll workers vary by jurisdiction, and in some cases by precinct, but can include political party affiliation, specific technical or computer skills, or proficiency in languages other than English. On the basis of our survey of local jurisdictions, we estimate that most jurisdictions nationwide did not encounter difficulties recruiting poll workers with these specific skills and requirements. However, the results show that the ease of obtaining poll workers with these skills varied by the size of the election jurisdiction, with large and medium jurisdictions generally experiencing more difficulties than small jurisdictions. Some states require political balance between poll workers at polling places.
For example, New York election law requires that each election district be staffed with four election inspectors (i.e., chief poll workers) and a variable number of poll workers (depending upon specified conditions), and that appointments to such positions for each election district be equally divided between the major political parties. Election officials in some jurisdictions we visited told us that even though not required, they tried to maintain a balance in poll workers’ political party affiliation. Recruiting enough poll workers with specific political party affiliations continued to be a challenge for some, in particular large and medium jurisdictions. From our local jurisdiction survey, we estimate that 49 percent of large jurisdictions, 41 percent of medium jurisdictions, and 22 percent of small jurisdictions had difficulties recruiting enough Democratic or Republican poll workers, as shown in figure 32. Election officials in 11 of the 28 jurisdictions we visited reported experiencing some difficulties finding enough poll workers with needed party affiliations. For example, election officials in a jurisdiction in Connecticut told us that because their jurisdiction was predominantly one political party, it was difficult to find minority party poll workers. Election officials in these 11 jurisdictions told us that they recruited independents, unaffiliated persons, or student poll workers to fill minority party poll worker positions. Recruiting poll workers with necessary information technology skills or computer literacy was also a challenge for some large and medium jurisdictions, according to our survey of local jurisdictions. We estimate that 34 percent of large jurisdictions and 28 percent of medium jurisdictions found it difficult or very difficult to obtain poll workers with these skills, whereas we estimate that 5 percent of small jurisdictions had difficulties, as shown in figure 33.
Election officials in 23 of the 28 jurisdictions we visited told us that computer or technically skilled poll workers were not needed in their jurisdictions for the November 2004 general election. However, election officials in some of these jurisdictions reported that they foresaw a need for poll workers with these skills with the implementation of electronic poll books or new voting technology. Among the reasons cited for not needing technically skilled poll workers were the use of paper ballots or lever machines, the ease of use of DRE voting equipment, and that any needed skills were taught. In addition, election officials in many jurisdictions we visited told us that they recruited and trained technicians or troubleshooters to maintain, repair, and in some cases set up voting equipment prior to Election Day. Some jurisdictions may be required under the language minority provisions of the Voting Rights Act to, in general, provide voting assistance and materials in specified minority languages in addition to English. We asked on our survey of local jurisdictions whether jurisdictions encountered difficulties recruiting poll workers who were fluent in the languages covered under the Voting Rights Act for their jurisdiction and estimate that for the majority (61 percent) of all jurisdictions, this requirement was not applicable. We estimate that 15 percent of all jurisdictions indicated that recruiting poll workers fluent in languages other than English was difficult or very difficult. Jurisdictions of all size categories may encounter difficulties recruiting poll workers with needed language skills for different reasons. For instance, small jurisdictions may find it difficult to recruit enough poll workers fluent in other languages because of a limited pool of potential recruits, whereas large jurisdictions may be required to provide voters with assistance in multiple languages other than English. 
Los Angeles County, for example, was required to provide voters with assistance in six languages other than English for the November 2004 election. Election officials in some of the large jurisdictions we visited reported encountering difficulties obtaining poll workers with needed language skills, but these officials also told us about their efforts to recruit poll workers with language skills. For example, election officials in a large jurisdiction in Illinois reported that they recently established an outreach department to assist in the recruitment of poll workers with specialized language skills. The jurisdiction has hired outreach coordinators for the Hispanic, Polish, and Chinese communities to assist with recruiting. Figure 34 illustrates materials used by election officials in some jurisdictions we visited to recruit poll workers with a variety of skills for the November 2004 general election. In our October 2001 report on election processes, we identified long hours, low pay, and an aging volunteer workforce as factors that complicated election officials’ efforts to recruit enough poll workers. Election officials in some, but not all, of the jurisdictions we visited in 2005 told us that one or more of these factors complicated their efforts to find enough quality poll workers for the November 2004 general election. For example, election officials in a large jurisdiction in Nevada told us that it was difficult to find people who wanted to work, considering that most families are two-income households and Election Day is a long (14-hour), grueling day. Election officials in a large jurisdiction in Washington told us that they never have enough poll workers, noting that the pay is minimal, the hours are long, and the majority of the poll worker population is elderly. Election officials in several of these jurisdictions we visited reported concerns about finding poll workers in light of a limited pool of volunteers.
For example, election officials in a large jurisdiction in Colorado told us the average age of poll workers was over 70 and expressed concerns about obtaining poll workers who could physically work a 12-hour day. Alternatively, election officials in a large jurisdiction in Florida told us that the younger generation does not have the same commitment to civic duty that the older poll worker generation had, and that recruiting enough qualified poll workers may be a challenge in the future. These officials noted that about three-quarters of their poll workers are return participants. An election official in a large jurisdiction in Pennsylvania, where the median age of poll workers is about 75, suggested that serving as a poll worker should be treated similarly to serving on jury duty: it should be everyone’s civic duty to serve as a poll worker. In our October 2001 report on election processes, we noted that poll worker reliability was a challenge for election officials, who depended on poll workers to arrive on time and to open and set up polling places. Poll worker absenteeism was a challenge for large and, to some extent, medium jurisdictions in the November 2004 general election. On the basis of our nationwide survey of local jurisdictions, we estimate that 61 percent of large jurisdictions, 20 percent of medium jurisdictions, and 2 percent of small jurisdictions encountered problems with poll workers failing to show up on Election Day. The differences between all size categories are statistically significant. One way that election officials in several large jurisdictions we visited minimized the impact of poll worker absenteeism was to recruit backup poll workers to ensure that polling places were set up and adequately staffed, even if some poll workers failed to show up. For example, election officials in a large jurisdiction we visited in Illinois reported that approximately 1 to 2 percent of about 13,000 poll workers did not show up on Election Day.
However, these officials reported that they had recruited stand-by judges who were to report to the elections office on Election Day in case an already scheduled judge did not show up. Election officials in a few other jurisdictions we visited told us that they called poll workers before Election Day to help ensure they showed up. For instance, election officials in a large jurisdiction in Pennsylvania told us that they called all of the chief poll workers—about 1,300 people—during the week prior to the election. Election officials in a large jurisdiction we visited in Connecticut went a step further, reporting that in addition to placing wake-up calls to all of the chief poll workers, they offered rides to poll workers to help ensure they showed up on time. Voters’ experiences on Election Day are largely informed by their interactions with poll workers, who are responsible for conducting many Election Day activities, such as setting up polling places, checking in voters and verifying their eligibility to vote, providing assistance to voters, and closing the polling places. Although these workers are usually employed only for 1 day, the success of election administration partly depends on their ability to perform their jobs well. Depending on the applicable state requirements and the size of the jurisdiction, the steps that election officials take to adequately prepare all of their poll workers can vary, but may include training, testing, or certification. Ensuring that poll workers were adequately trained for Election Day was a challenge reported by some election officials in large and medium jurisdictions we visited, but these officials also reported a variety of steps they took to help prepare poll workers for Election Day. Most states and the District of Columbia reported having training requirements for poll workers for the November 2004 general election, but the frequency and content of training varied.
Some states also reported providing guidance related to the training of poll workers. According to our state survey, for the November 2004 general election, 18 states reported both having poll worker training requirements and providing guidance; 20 states and the District of Columbia reported having training requirements only; 9 states reported providing guidance only; 1 state reported that it neither required training nor provided guidance; and Oregon, which conducted all-mail voting on Election Day 2004, indicated this requirement was not applicable. Figure 35 shows reported state requirements for training for the chief poll worker at a precinct or polling place and for poll workers. About half of the states with training requirements reported requiring that poll workers be trained prior to every election or every general election. According to our survey, of the 38 states and the District of Columbia that reported having training requirements for poll workers, 22 states and the District of Columbia reported requiring poll workers to be trained prior to every election or every general election. For example, Florida provisions in place for the November 2004 general election required that poll workers have a minimum of 3 hours of training prior to each election and demonstrate a working knowledge of the laws and procedures relating to voter registration, voting system operation, balloting, and polling place procedures, and problem-solving and conflict resolution skills. These Florida provisions also required, among other things, that local election officials contract with a “recognized disability-related organization” to develop and assist with training for disability sensitivity programs, which had to include actual demonstrations of obstacles confronted by persons with disabilities during the voting process, including obtaining access to the polling place and using the voting system.
Ten states reported requiring that poll workers be trained on a scheduled basis (e.g., yearly or every 2 years). For example, under provisions in place for the November 2004 general election, New Jersey required that all district board members attend training sessions for each election at least once every 2 years. The other 6 states reported that training was required at least once, but not prior to every general election; that the frequency of training was not specified; or that they did not know. For the November 2004 general election, fewer states reported requiring testing or certification than training for poll workers. According to our state survey, 12 states reported having requirements for testing or certification for poll workers, and 16 states reported having these requirements for the chief poll worker at a precinct or polling place. Election officials in 6 of the 28 jurisdictions we visited reported that poll workers were certified or tested after training. Election officials in 6 other jurisdictions told us that they used informal tests or quizzes or informally monitored poll workers’ performance in training. For instance, election officials in a jurisdiction in Kansas told us that they gave poll workers a nongraded quiz at the end of training. In Nevada, where state election officials indicated in our state survey that there are no requirements for poll worker training or testing, election officials in the 2 jurisdictions we visited told us that they required poll workers to attend training. Election officials in 1 of these jurisdictions required all poll workers to attend a training class each year and to pass a hands-on performance test in which they demonstrated their ability to perform their assigned function, such as checking in voters or programming the DRE voting equipment. Training provided to poll workers varies greatly among local election jurisdictions.
Therefore, we asked questions about training challenges as part of our site visits only where we were able to gain an understanding of the types of training and specific conditions faced by local jurisdictions. Election officials in a small jurisdiction we visited in New Hampshire reported that they did not conduct training for the November 2004 general election because poll workers only receive training if they have not previously worked in the polling place, and all nine poll workers had worked in the polling place before. Election officials in the 27 other jurisdictions we visited described the training that they provided poll workers for the November 2004 general election. According to these officials, poll worker training generally occurred in the weeks or month before the election and ranged from 1 hour to 2 days, depending on the type of poll worker being trained. Election officials in most of these jurisdictions reported that training was mandatory. However, the frequency varied, with election officials in the majority of jurisdictions reporting that they required training prior to every election. Election officials in a few jurisdictions reported that poll workers received training at least once or on a scheduled basis, such as once every 2 years. Election officials in many jurisdictions told us that poll workers were paid to attend training, and payments could range from $5 to $50. While election officials in nearly all of these jurisdictions reported that training was conducted by these officials and their staffs, the manner in which the training was conducted varied. For example, election officials in a large jurisdiction in Nevada told us that poll workers were trained in a workshop fashion in which 15 to 20 poll workers were provided hands-on training for their specific function, such as operating voting machines or processing voters. 
In a large jurisdiction in Kansas, election officials told us that they conducted training for between 70 and 100 poll workers using a formal presentation as well as the documents poll workers use on Election Day and the voting equipment. Election officials in a large jurisdiction in Washington told us that poll worker training consisted of a PowerPoint presentation conducted in a train-the-trainer style where election officials trained the chief poll workers, who then trained the poll workers. Election officials in 9 of the 27 large and medium jurisdictions we visited reported encountering some challenges with training poll workers, but generally reported that they overcame them. Some of the challenges reported by these officials included keeping poll workers informed about new or changing requirements, conveying a vast amount of information about election processes to a large number of people in a limited time, and ensuring that poll workers understood their tasks and responsibilities. For instance, election officials in a large jurisdiction in Ohio told us that it was challenging to keep up with state changes and incorporate them into poll worker training. Election officials in a large jurisdiction in Connecticut told us that effectively training poll workers on a variety of new changes (such as those required by HAVA) could be challenging because the procedures can be difficult to understand, especially for tenured poll workers who have been working at the polls for many years. Election officials in a large jurisdiction in Kansas noted that developing a systematic way to evaluate poll worker performance at polling places was a challenge. These officials said that they currently rely on the fact that the poll worker showed up, general observations of the poll workers’ performance, and feedback cards completed by voters exiting the polls.
Election officials in the jurisdictions we visited reported taking steps to address these challenges, such as providing poll workers training manuals or booklets for reference on Election Day, training poll workers to perform one function, and conducting training in a workshop fashion with smaller class sizes. Election officials and poll workers perform many tasks throughout the day to ensure that elections run smoothly and that voters move efficiently through the polling place. These activities can include checking in voters, providing instructions for voting machine operation, or assisting voters at the polls. We asked on our survey of local jurisdictions whether for the November 2004 general election jurisdictions encountered poll workers failing to follow procedures for a variety of activities, including, among others, procedures for voter identification requirements, providing correct instructions to voters, and voting machine operation. Overall, according to this survey, most local election jurisdictions nationwide did not encounter problems with poll worker performance. For example, we estimate that 90 percent of all jurisdictions did not encounter poll workers failing to follow procedures related to voter identification requirements, 92 percent of all jurisdictions did not encounter poll workers failing to provide correct instructions to voters, and 94 percent of all jurisdictions did not encounter poll workers failing to follow procedures for voting machine operation. However, we estimate that poll worker performance problems encountered varied by size category of jurisdiction, with more large jurisdictions encountering problems than medium and small jurisdictions. For example, we estimate that 37 percent of large jurisdictions, 19 percent of medium jurisdictions, and 3 percent of small jurisdictions encountered problems with poll workers failing to follow procedures related to voter identification requirements. 
In terms of providing correct instructions to voters, we estimate that 31 percent of large jurisdictions, 12 percent of medium jurisdictions, and 1 percent of small jurisdictions encountered problems with poll worker performance in this area. For both results, the differences between all size categories are statistically significant. Large jurisdictions could have encountered problems for a variety of reasons, including having more poll workers to train and oversee or having fewer options for recruiting skilled poll workers. While jurisdictions may have reported on our survey that they encountered problems with a particular aspect of poll workers’ performance, written comments provided on the questionnaire indicated that these problems may not have been widespread or may have been easily remedied after they occurred. For example, one survey respondent wrote: “Errors were few and far between, but with 4,500 poll workers, it is very difficult to answer that [our jurisdiction did not encounter any problems with poll workers’ performance.]” Election officials in 12 of the 28 jurisdictions we visited reported that they encountered some problems with poll workers’ performance, but that generally the majority of poll workers performed well. For example, an election official in a large jurisdiction in Pennsylvania we visited told us that while the jurisdiction did not encounter serious problems with performance, in the official’s opinion, it would be disingenuous to report that there were no problems with the 6,500 poll workers working the polls on Election Day. In an effort to minimize poll worker confusion or performance problems, many jurisdictions provided written guidelines or instructions for poll workers to use at the polling place. 
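The size-category comparisons above are described as statistically significant. The report does not show the test used, and GAO's estimates reflect a weighted survey design, but the basic idea can be sketched with a plain two-proportion z-test on hypothetical, unweighted counts chosen to mirror the 37 percent versus 19 percent comparison; the sample sizes of 120 and 300 are illustrative, not from the report.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 44 of 120 large jurisdictions (about 37 percent)
# versus 57 of 300 medium jurisdictions (19 percent) reporting the problem.
z = two_proportion_z(44, 120, 57, 300)
print(abs(z) > 1.96)  # |z| above 1.96 is significant at the 5 percent level
```

With these illustrative counts the difference easily clears the 5 percent threshold; an actual survey analysis would also account for stratification and weighting.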
On our nationwide survey we asked local jurisdictions whether or not for the November 2004 general election they had written guidelines or instructions at the polling place for poll workers covering a variety of topics, such as voting equipment operation; procedures related to verifying voters’ eligibility to vote; and assisting voters with special needs, such as voters with disabilities or who spoke a language other than English. We estimate that 94 percent of all jurisdictions had at least one set of written guidelines at polling places for poll workers. Further, more large and medium jurisdictions provided instructions to poll workers than small jurisdictions. For example, we estimate that 99 percent of large jurisdictions, 96 percent of medium jurisdictions, and 80 percent of small jurisdictions provided written instructions for poll workers to use at polling places if a voter’s name was not on the poll list. In addition, we estimate that 96 percent of large jurisdictions, 92 percent of medium jurisdictions, and 71 percent of small jurisdictions provided written guidelines to use at the polls for identification requirements for first-time voters who registered by mail and did not provide identification with their registration. For both of these results, small jurisdictions are statistically different from both medium and large jurisdictions. During our site visits, election officials in 26 of the 28 jurisdictions we visited reported that they provided written instructions or checklists for poll workers to have at polling places. Election officials in the 2 smallest population size jurisdictions we visited reported that they did not provide written instructions for poll workers. As the officials in a small jurisdiction in New Hampshire said, they are at the polling place to resolve issues personally as they arise. Figure 36 illustrates examples of some checklists that election officials in jurisdictions we visited provided to us. 
Written instructions and checklists may help poll workers, but problems can still arise on Election Day, particularly issues related to voter registration. We asked on our survey of local jurisdictions whether for the November 2004 general election jurisdictions maintained a written record to keep track of issues or problems that occurred on Election Day. We estimate that 55 percent of all jurisdictions nationwide maintained a written record to keep track of issues. Of those that did maintain a record and provided written comments on our survey, the issues most frequently cited by election officials were problems with voter registration (e.g., not being registered, being registered at another polling location, or being in the wrong polling location). Election officials are responsible for selecting and securing a sufficient number of polling places that meet basic requirements and standards. Polling place locations vary across jurisdictions but can include public and private facilities, such as schools, government buildings, fire departments, community centers, libraries, churches, and residential facilities. To meet the needs of the voting population, polling places should be easily accessible to all voters, including voters with disabilities. Polling places also need to have a basic infrastructure, including electricity, heating and cooling units, and communication lines, to support some voting machines and be comfortable for voters and poll workers. In our October 2001 report on election processes, we stated that obtaining polling places for the November 2000 election was not a major challenge for most jurisdictions. On the basis of our 2005 survey of local jurisdictions, obtaining a sufficient number of polling places was not difficult for the majority of jurisdictions. However, finding polling places that met these standards was generally more difficult for large and medium jurisdictions than for small jurisdictions.
Election officials in many jurisdictions reported combining precincts in one polling place, with minimal challenges, for the November 2004 general election. For the November 2004 election, obtaining a sufficient number of polling places was not difficult for the majority of jurisdictions. On the basis of our survey of local jurisdictions, we estimate that 3 percent of all jurisdictions found it difficult or very difficult to obtain a sufficient number of polling places for the November 2004 general election. However, the difficulty encountered in finding enough polling places varied by the size category of jurisdiction. We estimate that 14 percent of large jurisdictions, 8 percent of medium jurisdictions, and 1 percent of small jurisdictions had difficulties obtaining enough polling places, as presented in figure 37. Small jurisdictions may not experience difficulties obtaining polling places for a variety of reasons, among them that they do not have to find as many locations to support an election as large jurisdictions do. For example, election officials in a small jurisdiction we visited in New Hampshire told us that because of the small voting population (about 1,200), they only needed to use one polling place—the town hall—for the November 2004 general election, as shown in figure 38. In contrast, large jurisdictions could be responsible for selecting hundreds of polling places for Election Day. Election officials from a large jurisdiction we visited in Illinois reported that they used over 1,800 polling places for the November 2004 election and hired staff to find polling places that met standards for their jurisdiction. Although election officials in some large and medium jurisdictions told us that they needed to find numerous polling places, officials in only 1 large jurisdiction we visited in Kansas told us that they encountered difficulties finding suitable polling places, in part because of low payments provided to use polling place facilities.
Election officials in this jurisdiction reported that in 2003 they implemented a campaign to “recruit” polling places and sent letters to schools and other possible locations in addition to conducting site visits and inspections. These election officials reported that after their efforts, they added about 70 polling places for use on Election Day 2004. Selecting accessible polling places includes assessing parking areas, routes of travel, exterior walkways, and entrances, as well as interior voting areas. In our October 2001 report on voters with disabilities, we identified a variety of challenges faced by election officials in improving the accessibility of voting—including the limited availability of accessible buildings and the lack of authority to modify buildings to make them more accessible. Finding accessible polling places continued to be a challenge for some jurisdictions for the November 2004 general election. On the basis of our local jurisdiction survey, we estimate that 36 percent of large jurisdictions, 25 percent of medium jurisdictions, and 5 percent of small jurisdictions found it difficult or very difficult to find enough accessible polling places, as shown in figure 39. Election officials in some jurisdictions we visited told us that they encountered challenges finding accessible polling places. For example, election officials in 2 large jurisdictions we visited reported that it was challenging to find polling places that were accessible because many of the public buildings in their jurisdiction were older facilities and were not compliant with the Americans with Disabilities Act (ADA). However, election officials reported taking steps to help ensure that polling places were accessible. 
For example, election officials in a large jurisdiction in Georgia reported that they hired a private company to conduct surveys of the polling locations and determine whether they were accessible and what, if any, changes needed to be made to make the facilities compliant. Some election officials described making minor or temporary modifications to polling places to ensure that they were accessible, for example, by adding ramps, using doorstops for heavier doors, or clearly identifying accessible entrances. In addition to being accessible for all voters, polling places should have sufficient parking for voters and phone lines to provide for communication on Election Day. From our local jurisdiction survey, more large and medium jurisdictions encountered difficulties in finding polling places with these characteristics than small jurisdictions. On the basis of this survey, we estimate that 38 percent of large jurisdictions, 18 percent of medium jurisdictions, and 4 percent of small jurisdictions had difficulties obtaining polling places with adequate parking. The differences between all size categories are statistically significant. In terms of adequate phone lines, we estimate that 35 percent of large jurisdictions, 33 percent of medium jurisdictions, and 9 percent of small jurisdictions had difficulties. Providing cell phones to poll workers was one way for some jurisdictions to help ensure communication between polling places and the election office on Election Day. Also on the basis of our survey, we estimate that cell phones provided by the jurisdiction were the primary means of communication for 29 percent (plus or minus 9 percent) of large jurisdictions, 15 percent (+9 percent, -6 percent) of medium jurisdictions, and 3 percent of small jurisdictions.
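The intervals attached to the estimates above (e.g., 29 percent, plus or minus 9 percent) are survey confidence intervals. GAO's margins reflect its stratified sample design, and asymmetric intervals such as +9/-6 come from different interval methods, but the basic normal-approximation calculation can be sketched as follows; the sample size of 120 is hypothetical, not from the report.

```python
import math

def wald_halfwidth(p_hat, n, z=1.96):
    """Half-width of a 95 percent normal-approximation (Wald) CI for a proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical: a 29 percent estimate from a simple random sample of 120
# large jurisdictions (the sample size is illustrative only).
half = wald_halfwidth(0.29, 120)
print(round(half * 100, 1))  # half-width in percentage points
```

Larger samples shrink the half-width in proportion to the square root of n, which is one reason estimates for the more numerous small jurisdictions carry tighter margins.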
For both of these results, the differences between both large and medium jurisdictions and small jurisdictions are statistically significant. Election officials in some large jurisdictions we visited included cell phones as part of the supplies provided to each polling place. In contrast, officials in a large jurisdiction we visited in Nevada told us they paid poll workers $5 to use their own cell phones.

We identified several strategies in our October 2001 report on election processes that election officials said helped in their efforts to obtain enough polling places, including locating more than one precinct at a single polling place. Results of our 2005 state and local surveys and site visits show that combining precincts at a polling location continued to be a strategy used by local jurisdictions, predominantly large and medium jurisdictions, to find adequate polling locations for voters in all precincts. According to our state survey, nearly all states (47) reported that they allowed precincts to be colocated in a polling place for the November 2004 general election. Ten states reported allowing colocation only under specified conditions, for instance, if no suitable polling place existed for a precinct, and 37 states reported allowing colocation but did not specify conditions. On the basis of our survey of local jurisdictions, we estimate 33 percent of all jurisdictions had multiple precincts located in the same polling place. However, more large and medium jurisdictions combined precincts than small jurisdictions. We estimate that 78 percent of large jurisdictions, 63 percent of medium jurisdictions, and 19 percent of small jurisdictions had multiple precincts located in the same polling location. The differences between all size categories are statistically significant. During our site visits, election officials in 22 of the 28 jurisdictions we visited told us that they combined precincts in the same polling location for the November 2004 general election.
The 6 jurisdictions that did not report combining precincts in a single polling place included the 1 small and 2 medium jurisdictions we visited. Further, in many of the large jurisdictions we visited, election officials told us that most of their polling places had more than one precinct. For example, election officials in a large jurisdiction in Ohio told us that there was an average of three precincts per polling location, but that there could be up to nine precincts in one polling place. Although combining precincts may help solve the issue of obtaining a sufficient number of voting places that meet requirements, other challenges may surface, including voter confusion in not finding the correct precinct at a location, poll worker confusion about eligibility if a voter is not in the correct precinct poll book at a polling place, and the possibility of voters voting on the wrong voting machine for their precinct. However, on the basis of our local survey, few challenges were encountered in polling places where precincts were combined for the November 2004 general election. We estimate that of the 33 percent of jurisdictions with multiple precincts at a polling location, 85 percent (+6 percent, -5 percent) did not experience challenges in terms of voters locating their correct precinct. Election officials in jurisdictions we visited described steps they took to help ensure that voters were able to easily find their correct precinct, including posting signage to direct voters to the correct precinct, using specially designated poll workers as greeters to direct voters as they entered the polling location, setting up separate tables or voting areas for each precinct, and locating the precincts in distinct areas of the building, for example, in the gym and cafeteria of a school building.
Election officials in a few jurisdictions we visited told us that they consolidated functions, such as the check-in table or voting equipment, for precincts located in the same polling location in order to avoid voter confusion or problems with voting. For example, election officials in a jurisdiction in Kansas reported that they used one registration table with a consolidated poll book for all precincts at a polling location. As a result, voters only needed to locate one table. Election officials in a jurisdiction in Nevada reported that once voters checked in, they were able to vote on any voting machine in the polling location because the machines were programmed with ballots from each of the precincts located at the polling place, and poll workers activated the particular ballot style for a particular voter.

Beyond consolidating some functions at a polling place, in 2004 Colorado authorized the use of “vote centers,” which are polling places at which any registered voter in the local election jurisdiction may vote, regardless of the precinct in which the voter resides. Each vote center is to use a secure electronic connection to a computerized registration list maintained by the local election office to allow all voting information processed at any vote center to be immediately accessible to computers at all other vote centers in the jurisdiction. Larimer County, with 143 precincts and about 200,000 registered voters, reported using 31 vote centers for the November 2004 general election. Election officials in Larimer County described several benefits of vote centers, including voter convenience; cost-effectiveness; minimal voter wait time on Election Day; and overall easier management, including requiring fewer poll workers. Election officials told us that voters liked the convenience of being able to vote anywhere in the jurisdiction, regardless of the precinct they live in.
Vote centers can also be cost-effective, according to election officials, for jurisdictions faced with replacing voting equipment to comply with HAVA accessibility requirements for voting systems used in federal elections. Using vote centers also reduces the number of polling places a jurisdiction needs, which can reduce the cost and difficulty of finding enough accessible polling places. Election officials also told us that on Election Day they were able to avoid having long lines at most vote centers by issuing media announcements to voters throughout the day specifying which vote centers were busy and which were not, and by using their electronic poll book technology to process voters quickly and to monitor ballots and supplies. Officials told us that on average there was a 15-minute wait time for voters. Finally, officials told us that from the perspective of election officials, vote centers facilitated aspects of election administration because there were fewer locations (about 30 instead of about 140) and fewer poll workers overall to recruit and train.

While other jurisdictions in Colorado have used vote centers since the November 2004 election or are planning to pilot vote centers in elections in 2006, election officials in a second jurisdiction we visited in Colorado explained why their jurisdiction opted not to use vote centers. Officials told us that their jurisdiction assessed the feasibility of implementing vote centers and concluded that despite several advantages offered by vote centers, the cost of implementation was prohibitive. For example, election officials identified costs including the connectivity for the electronic poll books, so that voters can be credited with voting in real time; potential rental costs for facilities, such as hotels, to house vote centers; and the expense of purchasing additional voting equipment.
Because a voter in a jurisdiction using vote centers can vote at any vote center, each vote center needs to be stocked with all applicable ballot styles for an election or have DRE voting machines capable of being programmed with all applicable ballot styles, according to election officials. For the November 2004 general election, these officials told us that they used optical scan ballots for absentee and Election Day voting and DREs for early voting. To avoid the cost and confusion of having to print and keep track of ballot styles for their 378 precincts—compared to Larimer County’s 143 precincts—election officials said that they would need to purchase additional DRE voting machines if they were to implement vote centers.

Election officials are responsible for designing ballots that meet various state requirements, possibly federal requirements under the minority language provisions of the Voting Rights Act relating to offering voting materials in specified minority languages in addition to English, and the requirements of the particular voting equipment, all while keeping the ballots easy for voters to understand. Ballot design generally involves both state and local participation. Most states (46 states and the District of Columbia) were involved in ballot design for the November 2004 general election. For instance, according to our state survey, 17 states and the District of Columbia reported designing ballots for local jurisdictions, 3 states reported requiring approval of the ballot design, and 26 states reported having requirements for local jurisdictions regarding ballot design (e.g., layout, candidate order, or paper stock). Specifically, election officials must determine all races, candidates, and issues that voters in each precinct in a jurisdiction will vote on and construct layouts for these races and issues for the particular types of ballots used with their election equipment.
Figure 40 illustrates an optical scan ballot used in El Paso County, Colorado, for the November 2004 general election. In our October 2001 report on election processes, we noted that despite the controversy over the “butterfly ballot” and other ballot problems in the aftermath of Florida’s 2000 general election, very few jurisdictions nationwide thought that confusing ballot design was a major problem. Ballot design was not highlighted by voters as a problem in the November 2004 election; therefore, we did not inquire about the extent of ballot design problems in our local survey of jurisdictions. However, we asked about ballot design processes and problems during our visits to local election jurisdictions. Election officials in all of the jurisdictions we visited reported that they did not encounter voter problems with confusing ballot designs for the November 2004 general election. However, election officials in 7 jurisdictions we visited told us that designing easily understood ballots that meet the particular constraints of the voting equipment can be challenging when there are a large number of races or issues to include on the ballot. For example, election officials in a jurisdiction we visited in Colorado that used optical scan ballots told us that fitting all of the races and questions on the ballot is always challenging, but they managed to do so by limiting the number of words on ballot questions and using small fonts. These officials noted that they provided magnifying glasses at polling places to assist voters. Election officials in a jurisdiction we visited in Florida reported that they had to use oversized optical scan ballots to accommodate the number of constitutional amendments that had to be included on the ballot. Some ballot design options taken to help ensure clarity for voters could lead to problems later.
For example, election officials in a jurisdiction in Kansas reported that they used a two-sided ballot design requiring that the optical scan counting equipment read the ballot front and back, which presented a problem. Chapter 6 discusses challenges with counting ballots. The requirements of the voting equipment may also limit options election officials can take related to ballot design. For example, election officials in a jurisdiction in Illinois that used punch cards reported that lengthy ballots could have been a problem in the November 2004 election, but they decided to change the type of punch card ballot used. These officials told us that increasing the number of punch positions allowed for more space on the ballot and prevented challenges related to length of ballot. However, with punch card ballots, the greater the number of choices on a punch card, the greater the potential for voter error in punching the preferred choice, as voters must align the ballot carefully. Election officials in jurisdictions we visited that designed their ballots described steps they took to ensure that ballots were clear to voters, including using templates from the state or election management systems, proofreading both before and after printing, and public viewing or testing of ballots. For example, election officials in a jurisdiction in Colorado told us that prior to printing they send proofs of the ballot designs to candidates for their review. After printing, election officials said that staff members and representatives of the political parties test the ballot designs to ensure that there are no problems with how the ballots are processed through the counting equipment. Election officials in another jurisdiction in Colorado reported conducting a mock election with county employees to review the ballot and test a ballot from each package of printed ballots. 
Election officials in a jurisdiction in Ohio told us that they displayed the ballots for the general public to view and test.

The activities and plans that election officials undertake related to preparing ballots or voting equipment can have a direct impact on a voter’s Election Day experience. For example, reports about the November 2004 election highlighted shortages of ballots and voting machines at some polling places. While election officials may not be able to prepare for every contingency that could affect a voter’s wait time or experience at the polls, ensuring that there is a sufficient number of ballots or voting machines can minimize potential problems. On the basis of our survey of local jurisdictions, we estimate that few jurisdictions had problems with ballot or voting equipment shortages for the November 2004 general election. We estimate that 4 percent of all jurisdictions experienced problems with Election Day ballot shortages, and an estimated 4 percent of all jurisdictions did not have enough voting equipment on Election Day. However, there were statistically significant differences between large and small jurisdictions in having enough voting equipment. We estimate that 12 percent of large jurisdictions, 4 percent of medium jurisdictions, and 3 percent of small jurisdictions did not have enough voting equipment. Election officials in 23 of the 28 jurisdictions we visited reported that they encountered no challenges with preparing and delivering ballots, voting equipment, and supplies for the November 2004 general election. However, these activities could present logistical challenges for jurisdictions if there are unexpected delays, or for jurisdictions that are required to prepare ballots in multiple languages or prepare and deliver numerous voting machines to a large number of polling places.
To ensure that there is an adequate supply of machine-readable paper ballots on Election Day, election officials may conduct numerous activities, such as designing, reviewing, proofreading, printing, and testing ballots. Uncertainties about ballot content, such as whether or not certain candidates or issues will be included on the ballot, could affect these activities by delaying printing or leading to a last-minute rush to ensure that ballots are printed in time for the election. While election officials in most of the jurisdictions we visited did not report encountering these uncertainties, officials in 4 jurisdictions did. For example, election officials in a jurisdiction in Colorado reported that ballot printing was delayed by three statewide lawsuits regarding the content of the ballot. These officials reported that they prepared two ballot designs—one with a particular candidate’s name and one without—so that they would be prepared to send the ballots to an external printer regardless of the lawsuits’ outcome.

Some jurisdictions are required to provide ballots in languages other than English. Producing ballots in multiple languages can add to the complexity of preparing ballots because election officials must take steps to ensure proper translation and printing for each required language. On the basis of our local jurisdictions survey, we estimate that 6 percent of jurisdictions nationwide provided ballots in other languages. We estimate that significantly more large jurisdictions provided ballots in languages other than English than medium and small jurisdictions. We estimate that 26 percent of large jurisdictions (compared to 10 percent of medium jurisdictions and 3 percent of small jurisdictions) provided ballots in languages other than English.
Once voting equipment, ballots, and supplies have been prepared, ensuring that they are transported to polling places can be a logistical challenge for jurisdictions with thousands of voting machines and hundreds of polling places. Election officials in 18 of the 28 jurisdictions we visited told us that they contracted with moving companies to deliver voting equipment to polling places prior to Election Day. For example, election officials in a jurisdiction in Pennsylvania told us that they contract with a moving company that transports about 1,000 DREs to about 400 polling places in the week prior to Election Day. Election officials in a jurisdiction in Nevada told us that to ensure that voting machines were delivered to the correct polling places, they bar-coded each DRE and also assigned a bar code to each polling place. Upon delivery, contract movers used scanners to read the bar codes on each DRE and the bar code for the specific polling place. Prior to Election Day, these officials said that teams of election staff technicians then went to each polling place to set up the DREs and verify the scanned bar codes. After setting up the DREs, the rooms in which they were located were secured until Election Day. In contrast, in a jurisdiction we visited in New Hampshire, two election workers delivered 12 optical scan counters to the 12 polling places at 4:00 a.m. on Election Day. Figure 41 shows voting equipment—with accompanying delivery instructions for each DRE for 1 location—stored in 3 large jurisdictions we visited, awaiting preparation and delivery to polling places prior to Election Day.

Long voter wait times are a problem that election officials try to avoid. However, voters waiting in line at the polls was an issue identified in reports reviewing the November 2004 general election.
These reports identified a variety of factors, including confusion about a voter’s registration status, ballot or voting equipment shortages, or malfunctioning voting equipment that led to long voter wait times. We asked election officials during our site visits whether or not any polling places in their jurisdictions had long lines during the November 2004 general election and to describe factors they thought contributed to or helped to reduce long lines. Election officials in 17 of the 28 jurisdictions we visited reported having long lines at one or more polling places in their jurisdiction at some point on Election Day. However, there was variation in the reported voter wait times, times of day, and numbers of polling places with lines. For instance, election officials described voter wait times that ranged from 15 minutes to 1 ½ hours. Some election officials reported that the longer lines occurred in the morning; others told us that they kept polling places open past the official closing time to accommodate voters who were in line when the polls closed. Election officials in over half these 17 jurisdictions attributed long lines to higher than expected voter turnout, both in general and at peak voting times. Some of these jurisdictions were located in states where the presidential race was considered close (often referred to as “battleground states”). For example, the election official in a jurisdiction in Nevada attributed long lines to using a new voting system in addition to being a battleground state and encountering high voter turnout. This official estimated that there were between 30,000 and 35,000 more voters for the November 2004 general election than in previous elections. Election officials in 2 jurisdictions we visited in Ohio told us that higher than expected voter turnout in some precincts led to long lines. 
For example, election officials in 1 of these jurisdictions reported that at a polling place where two precincts were located there was higher than expected turnout because of a school board race. According to these officials, at this polling place there was a single line for voters from both precincts to check in at the registration table, and this line backed up. Election officials in another jurisdiction in Ohio told us that some precincts had long lines, and one precinct in particular had a waiting time of up to 1 hour. These officials said that one precinct remained open 30 to 45 minutes past closing time to accommodate the voters who were in line at 7:30 p.m.

Election officials in 11 of the 28 jurisdictions we visited told us that none of the polling places in their jurisdictions had long lines, and some described factors that helped to reduce or prevent lines. High voter turnout prior to Election Day—either during early voting or through absentee voting—was one factor they identified. For example, election officials in 2 jurisdictions we visited—a second jurisdiction in Nevada and 1 in New Mexico—told us that about 60 percent of those who cast ballots voted early or absentee. Election officials in a jurisdiction we visited in Washington (which reported that it did not require or allow early voting) told us that they attributed their lack of long lines on Election Day to the fact that two-thirds of voters in their jurisdiction vote by absentee ballot. Election officials in a jurisdiction in Florida reported that in planning for the November 2004 general election, they decided to encourage early and absentee voting as alternatives to Election Day voting in anticipation that there would be heavy turnout for the general election. Their voter education campaign, which included buying airtime on radio and in movie theaters, stressed early voting options.
In the end, about 40 percent of voters cast early ballots, which, according to election officials, made crowds easier to manage on Election Day.

On Election Day, poll workers may need to communicate with election officials at the central office for a variety of reasons—to inquire about a person’s eligibility to vote if his or her name does not appear in the poll book, to report voting equipment problems, or to report other issues that could occur at a polling place on Election Day. On the basis of our nationwide survey of local jurisdictions, for the November 2004 general election, we estimate that for 48 percent of all jurisdictions, the primary means of communication between polling places and the central office was telephones installed at polling places. Cell phones were also used as a primary means of communication in some jurisdictions. For example, on the basis of our local survey results, we estimate that for 25 percent of all jurisdictions, personal cell phones were the primary means of communication. Having inadequate communication lines on Election Day was a problem for election officials in the November 2000 election, as we noted in our October 2001 report on election processes. On the basis of our 2005 survey of local jurisdictions, communication problems between polling places and the election office on Election Day were a challenge for some jurisdictions in the November 2004 election, and these problems varied by the size category of jurisdiction, with more large jurisdictions encountering major problems than medium and small jurisdictions. We estimate that 36 percent of large jurisdictions, 63 percent of medium jurisdictions, and 89 percent of small jurisdictions encountered no major problems with the communication system used at polling places.
Small jurisdictions may not have experienced communication problems on Election Day for a variety of reasons, among them because a single polling place is located in the same building as the central election office, allowing the election officials to be physically present to resolve any questions or issues. Election officials in small jurisdictions provided comments on our nationwide survey of local jurisdictions about the primary communication system used in their jurisdictions on Election Day, including “personal contact—the clerk’s office is across the hall from the polling place,” “yelled across the room,” or “we are the central office and the polling place.” In addition, the election official in the small jurisdiction we visited in New Hampshire told us that the town clerk was on site at the one polling place.

Election Day communication problems encountered by some large and medium jurisdictions included phone systems overloaded by the volume of calls. On the basis of our local jurisdictions survey, we estimate that 49 percent (plus or minus 8 percent) of large jurisdictions, 14 percent of medium jurisdictions, and 1 percent of small jurisdictions experienced overloaded phone systems. The differences between all size categories are statistically significant. Election officials in many large jurisdictions we visited reported receiving numerous phone calls on Election Day, both from polling places and from the public. In addition to poll workers calling from polling places, election officials at the central office may receive phone calls from citizens asking about the location of their polling place or whether or not they are registered to vote. For example, a large jurisdiction we visited in Nevada reported receiving over 35,000 calls on Election Day 2004, about three times the number reportedly received in 2000.
Election officials reported that most calls received were from people wanting to know whether or not they were registered or where their polling place was, even though the election office provided polling place locations on its Web site, printed the locations in the newspaper, and mailed a sample ballot listing polling place locations to every registered voter in the jurisdiction. Election officials in 2 other large jurisdictions in Florida and Kansas reported that the volume of calls received was extremely high and that most inquiries concerned voter eligibility. In 1 of these 2 jurisdictions, election officials told us that many poll workers could not get through to the elections office to verify voter registration information, which may have increased the number of provisional ballots issued during the election. Election officials in many of the large jurisdictions we visited reported taking steps to manage, or even reduce, the volume of calls from both polling places and the public. These actions included setting up call centers or phone banks, installing additional phone lines in their offices, or hiring temporary workers. For example, election officials in a large jurisdiction in Pennsylvania reported that after experiencing problems being able to handle the volume of calls on Election Day 2000, they implemented a call center at their office with 30 phone lines for the November 2004 election. While these election officials reported receiving “a lot” of calls for the 2004 general election, they said they were able to successfully handle the volume because of the new phone lines. Election officials in a large jurisdiction in Illinois reported that a new feature on the jurisdiction’s Web site for the November 2004 election, which allowed voters to look up their polling place online, helped to reduce the number of phone calls from people asking about their polling locations.
After the November 2004 general election, some reports highlighted allegations of voter intimidation by third parties (e.g., poll watchers, observers, or electioneers) at polling places. To gain a better understanding of the extent to which this alleged behavior occurred and because the range of behaviors and circumstances in which they could have occurred was difficult to capture on a structured survey, we asked election officials during our site visits about challenges they faced conducting voting on Election Day—specifically, we asked them about any problems they encountered with voter intimidation. Election officials in 19 of the 28 jurisdictions we visited did not report experiencing problems with third parties on Election Day. However, election officials in 9 jurisdictions we visited in battleground states reported challenges with disruptive third-party activities. In some instances these third parties simply increased the number of people that poll workers were to manage at a polling location; in others, election officials told us third-party observers provided misinformation to voters or even used intimidation tactics. Election officials in a jurisdiction in Nevada told us that poll watchers were the biggest challenge on Election Day. Poll watchers, according to election officials, had been bused in from another state to observe the election because Nevada was a battleground state, which led to having 14 poll watchers at some locations. These officials noted that while most poll watchers simply observed, the poll watchers did increase the number of people at polling places, creating more for poll workers to manage. Election officials in other jurisdictions reported that third-party behavior negatively affected poll workers and voters.
For example, election officials in a jurisdiction in Pennsylvania reported that one of the biggest challenges on Election Day was managing poll workers’ stress levels in an especially contentious election where poll watchers and observers yelled at them throughout the day. Election officials in another jurisdiction in Nevada told us that outside observers’ behavior was disruptive and noted that the observers were contentious, violated electioneering limits at the polling place, and questioned every action that poll workers took. Election officials in a jurisdiction in Colorado reported that at one polling location on a college campus, poll watchers and representatives of a national organization were encouraging students to go to the polling place at one time to create a disruption. Students were also being encouraged to get back in line after they had voted, which caused long lines for other voters. Election officials said that they ended up calling security officers to help manage the situation. In other instances, election officials reported that observers provided misinformation to voters or even used intimidation tactics. Election officials in a jurisdiction in Florida reported that third-party organizations caused confusion at polling places by misinforming voters and staging demonstrations. In a jurisdiction we visited in Colorado, election officials told us that poll watchers caused problems at some polling places by providing misinformation to voters, such as informing them that their provisional ballots would not be counted. In a jurisdiction in New Mexico, election officials said that one polling place had to remain open until 10:30 p.m. because voters were encouraged by local political advocates to go to that polling place to vote even though the polling location for their precinct had been changed. As a result, according to these officials, hundreds of provisional ballots were cast at the polling place, which made for long waiting times.
Election officials in another jurisdiction in New Mexico reported that outside candidate advocates and observers from political parties tried intimidation tactics and treated people at the polls “terribly.” For example, these election officials told us that some advocates were observed taking photographs of the license plates of Hispanic voters as they arrived at polling places. We did not ask a specific question about third-party activities at polling places on our survey of local jurisdictions because of the complexities in capturing the range of alleged behaviors on a structured survey. However, we asked whether local election jurisdictions maintained a written record of issues that occurred on Election Day and, if so, what issue or problem occurred most frequently on Election Day. Several election officials from jurisdictions in battleground states who provided comments on our nationwide survey identified electioneering or poll watcher activity as the most frequent problem. For example, election officials from Florida, Colorado, and Iowa wrote “voters complained about being harassed by demonstrators while waiting in line to vote,” “poll watchers acting aggressively,” and “poll watchers (who were attorneys, mostly) were interfering with the process, intimidating precinct officials, and giving erroneous advice to voters who showed up at the wrong polling place.”

Administering an election in any jurisdiction is a complicated endeavor that involves effectively coordinating the people, processes, and technologies associated with numerous activities. Many of the challenges that election officials reported encountering in preparing for and conducting the November 2004 election were not new. Recruiting and training an adequate supply of poll workers, finding accessible polling places, and managing communications on Election Day were challenges we identified in our October 2001 report on the November 2000 election.
Data from our survey of local election jurisdictions and our site visits to 28 locations indicate that more large, and to some extent medium, jurisdictions encountered challenges in preparing for and conducting the November 2004 general election than did small jurisdictions. This is not surprising. Larger, diverse jurisdictions may face challenges smaller jurisdictions do not, such as recruiting poll workers with non-English language skills. Larger jurisdictions are also likely to need to rely to a greater degree on technology to manage their elections administration process, and this brings its own set of challenges. The complexity of administering an election and the potential for challenges increase with the number of people and places involved, the ethnic diversity and language skills of the voting population, and the scope of activities and processes that must be conducted. Many of the election officials in large jurisdictions we visited told us that being well prepared, having established policies and procedures in place, and having qualified election staff were factors that contributed to a smooth Election Day. One problem that election officials reported encountering on Election Day in some jurisdictions was the actions of poll watchers and other third parties that election officials considered disruptive. This presents another issue that election officials may need to include in their Election Day preparations and training.

A goal of the election process is to ensure that every eligible voter is able to cast a vote and have that vote counted. In the November 2000 general election, reports of some voters showing up at the polls and not being able to vote raised concerns about eligible voters’ names not appearing on the voter registration list at the polling place or poll workers not otherwise being able to determine voters’ eligibility.
While many jurisdictions reported in 2001 having at least one procedure in place to help resolve eligibility questions for voters whose names did not appear on a polling place registration list, only 20 states plus the District of Columbia reported using some form of provisional ballot for the 2000 general election. One of the major changes since the 2000 general election has been the implementation of a HAVA provision requiring, in general, that states permit individuals, under certain circumstances, to cast provisional ballots in elections for federal office. In general, under HAVA, voters who claim to be eligible to vote and registered in the jurisdiction in which they desire to vote, but whose names do not appear on the polling place registration list, are to be allowed to cast provisional ballots in a federal election. These ballots are called provisional because they are counted only if an election official determines that the voter is eligible under state law to vote. In terms of ballot access, provisional ballots benefit voters by allowing an individual to cast a vote, in general, when there is some question as to the individual’s eligibility, such as when the individual’s name is not on the registration list or the individual’s eligibility has been questioned by an election official. In terms of ballot integrity, provisional ballots benefit election officials by allowing them to determine voter eligibility prior to counting such ballots (i.e., verifying provisional ballots). In this chapter, we describe (1) events that preceded HAVA’s provisional voting requirements, (2) how states and local jurisdictions implemented the requirement to provide provisional ballots, (3) how states and local election jurisdictions qualified provisional ballots for counting, and (4) the difficulties of estimating and comparing the number of provisional ballots that were cast and counted.
Concerns were raised with respect to the November 2000 election that some eligible voters were not allowed to vote because of questions regarding the voters’ eligibility. HAVA required that by January 1, 2004, most states permit the casting of provisional ballots in elections for federal office by voters who affirm in writing that they believe they are eligible to vote and registered in that jurisdiction, but are not found on the voter registration list. Such states are also required under HAVA to provide provisional ballots in federal elections under other circumstances, such as for certain voters who registered by mail and do not have required identification, and where an election official asserts that an individual is ineligible to vote. Provisional votes cast under HAVA’s provisional voting requirements are to be counted in accordance with state law if election officials determine that the voter is eligible to vote under state law. Under HAVA, 6 states are exempt from the act’s provisional voting requirements because they either permitted voters to register on Election Day or did not require voter registration.

On the basis of reports from state election officials and from officials in local election jurisdictions we surveyed and visited, states and local jurisdictions varied in a number of ways in how they implemented HAVA’s provisional voting requirements in the November 2004 election. Among other things, we found variation in the additional circumstances, apart from those specified in HAVA, in which a provisional ballot would be offered, such as when voters claimed they did not receive an absentee ballot; in the design of the ballots themselves and how they were tracked; and in the voting method used for casting provisional ballots, such as optical scan ballots or DREs.
With respect to the counting of provisional votes, states reported various differences in their counting processes such as the prescribed location from which a voter must cast a provisional ballot in order for it to be counted. Also, with respect to the counting of provisional ballots, according to our estimates from our survey of local election jurisdictions nationwide, a voter not meeting residency requirements was the most frequently cited problem, followed by insufficient evidence that the voter was registered. In jurisdictions we visited, election officials also varied in how they handled a lack of information from the voter that was needed to verify a provisional ballot. National figures on provisional ballots for the November 2004 election are difficult to estimate because of a lack of data on provisional ballots cast and counted, and variation in how states implemented provisional voting. Nevertheless, we estimate that between 1.1 million and 1.7 million provisional ballots were cast in the November 2004 election. The variation in how provisional voting was implemented makes it difficult to compare the use and counting of provisional ballots among jurisdictions. A number of factors can affect the number of provisional ballots cast and counted. For example, one such factor could be an instance in which the polling location hours were extended and votes cast during the extended hours were cast provisionally. Following the November 2000 election, in our October 2001 comprehensive report on election processes nationwide, we noted that the biggest problems on Election Day involved resolving questions about voter eligibility. Typically, a voter’s eligibility is established before a voter receives a ballot, most often by a poll worker examining a poll book or registration list for the person’s name. If the name appears on the list and other identification requirements are met, the voter is given a regular ballot and is allowed to vote. 
We also noted in our report that in the November 2000 election, a large number of voters with eligibility issues created frustration for voters, long lines, and problems communicating between the polls and election headquarters as workers tried to resolve eligibility issues. For the 2000 general election, when the voter’s name did not appear on the registration list, we reported in October 2001 that jurisdictions had different procedures for dealing with the question of the voter’s eligibility. More specifically, we reported that 20 states plus the District of Columbia used some form of provisional ballot when a voter’s name was not on the voter list, with verification of registration conducted after the election. As we reported, provisional balloting measures went by different names among the states, including provisional ballot, challenged ballot, special ballot, emergency paper ballot, and escrow ballot. Further, in 5 states in the 2000 general election, we reported that voters could complete an affidavit when voting with no further verification of their registration information being required by state law prior to the ballot being counted. The U.S. Census Bureau estimated that of the 19 million registered voters who did not vote in 2000, 6.9 percent did not vote because of uncertainty regarding their registration. In our October 2001 report, we noted that headlines and reports questioned the effectiveness of voter registration by highlighting accounts of individuals who thought they were registered being turned away from polling places on Election Day and jurisdictions incorrectly removing the names of eligible voters from voter registration lists. Our report also found that almost half of the jurisdictions nationwide in 2000 reported having problems with registration applications submitted at motor vehicle agency offices that election officials believed could result in individuals showing up at the polls to vote and discovering that they were not registered. 
Numerous recommendations were made that all states be required to provide provisional voting. For example, the Federal Election Commission in June 2001 recommended that all states devise procedures for voters to cast provisional ballots at the polls under certain conditions, as did the National Commission on Federal Election Reform in August 2001 and the National Task Force on Election Reform in July 2001, among others. Under HAVA, in an election for federal office, most states are to permit individuals to cast a provisional ballot under certain circumstances. The statutory deadline for implementing HAVA’s provisional voting requirement was January 1, 2004. For federal elections, states are, in general, required to allow the casting of a provisional ballot by an individual who asserts that he or she is registered in the jurisdiction in which he or she desires to vote and is eligible to vote, but whose name does not appear on the official list of eligible voters for the polling place; whom an election official asserts to be ineligible to vote; who registered to vote by mail but does not have (and has not previously provided) the required registration identification when trying to vote in person or by mail; or who casts a vote pursuant to a court order or other type of order extending poll closing times. HAVA requires that an individual be permitted to cast a provisional ballot upon the execution of a written affirmation before an election official at the polling place. The written affirmation must state that the individual is registered to vote in that jurisdiction and eligible to vote in that election. HAVA specifies that either the provisional ballot or the written affirmation information be transmitted to an appropriate election official for a determination as to whether the individual is eligible to vote under state law.
Under HAVA, if an individual is determined to be eligible, the provisional ballot is to be counted as a vote in accordance with state law. Election officials, under HAVA, are to give the individual written information on how to ascertain whether the vote was counted and, if the vote was not counted, the reason why. HAVA directs that state or local election officials establish a free access system, such as a toll-free number, for provisional voters to ascertain such information. While HAVA established conditions under which an individual must be allowed to cast a provisional ballot, states are not prohibited from offering provisional ballots for other reasons, or from using ballots with other names (e.g., a challenged ballot) to serve provisional vote purposes. HAVA explicitly provides that the specific choices on the methods of complying with certain act requirements, including the provisional voting requirements, are left to the discretion of the state. In addition, HAVA provides that a state may establish election technology and administration requirements that are stricter than HAVA requirements, so long as they are not inconsistent with other specified federal requirements.

On the basis of reports from state election officials and from officials in local election jurisdictions we surveyed and visited, states and local jurisdictions provided for provisional voting in a variety of ways for the November 2004 election. These differences contributed to the variation in the number of provisional votes cast among jurisdictions. The results of our state survey of election officials show that states reported using new or existing legislative or executive actions (which included executive orders, directives, regulations, or policies) to implement HAVA’s provisional voting requirements.
Specifically, our state survey showed the following:

27 states reported enacting new legislation or taking executive action to meet HAVA’s provisional voting requirements.

11 states and the District of Columbia reported using the state’s existing legislative or executive action to meet the requirements.

7 states said HAVA provisional requirements were met by a combination of new legislation or executive action and existing actions.

5 states (Idaho, Minnesota, New Hampshire, North Dakota, and Wisconsin), in response to the question of how their state established the provisional voting requirements set forth in HAVA, answered that they were exempt from such requirements; these 5 states are exempt from HAVA provisional requirements, in general, because they have same-day voter registration or no voter registration.

Connecticut officials responded, for example, that the state enacted legislation after HAVA to establish HAVA provisional voting requirements. Connecticut state laws were enacted in June 2003 related to the application for a provisional ballot, casting of the ballot, and determination of eligibility for counting of provisional ballots, among other things. In contrast, Alaska election officials reported that existing legislation met HAVA’s provisional voting requirements. According to Alaska’s 2005 updated HAVA plan, the state had an existing provisional voting process known as Questioned Voting. This process, established in the early 1980s, required only minimal changes to meet HAVA provisional voting requirements. Alaska requires use of a questioned ballot for any voter who votes at a polling location where his or her name does not appear on the precinct register, or if the voter does not have identification and is not personally known by the election official. In our state survey, New Jersey reported meeting HAVA provisional voting requirements with a combination of existing and new legislation.
In one New Jersey jurisdiction we visited, election officials stated that state provisional voting procedures were first established in 1999. According to these officials, the state amended its provisional ballot election law after HAVA to allow use for voting by court (or other) order after the polls have closed, and by first-time mail registrants who do not provide identification. Election officials in 25 of the 26 jurisdictions we visited that provided for provisional voting told us that they used some form of paper ballot for Election Day provisional voting for the November 2004 election. For example, election officials in the Illinois jurisdictions we visited said that the regular punch card ballot was used by provisional voters and then placed in provisional ballot envelopes. In the New Jersey jurisdictions we visited, officials said that provisional votes were cast on paper ballots that could be counted with optical scan machines (if voters were determined to be eligible). Election officials in Connecticut jurisdictions said that they used hand-counted paper ballots for provisional voters. According to election officials in 1 Ohio jurisdiction and 1 Nevada jurisdiction, DREs were used for Election Day provisional voters. According to election officials or documents they provided in the 2 jurisdictions we visited that used DREs for provisional voting on Election Day, the processes used for casting provisional votes were as follows:

In the Ohio jurisdiction, election officials said voters first completed an affidavit statement with a preprinted code number, and signed a special section of the poll book. The poll worker then inserted a unit into the DRE that contained the ballot for the precinct. The poll worker then pressed the provisional ballot selection on the DRE and entered the code number for the individual voter associated with the voter’s affidavit statement. The individual then voted.
In one Nevada jurisdiction, DREs were used for Election Day provisional voting, but optical scan ballots were used for provisional voters participating in early voting. According to the poll worker’s manual provided by election officials, Election Day provisional voters completed an affirmation with identifying information and the reason they were casting a provisional ballot. As described to us by election officials at this jurisdiction, the poll worker then added precinct information, and both signed the affirmation. The poll worker then activated the DRE with a card. To indicate that the ballot was provisional, the poll worker pressed “0” and the machine provided a provisional voter identification number that the poll worker copied onto the voter affirmation and provisional voter receipt. The voter then voted. According to election officials in the jurisdictions we visited, the design of provisional ballots varied for the November 2004 election. The provisional ballot differences included variation in terms of the races included, ballot and envelope color, the envelopes they were placed in, and the information included on the provisional ballot envelopes. For example, in the Nevada jurisdictions, the provisional ballot only included races for federal offices, while in the Kansas jurisdictions, officials said that the provisional ballot was the same as a regular ballot. In 1 Georgia jurisdiction, election officials stated that they were using an absentee ballot for provisional voters but were inserting it into a salmon-colored envelope, whereas in an Illinois jurisdiction we visited, “Provisional” was printed in pink letters across the punch card ballot used in that jurisdiction so that these ballots were distinguishable from other ballots. The provisional ballot envelopes also varied in terms of what information was provided in the jurisdictions we visited, according to example envelopes provided to us (or described) by election officials. 
The outside of the provisional ballot envelopes in most of the jurisdictions we visited served as the voter’s written affirmation that is required by HAVA. For example, in a jurisdiction in Illinois, the ballot envelope included instructions to voters on how to cast a provisional ballot; in a Florida jurisdiction (as well as in Illinois), the provisional envelope included information on the reason why the provisional ballot was cast. In New Mexico and Colorado jurisdictions we visited, the envelope included a tear-off tab with information on how voters could find out whether their vote counted, and if not, why it was not counted. In addition, election officials in some jurisdictions we visited described provisional ballots being placed in envelopes, sometimes with a second security envelope covering the ballot inside. Figure 42 shows an example of a provisional ballot envelope.

Officials in jurisdictions we visited described a variety of methods used for tracking provisional ballots in the November 2004 election. Methods included having individual ballots numbered, maintaining an inventory or log, accounting for provisional ballots at the beginning and end of Election Day, and using specially colored ballots or envelopes for holding provisional ballots. The following are examples of how election officials in four jurisdictions we visited said they tracked provisional ballots for the November 2004 election:

In a Pennsylvania jurisdiction, election officials tracked provisional ballots cast at the polling place on a form provided by the election officials. Provisional ballots were marked with a sticker indicating that they were provisional. The sticker also had an identification number for tracking the ballot, and the voter was provided a receipt with the identification number to use when calling for information on the status of his or her ballot. All provisional ballots were placed inside of green envelopes.
In a New Mexico jurisdiction, an election official said that ballots were numbered sequentially, so that the poll workers could track the numbers. The precinct judges certified the numbers of the ballots they received, used, delivered, and destroyed.

In a New Jersey jurisdiction, the municipal clerk issued a specific number of provisional ballots (25) to each precinct, with a “Custody Receipt” form that identified who was in possession of the orange bag with the provisional ballots and an accounting of all ballots originally issued. A ballot that had been voted was enclosed in a gray envelope and then put back in the orange bag.

In a Kansas jurisdiction, separate poll books, separate envelopes for provisional ballots, and separate pouches for envelopes containing provisional ballots (all blue in color) facilitated tracking the ballots as separate items from regular Election Day ballots. No tracking of the actual ballot occurred (before it was voted) because the same optical scan paper ballot was used for regular Election Day voters.

Apart from permitting voters to cast provisional ballots under the circumstances specified in HAVA, some jurisdictions we surveyed or spoke with had additional reasons for providing provisional ballots to voters in the November 2004 election and other types of ballots that could be used for different circumstances. In addition, election officials in jurisdictions we visited told us about different approaches for offering provisional ballots. In the local election jurisdictions we visited, election officials described various circumstances, in addition to those required by HAVA, in which a provisional ballot was provided to a prospective voter in the November 2004 election. The additional circumstances under which provisional ballots were provided are established by state officials.
For example:

In one Colorado jurisdiction we visited, election officials stated that provisional ballots were available to voters who did not have the identification required of all voters in the state and also available if a person was listed as a felon in the poll book. Further, election officials told us that the Colorado Secretary of State issued guidance just prior to the 2004 general election that allowed individuals—claiming to have registered at a voter registration drive but for whom the jurisdiction had no record—to vote provisionally.

Election officials in jurisdictions we visited in Colorado, Florida, Kansas, Ohio, and Washington said that voters claiming they had not received their absentee ballots were provided with provisional ballots. In other jurisdictions, such as the 2 we visited in Connecticut, voters were allowed to vote regularly if their absentee ballot did not arrive.

Kansas election officials reported that they allowed voters to cast provisional ballots if the voter did not trust the voting machines and wanted a paper ballot, or if the voter had a different last name than the listed one because of marriage or divorce.

The extent to which voters were provided with provisional ballots varied depending on whether states required identification of all voters or only certain voters, according to our state survey. Some states reported that they require all voters to provide identification; some reported that they require only provisional voters to produce identification, while others reported that they do not require identification from voters other than first-time voters who registered by mail, as required by HAVA. Chapter 4 on conducting elections discusses state requirements for voter identification for all voters.
According to our state survey, 6 states—Arizona, Massachusetts, Michigan, New Mexico, Utah, and Wisconsin—reported requiring identification from only provisional voters in the November 2004 election, but Michigan and Utah reported allowing an alternative to identification for provisional voters who did not have required identification. In Michigan, for example, a voter receiving a provisional ballot who was unable to meet the identification requirement was permitted, according to election officials responding to our state survey, to fax, mail, or hand-deliver an acceptable form of photo identification to the clerk anytime during the 6 days following the election. Some jurisdictions we visited reported that Election Day voting options other than provisional ballots were available. For example, election officials in jurisdictions we visited in Ohio said that provisional ballots were the only special ballots available for that election. In contrast, in a New Mexico jurisdiction we visited, election officials said the state offered an in-lieu-of ballot for voters who requested an absentee ballot, and claimed it did not arrive. These election officials said the in-lieu-of ballot was the same as a provisional ballot, but it was placed in a different sleeve for later determination of whether an absentee ballot had been cast or not. At a Connecticut jurisdiction we visited, election officials described the state’s presidential ballot, available at the clerk’s office on Election Day for the November 2004 election. A presidential ballot, according to election officials and documents they provided, allowed voting for president and vice-president by former Connecticut residents who had moved to another state within 30 days of the election and for that reason could not vote in their new state of residence. 
Election officials in some jurisdictions we visited, such as 1 jurisdiction in Florida and 2 jurisdictions in New Jersey, said their procedures allowed challenged voters to sign a statement, such as an affidavit declaring their eligibility, and to vote on a regular ballot that would be counted with other ballots on Election Day. According to poll worker guidance provided by election officials in the Florida jurisdiction, a written challenge must be submitted under oath and given to the voter; then the voter has the right to submit an oath affirming his or her eligibility. The polling place clerk and inspectors must resolve the challenge by majority vote, providing a regular ballot if the decision is in the prospective voter’s favor. The guidance states that a challenged voter who refuses to sign the oath must be offered a provisional ballot. In both jurisdictions we visited in New Jersey, voters who were challenged were not issued a provisional ballot, according to documents provided by election officials. As stated in the poll worker manual for one of the jurisdictions for the 2004 general election, a voter who was challenged completed a challenged voter affidavit, as shown in figure 43. The manual stated that the location’s four poll workers take a vote to decide whether the voter would be allowed to vote. On the basis of the decision, the challenged voter cast a regular ballot or was not allowed to vote, according to the manual (in case of a tie, the voter was allowed to vote). In our survey of local election jurisdictions nationwide, we asked for information on the use of provisional ballots, challenged ballots, or other types of ballots under various scenarios for the November 2004 election. Table 18 shows the extent to which we estimate that local jurisdictions provided provisional ballots as compared to providing other types of ballots. 
Apart from permitting voters to cast provisional ballots under the circumstances specified in HAVA, election officials in jurisdictions we visited described differing approaches under which provisional ballots were utilized for the November 2004 election. Election officials in most of the 28 jurisdictions we visited said that in the November 2004 election they would not refuse an individual a provisional ballot. In a Colorado jurisdiction, election officials said that election judges were instructed to direct all voters meeting the criteria for voting provisionally (e.g., claiming to be registered and eligible, but with some eligibility question) to the provisional voting table. In 1 Nevada jurisdiction, the election official said that anyone could receive a provisional ballot. He said that they had Las Vegas tourists who wanted to vote a provisional ballot, even though they were informed that it would not be counted. Election officials in 1 Washington jurisdiction said voters knew that they could cast a ballot regardless of circumstances, and election officials in the other Washington jurisdiction said that provisional ballots served as a conflict avoidance tool at the polls. Election officials in both New Mexico jurisdictions said that if a voter was not on the registration list, he or she was immediately given a provisional ballot. According to the New Mexico election officials, precinct officials were not to direct a voter to the correct precinct; instead, under the provisional voting rule, they were to offer a provisional ballot to the voter. Election officials in some other jurisdictions we visited told us that poll workers may have taken certain steps before providing a voter with a provisional ballot. In 1 Illinois jurisdiction, an election official said that if a potential voter was not listed, the poll workers first tried to determine if the voter was registered in another jurisdiction. 
If that was the case, the poll workers then directed the voter to that jurisdiction, but they did not refuse to provide a provisional ballot if a voter requested one. In 1 Ohio jurisdiction, election officials told us that if a voter was registered in Ohio, everything was done to get the voter to the correct precinct. In a New Jersey jurisdiction we visited, election officials explained that poll workers took several steps when the voter’s name was not listed in the poll book. Poll workers were instructed, according to the poll worker’s manual, to check the poll book for misspellings or for the name being out of alphabetical sequence, and to check the county street guide to see if the voter was in the wrong location. Election officials in this jurisdiction also told us that voters who were in the wrong location were directed to the correct location. They added that voters who did not wish to vote provisionally were told to go before a superior court judge to plead their cases.

In 5 jurisdictions we visited, election officials said there were instances where election officials would refuse to provide a provisional ballot on Election Day. In 1 Ohio jurisdiction, election officials said that a provisional ballot was provided if the potential voter appeared at the polling place. However, if the person came to the election office on Election Day and no record of voter registration was found by the Registrar, then the voter was not allowed to vote provisionally. A potential voter stating that he or she was not registered or not a resident was a reason not to offer the individual a provisional ballot, according to election officials in 1 jurisdiction in Nevada, 1 in New Jersey, and both jurisdictions in North Carolina. Officials in 1 Georgia jurisdiction we visited said that an individual might not be offered a provisional ballot if he or she was on the voter registration list and therefore eligible to vote a regular ballot.
Whether a provisional ballot was provided or not might have been based, in part, on the size of the jurisdiction and the familiarity of the poll workers with the voters. Several election officials in small local jurisdictions included in our nationwide survey made this point in written comments. For example, comments included the following:

“This is a small township. We don’t have the problems big cities have. People know who lives in the township. They know their neighbors.”

“Most voters are personally known, including their addresses.”

“We were told that the state voter list was the bible for the day. But we had one lady who should have been provisional but we all knew where she lived so we let her vote. It was the choir lady’s niece. Her signature was on file.”

In larger jurisdictions, poll workers might be less likely to know the voters in the precinct and may have made greater use of provisional ballots than in smaller jurisdictions. Some jurisdictions we visited reported that knowing how many provisional ballots to have available for the November 2004 election was a challenge. However, on the basis of our survey of local jurisdictions, we estimate that for the November 2004 election, only 1 percent of jurisdictions had a shortage of provisional ballots. The difficulty with anticipating the need for provisional ballots, according to an Illinois jurisdiction election official, was that officials had no historical experience to rely upon in deciding how many to make available at each site. In this jurisdiction, provisional ballots were used for the first time in the November 2004 election, according to the election official.
Similarly, in a Pennsylvania jurisdiction we visited, election officials stated that they had no basis to plan for the number needed, and that they had to rush to produce additional provisional ballots at the last minute (e.g., by placing a provisional ballot sticker over an absentee ballot) because some precincts needed more than were initially allocated. Election officials in one Nevada jurisdiction we visited said some polling places were overstocked while others were understocked, requiring them to shuttle the ballots between polling places. In a Colorado jurisdiction we visited, election officials said that last-minute changes by state officials created a need for more provisional ballots because this change allowed individuals who registered during a voter registration drive but who were not on the voter list to vote provisionally. On the basis of our local survey, poll workers failing to follow procedures for conducting provisional voting surfaced as an issue in some jurisdictions in the November 2004 election. We estimate that 12 percent of jurisdictions nationwide encountered poll worker performance problems related to their failure to follow procedures with provisional voting. The newness of the provisional procedures or last-minute changes in the guidance were challenges that confused poll workers, according to election officials in jurisdictions we visited. Specifically, in a Georgia jurisdiction, election officials told us there was a question regarding whether several college students were eligible to vote provisionally, and state election officials were called for clarification (the students were allowed to vote provisionally). In a Connecticut jurisdiction, election officials said poll workers were confused about the process, issuing provisional ballots in some cases before checking with the Registrar to try to locate the prospective voters in the statewide database.
In both Nevada jurisdictions, election officials we visited identified poll worker training needs; for example, in 1 of the Nevada jurisdictions, election officials said provisional ballot materials were not adequately tracked and returned. In an Ohio jurisdiction, election officials identified poll worker handling of provisional ballots as an area for improvement based on finding valid provisional ballots returned in envelopes for soiled and defaced ballots. In addition, they said about half of the provisional voters did not sign the poll book, as they were supposed to have done under this jurisdiction’s requirements. Furthermore, voters were to place their provisional ballots in a colored provisional sleeve for determination of eligibility before the vote was submitted, but the election official estimated that about 10 percent of the provisional ballots were placed directly in the ballot box instead. Some election officials in jurisdictions we visited described actions they took to implement provisional voting that worked well for the November 2004 election. Several cited training that prepared poll workers for provisional voting, staff dedicated to handling provisional votes, or poll workers with prior provisional voting experience. For example, election officials in 1 Colorado jurisdiction said that they had election judges whose sole responsibility was conducting provisional voting. According to these election officials, the election judges (i.e., poll workers) were well trained and sat at a separate table to handle provisional voting. One jurisdiction we visited in Illinois had specific instructions on the voter affidavit for election workers to follow. Figure 44 provides an example of the affidavit.
HAVA specifies that voters casting ballots under HAVA’s provisional balloting requirements must, in general, execute a written affirmation stating that they are registered in the jurisdiction in which they desire to vote and that they are eligible to vote in that election. Polling place officials, under HAVA, are to transmit either the ballot or the written affirmation information to an appropriate election official for verification to ascertain if the individual is eligible to vote under state law. In the November 2004 election, state requirements regarding the location from which voters had to cast their provisional ballot in order for it to be counted (e.g., in the specific precinct in which the voter is registered or anywhere within the county—city, parish, township—in which the voter was registered) were one key difference among states. States also varied in how missing voter information was handled and how voters were informed whether their vote was counted or not. On the basis of our national survey of local jurisdictions, the most frequent problem encountered by local jurisdictions in counting provisional ballots was that voters did not meet residency eligibility requirements for the precinct or jurisdiction. HAVA requires states to provide provisional balloting where, among other things, individuals assert that they are registered in the jurisdiction in which they desire to vote. The term “jurisdiction” in HAVA’s provisional voting requirements is not specifically defined. As a result, states establish, under their own election codes, the applicable jurisdiction in which voters must cast their provisional ballots in order for such ballots to be eligible to be counted. For example, in some states this location is the specific precinct in which the voter is registered, and in other states, the voter may be anywhere within the county (city, parish, township) in which the voter resides and is registered.
Our survey of state election officials asked where a provisional voter needed to cast a vote in order for it to be counted for the November 2004 election. Figure 45 shows where states reported that provisional voters needed to cast their votes in order for such votes to be eligible to be counted. Variation in state requirements as to the location where a provisional ballot must have been cast in order to be counted was also evident in the jurisdictions we visited. For example, voters in Kansas could, according to election officials, vote provisionally in precincts other than where they were registered (but within the same county) and if otherwise eligible to vote have their vote partially counted (e.g., for county, state, or federal offices or issues). Nevada election officials said they count provisional votes cast anywhere in the county where the voter was registered and otherwise eligible, but all provisional ballots only included federal races. Election officials in both Washington jurisdictions we visited reported that a voter in the November 2004 election was allowed to cast a provisional ballot anywhere in the state of Washington, and the ballot would be forwarded to the correct county (if the ballot was cast in a county other than the one in which the voter was registered) and counted if the voter was eligible. Election officials in 1 Washington jurisdiction we visited said that county election workers mailed the provisional ballots for non-Washington residents to the Secretary of State of the state where the voter claimed to be registered, but these officials did not know what became of the ballots. Election officials in several states have faced court challenges to their state requirements regarding the location where a provisional ballot must have been cast in order to be counted.
The litigation has primarily arisen in states requiring that a provisional voter had to cast a vote in the specific precinct in which he or she was registered, in order for that vote to be counted. In this context, the courts have generally held that HAVA does not require a state to count provisional votes cast in the wrong precinct as legal votes when they would otherwise be considered invalid under state law. In our state survey, we also asked state election officials if they anticipated that their state would change, by November 2006, where a provisional voter must cast a vote for it to be counted. Forty states reported that they did not anticipate such rules would change. Election officials in 4 states reported they anticipated a change by November 2006. Three out of the 4 states (Arkansas, Nevada, and New Jersey) reporting that they anticipated a change for 2006 had reported for the November 2004 general election that a provisional voter could have cast a vote anywhere within the county (city, parish, township) in which the voter resides and have such vote counted. The fourth state, Colorado, had reported for the November 2004 general election that provisional voters had to cast their votes in the specific precincts in which they were registered in order for their votes to be counted. Georgia, Maryland, and the District of Columbia said they did not know whether their rules specifying where a provisional voter must cast a ballot for it to be counted would change, and the remaining 4 states responded that they would not have provisional voting. These 4 states are not subject to provisional voting requirements. In our survey of local election jurisdictions nationwide, we asked about problems that local jurisdictions encountered during the November 2004 election in counting provisional ballots.
On the basis of our survey, in jurisdictions where provisional ballots were cast we estimate that the most frequent problems concerned voters not meeting residency requirements or lacking evidence that the voter was registered. Specifically, we estimate
- 66 percent (plus or minus 7 percent) of jurisdictions had a problem with voters not meeting residency eligibility requirements for the precinct or jurisdiction,
- 61 percent (plus or minus 7 percent) received insufficient evidence that individuals had submitted voter registration applications at motor vehicle agency offices,
- 61 percent (plus or minus 7 percent) had instances of insufficient evidence that individuals had registered or tried to register directly with the election office,
- 34 percent (plus or minus 7 percent) had registration applications received by the registrar very close to or after the registration deadline,
- 32 percent (plus or minus 7 percent) had voters not providing identification as specified by HAVA for registrants who registered by mail and were voting for the first time in the precinct or jurisdiction,
- 29 percent (plus or minus 6 percent) received insufficient evidence that individuals had submitted voter registration applications at National Voter Registration Act agencies other than motor vehicle agency offices,
- 28 percent (plus or minus 6 percent) had provisional ballot envelopes or ballots that were incomplete or illegible, and
- 20 percent of jurisdictions had problems with voters who did not sign a sworn statement that they met the qualifications to be eligible to vote in the precinct or jurisdiction.
Written comments made by local election officials in our nationwide survey identified some additional problems encountered with counting provisional ballots.
Examples included uncertainty whether a convicted felon’s voting rights, lost as a result of such conviction, had been restored; a voter’s registration records that had been sealed by a court; and the state changing the rules several times right up to Election Day, creating confusion, according to election officials. In addition to variation in where states required provisional ballots to be cast in order to be counted for the November 2004 election, local jurisdictions we visited reported a variation in how to handle a lack of identification or a missing signature. For example, election officials in one New Mexico jurisdiction we visited said that first-time voters who did not provide the required identification had until the close of the polls on Election Day to bring their identification to the county clerk’s office. In contrast, according to election officials in a New Jersey and a Georgia jurisdiction, provisional voters were allowed up to 2 days to produce identification for their vote to be counted, and in a Nevada jurisdiction, voters had until 5:00 p.m. the Friday after the election. With respect to mail registrants who were permitted to cast provisional ballots because they did not provide required identification when voting for the first time, election officials in 1 Illinois jurisdiction we visited reported a lack of clarity as to what subsequent identification-related verification was needed prior to counting provisional ballots. According to the Illinois election officials, the state’s guidance resulted in a situation where one Illinois jurisdiction required the voter to provide to the county clerk’s office identification with an address that matched the address in the voter registration list within 48 hours after the election in order to be counted, while another jurisdiction did not require that the two addresses match. The Illinois officials stated that this issue has been clarified.
Jurisdictions we visited also varied in how they handled a missing voter signature. For example, in 1 Colorado jurisdiction, election officials said that they mailed letters to voters who failed to sign their provisional ballot envelopes and allowed the voters up to 10 days after the election to come in and sign so that their votes would be counted. This was not a procedure described in all jurisdictions we visited. In 1 jurisdiction in New Mexico, ballots would not be counted for voters who did not sign the provisional ballot affidavit or roster. In 1 Georgia jurisdiction we visited, voters had to complete a new voter registration form or their provisional ballots were not counted. HAVA requires that provisional voters be provided with written information about how to find out whether their vote was counted (and if not, why) using a free access system established by state or local election officials. On the basis of our local jurisdiction survey, we estimate that the majority of local jurisdictions that had provisional ballots cast used the telephone (often toll-free) as the free access system for voters in the November 2004 election to obtain information on whether their provisional ballot was counted, and if not counted, why not. Table 19 shows the estimated percentage of jurisdictions that used various methods. Some jurisdictions used more than one method. Election officials from jurisdictions we visited described a number of ways that provisional voters were provided information about how to learn the outcome of their votes for the November 2004 election, such as ballot receipts, a copy of the voter’s affidavit, a form letter, or a tear-off portion of the provisional ballot envelope. In a New Jersey jurisdiction we visited, provisional voters were given a toll-free number at which to leave their name and address, and then the results were mailed to them, according to election officials. The jurisdiction election officials noted that this process worked well. 
Figure 46 provides examples of the information voters were provided to inquire whether their vote was counted. In our local jurisdiction survey, we asked how soon after Election Day information on the outcome of a provisional ballot was made available to voters. In written comments, some election officials reported that this information was available to voters as early as the day after the November 2004 election or within 7 days after the election, although some jurisdictions allowed 1 month, or until the election was certified. Election officials in some of the jurisdictions we visited reported that few voters called to find out if their provisional votes were counted. For example, in a Colorado jurisdiction, officials reported approximately 100 calls out of over 6,100 ballots cast; a Kansas jurisdiction election official estimated receiving calls from 6 provisional voters out of over 3,600 who voted; a New Jersey jurisdiction reported receiving 69 inquiries from voters out of over 6,300 cast; and in 3 other jurisdictions we visited, election officials reported no one called to find out if his or her vote was counted. Estimating the number of provisional ballots initially cast and those that were counted in the November 2004 election is difficult because complete information is not available, and because of differences in how state and local jurisdictions have implemented HAVA provisional voting requirements affecting how and whether such ballots are provided and counted. Those same factors limit the value of comparing provisional ballots cast and counted among jurisdictions. Although estimation is difficult, our survey allowed us to estimate provisional ballots cast, but with strong caveats. While HAVA required that most states permit individuals to cast provisional ballots under certain circumstances, not all jurisdictions reported having provisional ballots cast in their jurisdiction in the November 2004 election.
On the basis of our survey of local jurisdictions, we estimate that provisional votes were cast in 33 percent of jurisdictions and none were cast in 67 percent of jurisdictions. Our estimates varied by size of jurisdiction regarding whether provisional votes were cast or not. We estimate that in 99 percent of large jurisdictions, 84 percent of medium jurisdictions, and 12 percent of small jurisdictions provisional votes were cast in the November 2004 election. The differences between all sizes of jurisdictions were statistically significant. The differences in provisional ballot use among jurisdictions of different sizes may be explained in part by comments from election officials in local jurisdictions surveyed and from officials in jurisdictions we visited. For example, officials in several smaller jurisdictions included in our nationwide survey who reported that provisional ballots were not cast in their jurisdiction indicated in written comments that election workers are likely to have personal knowledge of a voter’s eligibility. As one election official from a Wisconsin jurisdiction wrote, provisional ballots were available, but use of the ballots was not necessary. Similarly, in a small jurisdiction we visited in New Hampshire, election officials told us that given the town’s small population of roughly 1,600 residents, 99 percent of the time someone in the room knew the individual and could vouch for his or her identity. In this circumstance, according to election officials, no verification was necessary at the poll to ensure the voter's identification. The number of provisional ballots cast and counted nationally is difficult to estimate with precision because of the limited data available and data quality concerns. Estimates that are available, however, do serve as an indication that the HAVA provisional voting requirements have allowed potentially eligible voters who otherwise might have been turned away to participate.
We requested November 2004 data on provisional ballots cast and counted in our survey of local election jurisdictions nationwide, but because of missing information and other methodological concerns, our estimate is provided only with strong caveats. We estimate that a total of between 1.1 million and 1.7 million provisional ballots were cast. Our range reflects the fact that an estimated 20 percent of the jurisdictions in our survey did not provide data on how many provisional ballots were cast. We could not estimate the number of provisional ballots that were counted with any level of certainty, because of a very high level of missing data—an estimated 40 percent of the jurisdictions did not provide data on the number of provisional ballots counted. In addition, some jurisdictions in our survey providing the number of provisional ballots cast may have actually provided the number of provisional votes counted. This may have occurred because jurisdictions would more likely have a record of the number of provisional votes determined to be qualified and counted than they would have the number of provisional votes originally submitted at polling places (cast). For example, in 1 jurisdiction we visited, provisional ballot numbers were provided only on the number of provisional votes that were counted. If some responses to our survey of local jurisdictions actually provided the number of votes counted rather than the number of votes cast, then our estimate of provisional votes cast may be an underestimate. HAVA specifies that information be made available to individuals through a free access system (such as a toll-free telephone number or an Internet Web site) regarding whether their provisional votes were counted and, if a vote was not counted, the reason it was not counted.
The specifics of implementing such a system, such as the methods by which such information is to be identified, collected, and maintained, however, under HAVA, are left to the discretion of state and local election officials. The National Task Force on Election Reform recommended that states develop a uniform method for reporting provisional ballots at the state and national levels, and also that states collect data on the number of provisional ballots cast on Election Day. Some states might require that information on ballots cast and counted be sent to the state to compile statewide figures. Election officials in a Connecticut jurisdiction we visited, for example, said that the Registrar completed a provisional ballot report for the Secretary of State in accordance with state guidance. Other national estimates of the number of provisional votes cast and counted in the November 2004 election have been affected by data quality issues. The Election Assistance Commission, using data from its survey of election administrators, estimated that 1.9 million voters cast provisional ballots at the polls in November 2004, and that 1.2 million of those votes cast were counted. As with our estimates, EAC cautioned that the coverage, or response rate, for its estimates was limited. The response rate for provisional ballots cast and counted was 46 percent and 38 percent, respectively. The report authors stated that data quality issues, such as missing data or data error entries (such as in 15 jurisdictions in the EAC report where the number of provisional ballots counted was greater than the number the jurisdiction reported as cast) were identified and corrected where possible. On the basis of data collected at different times from different sources in different states, electionline.org estimated that over 1.6 million provisional ballots were cast, and nearly 1.1 million of them were counted. However, readers are cautioned here as well about the limitations of the available data.
For example, figures are not definitive because of the variation in requirements and procedures among (or even within) states, and estimates are based on incomplete information. The authors stated that they provided provisional voting estimates with the intent of moving the discussion of provisional voting forward. Information provided by some of the jurisdictions we visited illustrates the variation in the reported number of provisional ballots cast and counted during the November 2004 election, as shown in table 20. When looking at provisional ballots cast and counted for a particular jurisdiction, the variability in the implementation of provisional voting by states and jurisdictions makes interpretation and comparison among jurisdictions difficult. As mentioned earlier, the number of provisional votes cast and counted may vary based on a number of factors. In general, states and jurisdictions vary in why and how provisional ballots are provided to potential voters, as well as the state and local procedures for how provisional ballots are counted. A partial list of these factors includes the following:
- State provisions varied regarding the additional circumstances (apart from the minimum requirements specified in HAVA) under which a provisional ballot may be offered.
- Some states offered other voting options in addition to provisional ballots to voters with eligibility issues (such as signing an affidavit, then voting normally or casting a challenged ballot).
- The manner and extent to which the provisional ballot options available to voters were actually utilized varied in connection with the size and approach of the jurisdictions.
For example, smaller jurisdictions were, according to election officials, less likely than larger jurisdictions to use the provisional ballot option because they were more knowledgeable of voters in their jurisdictions and therefore better positioned to address eligibility issues, and some jurisdictions reported taking additional steps to send the voter to the correct precinct before offering a provisional ballot, whereas other jurisdictions might not do so.
- States established the location where voters must cast their provisional ballots in order for such ballots to be eligible to be counted. For example, in some states this location is the specific precinct in which the voter is registered, and in other states, the voter may be anywhere within the county (city, parish, township) in which the voter resides and is registered.
- States or local jurisdictions established other conditions (e.g., the time limit for providing required identification) that varied in determining whether a provisional vote was to be counted.
- There were other factors, such as instances in which the polling location was kept open late because of a federal court, state court, or other order extending the polling hours.
Notwithstanding the variations we have identified in provisional voting processes and challenges identified by some election officials in jurisdictions we visited, several election officials reported that they thought the provisional voting process worked well for the November 2004 election, in that people who would normally not have been able to cast a ballot were allowed to do so, and some of those ballots were counted.
While many jurisdictions reported having at least one procedure in place for the November 2000 election to help resolve eligibility questions for voters whose names did not appear on a polling place registration list, only 20 states plus the District of Columbia reported using some form of provisional voting in the November 2000 election. In those states in which it was not available, voters whose names did not appear on polling place registration lists, but stated they had properly registered to vote, were often not permitted to cast a regular ballot. Provisional voting is an important means of enhancing voter access to the polls. HAVA required all states that required registration prior to Election Day to provide for provisional balloting by the November 2004 election, but left to states the specific choices on how they would implement that requirement. In exercising this discretion, states have created varied provisional voting rules and practices. Under HAVA, provisional ballots are to be counted as a vote under state law if the person casting the ballot is determined to be eligible to vote under state law. These statutory provisions and determinations of eligibility and what constitutes a properly voted ballot vary by state and thus affect the state rules and procedures used to determine whether provisional ballots are counted. At least 1 state, for example, allows voters to cast a provisional ballot for statewide offices anywhere in the state, with the ballot returned for eligibility verification and counting to the jurisdiction in which the voter said he or she was registered. Other states required that voters cast provisional ballots in their assigned precinct for the ballots to be counted. The actual impact of these varying practices on provisional balloting and vote counting is unknown.
Comparable data across states are not available to determine whether or how these variations affect the number of voters who are permitted to cast provisional ballots or the percentage of provisional ballots that are actually counted. Thus, it is difficult to assess the potential impact of a state changing its existing rules and practices. However, based on the data that are available, it is clear that provisional voting has helped to facilitate voter participation of those encountering eligibility-related issues when attempting to vote. Once the polls close on Election Day, the process of determining and certifying the final results begins. Vote counting is a complex, multistep process with many variations across the nation. The exact process depends upon a number of variables. Among them are state requirements that define standards for determining voter intent for ballots that are not clearly marked, deadlines for certifying the final count, and specifications for conducting recounts when required. The types of ballots to be counted affect vote tabulations because absentee and provisional ballots typically undergo some type of verification before counting, while early and regular Election Day ballots typically do not require this processing. The types of technology used for vote casting and counting—hand-counted paper ballots and machine-counted ballots (punch card, optical scan, and those cast electronically)—also add variance to how votes are handled. The counting process requires attention to detail, and problems in any one election stage can affect the final vote count. Moreover, its orchestration requires the effective interaction of people, processes, and technology. This chapter discusses the continuity and key changes since the 2000 general election and challenges—new and ongoing—encountered by election officials in the 2004 general election with respect to counting votes. 
In the 2004 general election, vote counting remained an intricate, multistep process characterized by a great variety of local procedures depending on a local jurisdiction’s technology, size, and preferences. As with the 2000 general election, the proportion of jurisdictions nationwide reporting recounts or contested elections remained small in the 2004 general election. There were some notable developments related to vote counting. A significant change was that, by the 2004 general election, more states had developed guidance for determining voter intent on unclear ballots. Eighteen states that reported not having guidance in the 2000 general election reported in our survey that they had such guidance in place for the 2004 general election. In addition, 9 states reported changes relating to the process of conducting recounts. Some added requirements for mandatory recounts. Others changed their conditions and guidance for conducting recounts. The results of our state survey showed that while 29 states and the District of Columbia did not require audits of vote counts, 9 states reported having taken some legislative or executive steps toward doing so. Many of the problems in managing people, processes, and technology that had confronted election officials across the country in the November 2000 general election continued to challenge them in the 2004 general election. Equipment problems, poll worker errors, and voter errors made it difficult to tabulate the votes quickly and accurately, according to some election officials. A new phenomenon emerged as a challenge to election officials, as well: Some jurisdictions reported difficulty completing the extra steps required to verify and count provisional votes within the time allowed for tallying the final vote count.
Finally, while recounts and contested elections remained rare in the 2004 general election, those that did occur, particularly in Washington state, revealed the intricacies and vulnerabilities of the election process. The basic elements of the vote-counting process we described in our October 2001 comprehensive report on election processes nationwide remained in practice for the general election of 2004. Of necessity, it was a complex, multistep process, with many variations, depending on a jurisdiction’s technology, size, and preferences. As with other elections, vote counting in the 2004 general election involved certain common steps: closing and securing the polls and voting equipment; securing the ballots; reconciling the number of ballots at the polls (e.g., the number available at the polls compared to the number cast, spoiled, and remaining); transporting ballots and equipment from the polling places to a central location where they were secured; in some cases electronically transmitting results from polling place voting equipment to a central tally location; verifying provisional and absentee ballots for counting; determining whether and how to count ballots that may be improperly or unclearly marked; conducting any necessary recounts; and certifying the final count. Preliminary to counting, a key step was to secure the voting machines and ballots so that no additional votes could be cast. Procedures for securing equipment varied with the equipment that was in use. However, on the basis of our survey of a representative sample of local election jurisdictions nationwide, we estimate that 91 percent of all jurisdictions used hardware locks and seals as one of their predominant security measures. In our site visits, local election officials also described securing DRE tapes and cartridges under lock and key before and after they were delivered to boards of elections or other authorities. 
For example, election officials from 1 jurisdiction we visited described securing memory cards in optical scan counting machines by attaching a plastic band with a serial number. The band would have to be severed in order for the memory card to be removed, according to election officials. One such band is shown in Figure 47 securing a voting equipment bag. Election officials in 2 jurisdictions we visited also described a variety of measures they took to ensure that ballots were not lost or miscounted. In 1 Washington jurisdiction, officials said they secured punch card ballots at polling places for counting elsewhere by transporting ballots twice: once earlier on Election Day and the other time after the polls had closed. These officials also said that ballots were bundled into groups of 50, separated by type (Election Day, provisional, and absentee ballots), and put into transport carrier safe boxes. Two poll workers, one from each political party, accompanied the ballots when they were transported to the elections office for counting. Similarly, in a Colorado jurisdiction we visited, election officials said that at the close of Election Day they sealed optical scan ballots from the polling place and the optical scan counter to prevent tampering. Then, two election judges transferred the ballots and optical scan counter to the counting center. While ballot-securing methods varied, the results of our local jurisdiction survey showed that most jurisdictions had written policies and procedures in place in the November 2004 general election to secure ballots (including paper and electronically stored ballots). As shown in table 21, on the basis of our survey we estimate that two-thirds of local jurisdictions had written procedures for transporting ballots, and about three-quarters had written procedures in place for secure ballot storage rooms. In addition, reconciling ballots with the number of voters was a common step in securing ballots before they were counted. 
According to our state survey, 47 states and the District of Columbia reported that they required jurisdictions to count or keep track of ballots that were unused, spoiled, rejected, or issued but not returned. Two states, Montana and Maine, reported not requiring jurisdictions to count or keep track of such ballots. New York reported in our survey that such tracking was not necessary because it did not use paper, optical scan, or punch card ballots. During our visits to election jurisdictions, we asked officials how they reconciled ballot and voter numbers. The election officials reported conducting cross-checks in a number of ways, but generally followed a process of reconciling any discrepancies between the total numbers of ballots on hand at the beginning of the day, the number of voters who signed in at the polling place, and the number of ballots cast. Once the ballots were reconciled in the November 2004 election, local jurisdictions tabulated and canvassed (or reviewed) the vote. Both counting and canvassing were ongoing processes in the effort to ensure an accurate tally. After initial tabulations of votes on election night, which were typically released to the public, canvassing was typically the process of reviewing all votes by precinct, resolving problem votes, and counting all types of votes (including absentee and provisional votes) for each candidate and issue on the ballot and producing an official total for each. The official total was usually certified by an election official. This process varied among jurisdictions in terms of how and where it was done and who was responsible. The counting process involved several different types of ballots, cast under different circumstances: General election votes are cast at polling places on Election Day by voters who appeared in the registration lists for that precinct and voted a regular ballot.
Provisional votes are cast by those, for example, whose registration (and qualification to vote) could not be established at the time of voting at the polls on Election Day. Absentee votes are generally votes received and cast by mail before Election Day. Early votes are generally cast in person before Election Day. According to our local survey, for the November 2004 general election, local jurisdictions nationwide used different voting methods for different ballot types. As shown in table 22, we estimate the largest percentages of jurisdictions used optical scan and paper hand-counted ballots for Election Day. Also, optical scan and punch card vote-counting methods were used at precincts or at central locations. Jurisdictions could check more than one voting method. In our local jurisdiction survey, we also asked what predominant voting method was used to process the largest number of ballots in the 2004 general election. We estimate that hand-counted paper ballots were the predominant tabulation method for 30 percent of all jurisdictions, although these were almost all small jurisdictions. Specifically, we estimate that 41 percent of small jurisdictions, 3 percent of medium jurisdictions, and no large jurisdictions hand-counted paper ballots. Small jurisdictions were statistically different from large jurisdictions. As in the November 2000 general election, the counting process for the November 2004 election took place at precincts or at centralized locations, such as election headquarters at town halls and even warehouses. In jurisdictions we visited, we learned about some of the substantial variations in the sequence, procedures, and precautions taken to conduct the count. We found in our site visits that vote counting ranged from a very simple process in a small jurisdiction to more complex processes in larger jurisdictions. 
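The ballot reconciliation step described earlier (comparing the ballots on hand at the start of the day with the numbers cast, spoiled, and remaining, and comparing ballots cast with poll-book sign-ins) amounts to a simple arithmetic cross-check. The sketch below is illustrative only; the function and field names are hypothetical, and actual procedures varied by jurisdiction.

```python
# Illustrative sketch of a precinct-level ballot reconciliation check.
# Names are hypothetical; actual jurisdiction procedures varied.

def reconcile_precinct(ballots_on_hand, cast, spoiled, remaining, sign_ins):
    """Return a list of discrepancies found for one precinct."""
    problems = []
    # Every ballot issued at the start of the day must be accounted for.
    if ballots_on_hand != cast + spoiled + remaining:
        diff = ballots_on_hand - (cast + spoiled + remaining)
        problems.append(f"ballot count off by {diff}")
    # Ballots cast should match the number of voters who signed in.
    if cast != sign_ins:
        problems.append(f"cast ({cast}) does not match sign-ins ({sign_ins})")
    return problems

print(reconcile_precinct(1000, 612, 8, 380, 612))  # balanced precinct -> []
print(reconcile_precinct(1000, 612, 8, 379, 613))  # two discrepancies
```

A precinct whose numbers balance produces an empty list; any nonempty result would be the kind of discrepancy officials described resolving before tabulation began.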
For example, a small New Hampshire jurisdiction, with just over 1,000 registered voters, had one polling place and one precinct open on Election Day, according to election officials. They told us the paper ballots were not transferred to any location for counting and were hand-counted by 25 election workers. These officials also said that five teams of five individuals each reviewed votes cast on each paper ballot and used paper and pencil to record and tally vote totals. The final election outcomes were written on a standard form and submitted to the New Hampshire Secretary of State’s office, according to election officials. In contrast, election officials in a large Washington jurisdiction described a more complex process for their centralized vote count of punch card ballots. As described by these officials, their process enabled them to begin reporting results on Election Day evening by precinct and to provide updates of the count every 30 minutes. Once Election Day ballots were transferred to the election office by poll workers, the ballots were counted to determine total numbers, according to election officials. They also told us that after the ballots were separated by precinct, up to 20 inspection boards, composed of two Republicans and two Democrats each, inspected the ballots one precinct at a time. In the inspection process, the officials said that the ballots were further separated into categories—those that were machine-readable and those that required further examination, such as ballots with write-in candidates or with a chad hanging by two or more corners. Once all questions were resolved (including any that would require review by a canvassing board), they told us ballots in batches of 500 each were placed in trays by precinct and brought to the ballot tabulation area. According to these officials, the jurisdiction used a punch card tabulator, which was connected to a computer and had a processing speed of 600 ballots per minute (see fig. 48).
Once all ballots were counted, jurisdiction election officials told us they generated an unofficial report with results for all races and voting propositions. This initial tally was posted on the county Web site and released to the press, candidates, and public, according to election officials. Six of the jurisdictions we visited told us that they counted Election Day votes at the local precinct, where poll workers would tabulate results and resolve any ballot issues that could be handled locally. For example, in a large Kansas jurisdiction, election officials said that voters were able to place their ballots in an optical scanner at the polling place that read the ballot and rejected it if there were any problems. According to officials there, the machines could return to the voter any ballot that, for example, had too few or too many votes for a specific office and provide a screen message for what to correct before resubmitting the ballot. After the polls closed, the optical scan machines with their memory cards—which had been programmed for the specific precinct—were transferred to election headquarters, according to election officials. The officials also said the optical scan machines were linked electronically to one computer and data from the memory cards were uploaded so that votes from all precincts could be tallied. Absentee, provisional, and early votes each required some additional steps to manage in order to include them in the vote count. Absentee votes: According to our state survey, all states reported having some provision for absentee voting in the 2004 general election. As we discussed in chapter 3, on absentee voting, absentee ballots must typically undergo some type of verification prior to counting. At 1 Colorado jurisdiction we visited, officials said that they began verifying and counting absentee ballots 10 days before Election Day. 
At 1 jurisdiction in Washington, election officials said that they qualified the absentee ballots as they were received at the election office, but did not count the votes until 3:00 p.m. on Election Day. Additionally, at a jurisdiction in Illinois, election officials said that they distributed most absentee ballots to their respective precincts to be counted along with the Election Day ballots. In each of these jurisdictions, however, according to election officials, the absentee ballot results were not released until after the Election Day polls were closed. Also, on the basis of our local jurisdiction survey, we estimate that 99 percent of election jurisdictions included the counts of qualified absentee ballots in the final certified count, regardless of their effect on the outcome. Provisional votes: Provisional voting, which was required by HAVA in all but 6 states during the 2004 general election, generally required several steps. At all of the local jurisdictions we visited that used provisional ballots, election officials said that the ballots were transferred to an election office or central count location, where the eligibility of the voter was verified before they were counted. We estimate, on the basis of our local jurisdiction survey, that 83 percent of jurisdictions that provided provisional ballots during the 2004 general election transferred the provisional ballots to a central location for counting. Those jurisdictions that did not engage in transfers may have been jurisdictions with only one precinct, in which case the votes were tallied on-site. At all of the jurisdictions we visited that used provisional ballots, election officials said they included provisional ballots determined to be verified in certified vote counts regardless of their effect on the outcome of the election.
Early votes: According to our state survey, for the November 2004 election, 24 states and the District of Columbia reported they allowed early voting, and from our local jurisdiction survey, we estimate that about 23 percent of local jurisdictions allowed early voting in the election. In early voting jurisdictions we visited, a variety of reconciliation and counting processes were used, according to election officials. At one jurisdiction we visited, election officials told us that early voting DRE votes were reconciled daily. According to these officials, at the end of the early voting period, election department staff shut down the DRE machines and removed the memory cards (which stored cast votes). The officials said that the memory cards were sealed and returned to the election department office for counting, in a manner similar to Election Day DRE votes. In another jurisdiction we visited that used optical scan machines for early voting, officials told us that ballots were inserted by voters into the machines at the polls—the same procedure used on Election Day. At the end of each early voting day, according to the officials, the ballots from that day were physically transferred to the clerk’s office and the optical scan results were submitted by modem to the jurisdiction’s headquarters. Election returns posted on election night are unofficial and are not considered final until canvassing—the process described earlier of reviewing all votes by precinct, resolving problem votes, and counting all types of votes—is complete and the count is certified. Certification is when the vote count is finalized, generally by state and local officials. Our state survey showed that for the 2004 general election, states reported varied practices for when counts were certified and by whom, similar to the general election of 2000. Our state survey showed that most states reported setting certification deadlines, but the certification periods varied from state to state. 
Four states (Alaska, Nebraska, New Hampshire, and Rhode Island) and the District of Columbia reported not specifying a deadline following Election Day for certification of election results, while all other states reported specifying such a deadline. For example, Delaware reported certifying results on the second day after Election Day, while Michigan reported a deadline of not later than 40 days. Some states reported caveats and varying levels of specificity in the certification deadlines. Maine reported allowing 3 days for local election official certification and 20 days for state-level certification. Missouri’s reported deadline was by the fourth Tuesday following the election. North Dakota reported a deadline of not less than 3 days, but not more than 6. Similarly, the requirement reported for Texas was 15 to 30 days after the election. An important facet of the canvassing process is the consideration that may or may not be given to ballots that have not been marked properly. An improper mark, for example, could be a circle around a candidate’s name instead of a checked box on a ballot that is to be scanned optically. For those states providing for the determination of voter intent, the importance of having explicit and consistent criteria for treating unclear ballots became evident in the 2000 general election when different interpretations for punch card ballots in Florida made the close presidential race extremely contentious. While subsequent federal reforms have not specified standards for treating unclear ballots, HAVA requires that each state adopt uniform standards, by January 2006, that define what constitutes a vote and what will be counted as a vote for each category of voting system used in the state. In our state survey, 39 states and the District of Columbia reported that for the November 2004 general election they had requirements or guidance for determining voter intent that focused primarily on improper ballot marks.
Forty-five states and the District of Columbia reported they had requirements or guidance for determining how or whether to count a machine-unreadable ballot—one that cannot be processed by machine because it is damaged. Eighteen states that had reported not having provisions in place for the 2000 general election reported to us in our 2005 state survey that they had voter intent guidance for the November 2004 general election. Georgia, for example, had developed requirements for four methods: DRE machines, lever-type machines, optical scan, and hand-counted paper ballots. Some of Georgia’s requirements were for certain ballots rejected by optical scan machines. These requirements provide for some measure of subjective determination of a voter’s intent by election officials in certain specified instances. In such an instance, a vote shall be counted, under these Georgia provisions, if in the opinion of the vote review panel, the voter has clearly and without question indicated a choice for which the voter desired to vote. In addition, under specified circumstances, these Georgia provisions also provide for a similar type of voter intent determination with respect to hand-counted paper ballots. As described below, we found in our site visits that under state or local guidance, local jurisdictions we visited had gone to varied lengths in the 2004 general election to salvage ballots that were improperly marked or that were machine unreadable. These efforts varied by the type of voting equipment used in the jurisdiction. Optical scan ballots: In some jurisdictions, election officials told us that optical scan machines located at polling places could notify the voter of an unreadable or incorrectly marked ballot at the moment it was submitted. However, where the ballots were transferred to a central location for counting this would not be the case. 
In one jurisdiction in Colorado where optical scanning was done centrally for absentee ballots, election officials told us they were required to interpret voter intent or replace an unreadable ballot. According to election officials, the jurisdiction had instructions, which they stated were based on state statutes, specifying that bipartisan election judges would be the responsible parties for determining voter intent. Their deliberations, however, would be observed by others, according to the instructions. If a decision was reached on voter intent, a replacement ballot could be created and run through the optical scanner, according to the officials. Officials in a Kansas jurisdiction we visited said that state election standards were very specific for interpreting an incorrectly marked optical scan ballot. They would count a vote if a mark was made near, but not inside, the oval and was not closer to another candidate’s name. A completed oval would also be counted if another oval for the same race was scribbled or crossed out. If the ballot could be interpreted locally, officials said election workers duplicated the vote on a new ballot for the optical scanner to read. According to election officials, if the intent was not clear, the ballot would be sent to the Board of Canvassers for further examination. State guidance also included standards for hand-counted paper ballots. In Florida, guidance in place for the November 2004 general election was even more specific than that provided in Colorado or Kansas.
The guidance specified, for example, that, with respect to manual recounts, a vote may be counted if “there is an ‘X’, a check mark, a plus sign, an asterisk or a star, any portion of which is contained in a single oval or within the blank space between the head and tail of a single arrow and which does not enter into another oval or the space between the head and tail of another arrow.” It also allowed for a vote to be counted under additional specified circumstances including if “there is a diagonal, horizontal, or vertical line, any portion of which intersects two points on the oval and which does not intersect another oval at any two points,” provided that the horizontal line does not strike through the name of the candidate. Punch cards: While federal election reforms included provisions promoting replacement of punch card ballots, on the basis of our local jurisdiction survey, some jurisdictions continued to use them in the 2004 election. As was the case for other types of ballots, levels of guidance for interpreting voter intent varied by state. Illinois reported that it had no requirements or guidance for determining voter intent, according to our state survey. Election officials in 2 Illinois jurisdictions using punch card ballots told us in our site visit that election workers did not attempt to ascertain the intent of voters on punch card ballots that were improperly punched. If the ballot could not be counted by a punch card-counting machine because of an improper punch or mark, the votes were not to be counted. In contrast to Illinois, Washington reported that it had guidelines or requirements regarding voter intent and allowed for remaking an unreadable or damaged punch card. In a Washington jurisdiction we visited that used punch card ballots in the 2004 general election, election officials said that state law guided their jurisdiction’s written instructions for determining voter intent. 
Election officials said voters were given very specific instructions for how to change their vote before casting their vote, if necessary, on a punch card ballot while at the polls. These officials also said ballots could be either enhanced or duplicated if it was clear that a voter had followed these instructions. Also, according to the officials, a problem ballot could be enhanced or duplicated by officials if voter intent could easily be determined. If voter intent was at all unclear, the ballot was to be sent to the canvassing board for review. According to officials, canvassing board meetings were open to the public and state guidelines were to be used to interpret voter intent. Figure 49 shows a punch card voting booth. Hand-counted paper ballots: While we estimate, on the basis of our local jurisdiction survey, that no large jurisdictions and only 3 percent of medium jurisdictions used paper ballots in the November 2004 general election for their predominant voting method, 41 percent of small jurisdictions did. This voting method presented yet another variation in the process of determining voter intent. For example, in one small jurisdiction we visited in New Hampshire, election officials we spoke with said a senior election official was on hand during ballot counting. They said if a ballot was unclear, the senior official would be involved to discuss it. If it was still unresolved, state guidance called for an unclear ballot to “be counted in accordance with a majority vote of the election officials present.” The guidance, which we examined, also provided examples of what marks on a paper ballot to accept, as shown in figure 50. As with the 2000 general election, recounts and contested elections were an uncommon event in the 2004 general election. On the basis of our local survey, we estimate that 92 percent of election jurisdictions nationwide did not conduct a recount for federal or statewide office. 
Also on the basis of our survey, recounts were more prevalent in large than in small election jurisdictions. Specifically, we estimate that 4 percent of small jurisdictions, 16 percent of medium, and 24 percent of large jurisdictions conducted recounts for federal or statewide offices. Both large and medium jurisdictions were statistically different from small jurisdictions. Similarly, in our state survey, 37 states and the District of Columbia reported they had no recounts for federal or statewide offices during the primary or general elections of 2004, as shown in figure 51. Recounts are, in general, conducted because a candidate, voter, or group of voters has requested it or because the margin of victory was within a certain specified margin such that state provisions required or allowed for a recount. Election officials in local jurisdictions we visited in several states where recounts were conducted described to us the procedures they used for their 2004 general election recounts. In a New Hampshire jurisdiction, where a recount was conducted of the presidential race of 2004, officials said the recount was requested by a presidential candidate to test the accuracy of the optical scan vote-counting equipment. The officials provided the following description of the recount: Five wards in the jurisdiction had been selected for a sample recount. It was conducted by the New Hampshire Secretary of State’s office, not by the local election jurisdiction. The jurisdiction’s only role in the recount was to provide the Secretary of State with the optical scan ballots from the applicable wards. After the Secretary of State recounted a portion of the optical scan ballots and found no significant discrepancies between the initial vote tally and the partial recount, a full recount was not conducted statewide, according to these officials. 
In North Carolina, races for two statewide offices (the Agricultural Commissioner and the Superintendent of Public Instruction) were subject to recounts because, under state law, the close margin of victory allowed the losing candidates to request a recount, according to election officials. In 1 North Carolina jurisdiction we visited, which used DRE machines, local election officials described the recount process as follows: The recount was conducted in a different manner from the initial count. For the initial count, votes were electronically transferred from each DRE machine to vote storage devices at the polls that stored the vote totals by precinct. The precinct totals were then downloaded from the vote storage devices onto a computer located at the jurisdiction’s election headquarters, and vote tabulation software summed vote totals from each precinct for each election contest in the jurisdiction. During the recount, rather than relying on aggregated votes totaled by precinct for a vote count, officials tabulated individual DRE ballots. To complete this process, the jurisdiction’s tabulation software recognized individual ballot images from the DRE machines rather than aggregated votes per precinct. The individual ballot images were downloaded onto the computer in election headquarters, and votes for the races in question were retabulated (by voter, rather than by precinct as in the initial count). The outcomes of both the Agricultural Commissioner and the Superintendent of Public Instruction races were unaffected by the recount results. Generally, contested elections are court actions initiated by a candidate or voter alleging, for example, that some type of misconduct or fraud on the part of another candidate, election officials, or voters occurred in a particular election. The results of our local survey indicate that contested elections were rare during the period from 2001 to the 2004 general election.
In our local survey, we asked local jurisdictions whether they held any primary or general elections for federal or statewide offices during this period that were contested, and if so, whether the outcomes for these elections changed. On the basis of our nationwide survey, we estimate that 5 percent of local election jurisdictions held a federal or statewide election that was contested during this period. The contested elections in which the winner did change involved races for offices such as state judge or governor, or for the U.S. House of Representatives. Perhaps the most heavily contested election in November 2004, which received a great deal of press coverage, was the Washington state governor’s race. A close margin of victory and a candidate request prompted two recounts, and after the state certification of a winner in the election, the second place candidate’s campaign and seven voters filed a petition in a state Superior Court contesting such certification, alleging that errors, omissions, mistakes, neglect, and other wrongful acts had occurred in conducting the election. The Chelan County Superior Court dismissed the election contest petition, finding that the petitioners failed to prove that grounds for nullification of the election existed. The Superior Court held, in general, that while there was some evidence of irregularities, the petitioners failed to adequately prove that the outcome of the election was changed as a result. The recount itself, however, revealed the substantial complexities involved in accomplishing an error-free count. We discuss this case more closely later in this chapter. State provisions for recount processes vary, and not all states have provided for or required them in the past. For the November 2004 general election, however, several states reported that they had introduced or further developed their specifications for election recounts since the 2000 general election. 
In our October 2001 report on election processes, we reported that 47 states and the District of Columbia had provisions for recounts, though most did not have mandatory recount provisions. To better understand recount reform efforts to help ensure vote count accuracy since the 2000 election, we asked states in our 2005 survey about changes to their mandatory recount provisions in place for the November 2004 general election. Nineteen states reported requiring a mandatory recount predominantly in cases of a tie or close margin of victory, whereas in 2001, 17 states indicated they required mandatory recounts. Thus, 2 more states reported requiring mandatory recounts for the 2004 general election than for the 2000 general election. In addition, 3 other states reported amending their existing provisions for mandatory recounts, while 3 said they had changed their requirements or guidance for who may request a recount, as shown in table 23. Three states—Hawaii, Mississippi, and Tennessee—reported not having any formal provision for conducting recounts for either the 2000 or the 2004 general election. Alabama, Pennsylvania, and Texas were the states that reported adding mandatory recount provisions for the 2004 election. Alabama law, in place for the 2004 general election, requires a recount when the election returns for any public office indicate that a candidate or ballot measure is defeated by not more than one-half of 1 percent of the votes cast for the office or the ballot measure—unless the defeated candidate submits a written waiver. In Pennsylvania, a recount is mandatory if an election is decided by one-half of 1 percent or less—unless the defeated candidate requests in writing that a recount and recanvass not be made. Texas reported that a recount was required only if two or more candidates tied in an election.
For the 2004 general election, Arizona, Minnesota, and Washington reported adding more specifications to the vote margins that trigger recounts in their states than were in effect during the 2000 general election. Arizona added triggers for different types of races. For the 2000 general election, Arizona reported requiring a mandatory recount when the margin of votes between the two candidates receiving the most votes was not more than 0.1 percent of votes cast for both candidates, or 200 votes for statewide offices and 50 votes for the state legislature. For the 2004 general election, Arizona reported in our state survey that it had amended its mandatory recount requirements so that the thresholds triggered by the number of votes only applied when the total number of votes cast was 25,000 or fewer. Washington’s mandatory recount provisions in place for the November 2004 general election had changed since the November 2000 general election. The requirement in 2000 for a mandatory recount by machine was a margin of 0.5 percent or less of total votes cast for the top two candidates. If the margin was less than 150 votes and less than 0.25 percent of total votes cast for the top two candidates, a manual recount was required. The amended requirement, in place for the November 2004 general election, specified that a recount by machine was required when the margin is both fewer than 2,000 votes and less than 0.5 percent of total votes cast for the top two candidates. If the margin was fewer than 150 votes and less than 0.25 percent of total votes cast for the top two candidates, there was to be a manual recount. Since the November 2000 election, Minnesota amended its mandatory recount triggers to include a specific percentage margin of victory in certain circumstances, rather than only a specified difference in the absolute number of votes between the top two candidates. 
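The layered triggers described above for Washington's 2004 rules amount to a simple decision procedure: compare the margin between the top two candidates against both an absolute vote threshold and a percentage of the votes cast for those two candidates. The sketch below is illustrative only; the function name and the vote totals in the usage example are hypothetical, and the thresholds encode the rule as described in the text (machine recount when the margin is both fewer than 2,000 votes and less than 0.5 percent; manual recount when fewer than 150 votes and less than 0.25 percent):

```python
def mandatory_recount_2004(top_votes: int, second_votes: int) -> str:
    """Illustrative sketch of Washington's 2004 mandatory recount triggers.

    Both thresholds are measured against the total votes cast for the
    top two candidates, per the rule described in the text. Hypothetical
    helper, not an official implementation.
    """
    total = top_votes + second_votes
    margin = top_votes - second_votes
    # Check the stricter (manual) trigger first, then the machine trigger.
    if margin < 150 and margin < 0.0025 * total:
        return "manual recount"
    if margin < 2000 and margin < 0.005 * total:
        return "machine recount"
    return "no mandatory recount"

# Hypothetical totals producing a 261-vote margin out of about 2.7 million
# votes for the top two candidates -- a margin of roughly 0.01 percent,
# which triggers a machine (but not a manual) recount under this rule.
print(mandatory_recount_2004(1_350_261, 1_350_000))  # machine recount
```

A 42-vote margin on the same totals, by contrast, would fall under both thresholds and require a manual recount, which matches the sequence of recounts that unfolded in the Washington governor's race.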
While a margin of 100 votes or fewer in an election had previously triggered a recount for the 2000 general election, Minnesota election officials reported in our state survey that for the 2004 general election their state required a recount if the margin was determined to be either less than one-half of 1 percent of the total number of votes counted or 10 votes or fewer when no more than 400 votes were cast. According to our state survey, state requirements or guidance for who may request a recount, in place for the November 2004 general election, changed in Florida, Maine, and Rhode Island since 2000. While any Florida candidate or candidate’s political party in 2000 could request a recount, this was no longer true for the November 2004 general election. For the 2004 general election, Florida election officials reported that no candidate or political party could request a recount, and that the only authorized recounts were mandatory recounts to be conducted when the margin of victory was 0.5 percent or less of the total votes cast. Rhode Island, which reported that for the November 2000 general election it had allowed recount requests by any candidate who trailed the winning candidate by less than 5 percent, reported that for the November 2004 general election, it required a smaller margin before a losing candidate could request a recount. For example, for races with between 20,001 and 100,000 votes, Rhode Island reported that it required a margin of 1 percent or less (or 500 votes) before a trailing candidate could request a recount, and for races with more than 100,000 votes the required margin was one-half of 1 percent (or 1,500 votes) before a trailing candidate could request a recount. Maine, on the other hand, reported that its recount provisions in place for the November 2004 general election were clarified to provide that an apparent losing candidate, rather than only the second-place candidate, could request a recount.
Twenty-nine states and the District of Columbia reported that for the 2004 general election, they did not have provisions requiring or allowing local jurisdictions to conduct a vote count audit of election results. However, in our state survey, 9 states reported taking action since November 2004 (e.g., enacted legislation or took executive action) to require audits of vote counts. As used in this report, a vote count audit is an automatic recount, in full or in part, of the vote tabulation, irrespective of the margin of victory, in order to ensure accuracy before certification. On the basis of our state survey, as shown in figure 52, 8 states reported that for the 2004 general election they had a vote count audit requirement for all local jurisdictions, 2 states reported requiring vote count audits for some local jurisdictions, and 11 states reported that they allowed, but did not require, such audits. We estimate, on the basis of our local survey, that 15 percent of all local jurisdictions were required by their states to conduct such audits as part of the certification process for the 2004 general election. Larger and medium jurisdictions were more likely to have been required to do so than smaller jurisdictions. Nine percent of small jurisdictions, 27 percent of medium, and 38 percent of large jurisdictions conducted a required vote count audit of the 2004 general election. Both large and medium jurisdictions were statistically different from small jurisdictions. Nine states reported in our state survey that they had enacted legislation or taken some executive action to require audits since November 2004. For example, in Washington, beginning January 1, 2006, prior to election certifications, county officials must audit the results of votes cast on DRE machines.
The audit must be conducted by randomly selecting up to 4 percent of the DRE voting machines or one machine, whichever is greater, and for each device, comparing the results recorded electronically with the results recorded on paper. During our visits to local election jurisdictions, election officials in 5 jurisdictions described conducting vote count audits as a part of the election certification process for the November 2004 general election. For instance, 2 large jurisdictions in Nevada reported that the state requires each jurisdiction to randomly audit election results when DRE machines were used. According to officials in 1 of these Nevada jurisdictions, they were required to select 1 percent of DRE machines, or 20 machines, whichever amount is greater, and to perform a manual audit of the machine-tabulated vote totals. The officials said that they used a computer program to randomly select which of the jurisdiction’s 740 DRE machines to audit. To conduct a paper-based audit, they told us that for each randomly selected machine, election workers printed the DRE result tapes from the voter-verified paper trail printer, manually counted the vote data on the tapes, and compared the manual count results to the original electronic results. In one large Illinois jurisdiction we visited, election officials told us they were required by the state to automatically audit (by retabulating votes) results of punch card ballots in 5 percent of their precincts, which were randomly selected. According to the officials, the State Board of Elections sent the jurisdiction officials a letter specifying which randomly selected precincts had to retabulate their votes. Election officials in a Pennsylvania jurisdiction we visited said that state law required random audits when electronic voting machines were used. According to these officials, they were required to audit 2 percent of DRE vote totals following an election. 
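The random-selection rules described above (Washington's up to 4 percent of DRE machines or one machine, whichever is greater; Nevada's 1 percent or 20 machines) share a common shape: take the larger of a percentage of the machines or a fixed floor, then sample that many machines at random. A minimal sketch, with a hypothetical function name and machine numbering:

```python
import math
import random

def select_audit_machines(machine_ids, pct, minimum, seed=None):
    """Randomly select machines for a precertification vote count audit.

    The sample size is the greater of `pct` percent of the machines
    (rounded up) or `minimum` machines, mirroring the state rules
    described in the text. Illustrative sketch, not an official procedure.
    """
    n = max(math.ceil(len(machine_ids) * pct / 100), minimum)
    n = min(n, len(machine_ids))  # cannot sample more machines than exist
    rng = random.Random(seed)     # seedable for a reproducible drawing
    return rng.sample(machine_ids, n)

# Nevada-style rule applied to a 740-machine jurisdiction like the one
# described above: 1 percent of 740 rounds up to 8, so the 20-machine
# floor governs and 20 machines are drawn.
audited = select_audit_machines(list(range(1, 741)), pct=1, minimum=20, seed=7)
print(len(audited))  # 20
```

For each selected machine, the audit then compares the electronically recorded results with the corresponding paper records, as the text describes.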
They told us, however, that in practice they actually audit all DRE machine vote totals to ensure an accurate vote count. They stated that vote data stored on DRE backup memory cards are printed and compared to vote data stored on DRE cartridges used in original vote counts. They said they operated on the assumption that because the internal memory cards serve as a backup system, there should be no difference in the totals. As in the general election of 2000, the 2004 general election saw failures to properly employ voting equipment. At several of the jurisdictions we visited, officials recounted mistakes in using the DRE systems that echoed other recent findings of inadequate understanding of the equipment on the part of those using it. In our September 2005 report on the security and reliability of electronic voting, for example, we noted that instances of fewer votes counted than cast in one Pennsylvania county in the 2004 general election had resulted from incorrectly programmed DRE machines. Similarly, in our 2005 site visits to election jurisdictions for this report, officials with whom we spoke recounted difficulties that had resulted from mistakes in programming the electronic equipment. In 1 Florida jurisdiction, for example, officials reported that the storage capacity of an optical scan accumulator (used to combine vote data from DREs and optical scanners) had been inadequately programmed to capture all of the votes cast. Officials there were able to discover and rectify the problem so that all votes were counted. In a Nevada jurisdiction, officials said that on Election Day, there were 198 provisional ballots (out of 4,532 cast) that were incorrectly programmed on the DRE machines at several polling locations, resulting in the provisional votes being counted without the voter first being qualified.
According to these officials, poll workers forgot to add the “0” to the beginning of the precinct number. The officials noted that 2004 was the first time that the jurisdiction had used provisional voting and that in the future they planned to use paper provisional ballots to avoid any confusion. In a North Carolina jurisdiction we visited, election officials told us about how a misunderstanding of the voting equipment resulted in the loss of votes. Specifically, election officials were unclear about the vote storage capacity of a DRE machine used in early voting and failed to notice the machine’s warning that its file was full. The software installed on this machine was an older version of the program and only recognized up to 3,500 votes, according to election officials. Election administrators believed that it could recognize up to 10,500 votes. They discovered the error at the close of Election Day when reconciling the number of votes cast on the DRE machine used in early voting with the number of voters credited with early voting at the polls. Furthermore, they said it was not until they subsequently conducted a simulation of votes cast that they discovered the cause of the problem. They also discovered that while the machine’s software flashed warnings on its screen when the voter file became full, election workers had not seen them because of the screen’s positioning. Also, according to the officials, they had been operating under the assumption that the machine would have automatically stopped accepting votes once the limit had been reached. Instead, the machine had continued to accept votes cast, overwriting earlier votes in order to accommodate the new ones. The officials said they determined that 4,235 votes were lost. Not all equipment failures resulted in lost votes, but some did create technical challenges.
Officials in a Colorado jurisdiction stated that memory cards for optical scan machines at early voting sites sometimes failed, which meant that all affected optical scan ballots were rescanned using a new card once poll workers realized that the original card was malfunctioning. Also, in our September 2005 report on the security and reliability of electronic voting mentioned earlier, we noted that a Florida county experienced several problems with its DRE system, including instances where each touch screen took up to 1 hour to activate and had to be activated separately and sequentially, causing delays at the polling place. In addition, we reported that election monitors discovered that the system contained a flaw that allowed one DRE machine’s ballots to be added to the canvass totals multiple times without being detected. In another instance, our report notes that a malfunction in a DRE system in Ohio caused the system to record approximately 3,900 votes too many for one presidential candidate in the 2004 general election. We also reported that a state-designated voting system examiner in a Pennsylvania jurisdiction noted that the county DRE system had technical problems, such as failure to accurately capture write-in votes, frozen computer screens, and difficulties sensing voters’ touches. During our 2005 site visits, officials from 3 jurisdictions also described several cases of jamming problems with optical scan and punch card ballot tabulators. For example, election officials in a Kansas jurisdiction we visited told us that an extensive two-sided optical scan ballot, which required that the optical scan counting equipment read the ballot front and back, frequently jammed voting machines because of its length. According to the officials, the ballot was not scored properly to feed easily through the equipment, and paper jams occurred frequently.
Election officials said the ballots had to be hand-sorted into 13 groups before scanning, which took time. Similarly, officials in a New Jersey jurisdiction told us that their optical scan machines had frequently jammed when reading provisional and absentee ballots. According to the officials, the ballots had two or three folds, which, in combination with the high volume of ballots being read, jammed the machine regularly. To repair the jams, officials told us they would straighten ballots and run them through again, or, if needed, would remake the ballot. Also, officials in an Illinois jurisdiction we visited said punch cards had also jammed in their tabulator. Officials there said that this had likely been due to the punch cards swelling in humid weather, and this problem had caused the scanner to misread ballots on several occasions. In all of these instances, the problems were corrected. While we heard about some human error at the polls during our site visits, our survey of local jurisdictions found that human error was a problem for a small portion of election jurisdictions with respect to at least one key function. Specifically, we estimate that 6 percent of local jurisdictions nationwide experienced poll worker errors in tracking and accounting for ballots. To the extent that these errors occurred, they were more common in large jurisdictions. We estimate that 1 percent of small jurisdictions, 14 percent of medium jurisdictions, and 34 percent of large jurisdictions had these errors. The differences between all size categories are statistically significant. In 10 of the jurisdictions we visited, election officials cited poll worker or voter errors as the cause of discrepancies in the number of ballots and voters. In 1 Ohio jurisdiction, for example, election officials said the discrepancy in the number of ballots and votes was caused by the fact that poll workers did not track some voters who left the polling place without voting.
In a Florida jurisdiction, according to election officials, some voters left the polling place without signing a poll book (which was used to reconcile voter numbers). Another cause for discrepancies in the number of ballots and voters cited by election officials in a Washington jurisdiction was that poll workers erroneously counted some provisional ballots as regular Election Day ballots, which led to the appearance of more regular Election Day ballots cast than voters credited with voting in that manner. Finally, from election officials in 2 jurisdictions we visited, we learned of voter errors in using voting technology. In one Kansas jurisdiction, officials reported that some voters did not know how to scroll down the electronic screen to see all of the information. Also, we were told by election officials in a New Jersey jurisdiction that poll workers had noticed that some voters had failed to press a button to finalize their votes. According to these officials, the poll workers watched for such a mistake, and in at least one instance, reached under the curtain to register a vote while both a Democrat and a Republican poll worker observed the maneuver. According to state survey responses, 7 states (Arkansas, California, Georgia, Oklahoma, Pennsylvania, South Carolina, and Virginia) encountered a challenge during the 2004 general election related to timely completion of the certification process. For example, Georgia election officials reported difficulty in certifying election results in a timely manner that would allow a runoff election to commence within 3 weeks of Election Day. California officials responded that achieving an appropriate balance between vote count accuracy and the speed of vote tabulation was a challenge statewide. Arkansas officials said that the Secretary of State’s office had to contact local election jurisdictions numerous times to receive certified election results in a timely manner. 
In some local jurisdictions we visited, we also heard about difficulty meeting certification deadlines, particularly with regard to provisional ballots. In 7 local election jurisdictions we visited, election officials cited concerns with the timing requirements of election certifications. Specifically, the task of verifying voter information with respect to provisional ballots and counting provisional ballots made achieving certification deadlines difficult. For example, officials in 1 Colorado jurisdiction said that verifying and counting provisional ballots within the state-mandated 12-day period required that the county hire additional workers. A Florida jurisdiction reported a similar challenge, but in this instance, these officials stated that the county canvassing board was required to consider each provisional ballot individually, which added to the challenge of meeting the short state certification deadline. One large jurisdiction in Illinois also reported that its 14-day certification deadline was difficult to achieve because of the large number of provisional ballots that had to be verified and counted. In a Washington jurisdiction, officials stated that verifying and counting all ballots (including provisional ballots) within state-mandated periods had been a challenge in 2004. In 2005, the Washington state legislature extended the mandated certification deadline from 15 to 21 days following any general election. While the 2004 recount in Washington was one of few statewide recounts conducted across the country, the types of issues that surfaced during the recount about Washington’s election system identified problems in all three key elements of elections—people, process, and technology. The close gubernatorial race and the recount subjected these elements to close scrutiny, revealing the vulnerability and interdependence of the various stages of the elections process and the unerring attention to detail that is required to run an error-free election.
It was, in fact, the closest gubernatorial race in United States history. In the initial statewide count, a mere 261 votes separated the top two candidates—about 0.01 percent of the total votes cast. An initial recount reduced that margin of victory to just 42 votes out of more than 2.7 million cast, and the final recount resulted in a 129-vote margin of victory for the candidate who came in second in the first two vote counts. In part because it is the largest election jurisdiction (in number of voters) in Washington state, King County was the subject of some of the greatest scrutiny. However, problems were identified by courts in other jurisdictions in the state as well. As a result of this scrutiny, as discussed below, Washington state, and King County itself, have subsequently instituted many reforms. We reviewed a variety of reports and studies on this extraordinary election, including state task force studies, an internal county review, a management audit sponsored by the Election Center, and the findings of a state Superior Court that resulted from a lawsuit challenging the results of the final recount. The principal problems we identified in these materials ranged from poll worker errors to challenges in using equipment. Described here, they illustrate how breakdowns in the interface of people, process, and technology may, at any stage of an election, impair an accurate vote count. In at least 11 counties, provisional ballots were found by a Washington state Superior Court to have been counted without verifying voter signatures or before verification of voter registration status was completed. For example, in Pierce County, Washington, 77 provisional ballots were found by the Superior Court to have been improperly cast. Provisional ballots were to have included on the ballot envelope the voter’s name and residence.
Because the provisional voter’s identity or residence was not marked on the provisional ballot envelope for these 77 ballots, voter registration status could not be verified. In King County, the court found that 348 provisional ballots were improperly cast without verifying voter eligibility. The Election Center management audit found this had occurred because the provisional voters had been allowed to put their ballots, which had not been verified, directly into the optical scan machines at the voting precincts. The Superior Court found that of these 348 provisional ballots, 252 were ultimately determined to have been cast by registered voters. According to the audit, this error resulted from poll worker confusion about who was accountable for the provisional voting process at the polls. No one poll worker was assigned responsibility for tracking provisional ballots. The Superior Court also found that more than 1,400 votes had been cast illegally by felons during the November 2004 general election in counties across Washington. Under Washington state law, in general, persons convicted of a federal or state felony are not eligible to vote unless their right to vote has been restored. According to the King County audit, some felons were registered to vote in King County. The audit stated that election registration officials had very limited information available to them regarding such felons that would have allowed them to periodically purge the rolls. Moreover, according to the audit report, when a former felon who wished to register signed an affidavit to attest to the fact that his or her voting rights had been restored, election officials had no expedient way to verify the claim, particularly for former felons convicted in a different county. In addition, the audit report noted that election officials did not necessarily have the authority to refuse to accept a registration form. 
In our June 2005 report on maintaining accurate voter registration lists, we found that similar challenges in identifying and removing felons from voter rolls were reported in other states as well. The Superior Court found that more votes were counted than the number of voters credited with voting. Specifically, a judge cited evidence of 190 excess votes counted in Clark County, 77 excess votes counted in Spokane County, 20 excess votes counted in Island County, and 14 excess votes counted in Kittitas County. In a King County internal report, election officials reported that the discrepancy between voters credited with voting and ballots cast was about 0.2 percent, or over 1,000 votes. The Election Center management audit concluded that the discrepancy may have been due, in part, to the use of an electronic wand held by temporary employees to scan the entry codes in the poll book when registrants came to vote. The audit noted space limitations and difficulty hearing the wand’s beep when it processed a bar code may have prevented an accurate count of voters. During our site visit with King County officials, they told us that separate from the wanding issue, poll worker training deficiencies may have contributed to discrepancies in the number of votes credited and cast when voter information was not entered properly into poll books. According to the Superior Court’s findings, in several counties uncounted ballots were discovered after the certification of the initial election results. The Superior Court found that there were 64 uncounted absentee ballots found in Pierce County and 8 in Spokane County. According to the Election Center audit, in King County, the uncounted ballots were both absentee and provisional ballots, and 22 absentee and provisional ballots were discovered in the base units of optical scan machines after the election was certified. 
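The excess-vote findings above reduce to a simple reconciliation check: compare ballots counted against voters credited with voting in each county and flag any surplus. A minimal sketch; the function name is hypothetical, and the county totals below are invented values chosen only to reproduce the excess-vote figures cited above:

```python
def excess_ballots(counties):
    """Return counties where ballots counted exceed voters credited.

    `counties` maps county name -> (ballots_counted, voters_credited);
    the result maps each county with a surplus to its excess-ballot
    count. Illustrative sketch of the reconciliation described in the text.
    """
    return {
        name: ballots - voters
        for name, (ballots, voters) in counties.items()
        if ballots > voters
    }

# Hypothetical totals that reproduce the Superior Court's excess-vote figures.
counts = {
    "Clark":    (160_190, 160_000),
    "Spokane":  (200_077, 200_000),
    "Island":   (35_020, 35_000),
    "Kittitas": (14_014, 14_000),
    "Balanced": (50_000, 50_000),  # no surplus: excluded from the report
}
print(excess_ballots(counts))
# {'Clark': 190, 'Spokane': 77, 'Island': 20, 'Kittitas': 14}
```

In practice, as the surrounding findings show, running this comparison is the easy part; identifying the cause of a discrepancy (scanning errors, poll book mistakes, misfiled ballots) is what required audits and litigation.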
The audit concluded that poll workers had failed to adhere to their procedures for checking these units when reconciling ballots after the polls closed, and recommended strengthening both procedures and training. In King County, during the second recount, the King County Canvassing Board discovered that election workers had disqualified 573 absentee ballots during initial canvassing when they could not find the voters’ signatures in the county’s new computerized voter registration list for verification. In addition, the election workers had not checked elsewhere for these signatures, such as on the voters’ paper registration forms. In the recount, the King County Canvassing Board decided to recanvass these ballots to determine whether their disqualification had been appropriate or whether these ballots should have been counted. According to the King County audit, the voter registration list had been very recently updated, and for this reason, not all voter signatures had been scanned and electronically stored in time for the general election so that election workers would have been able to find them. Verifying absentee ballots was another issue highlighted during the recount. According to press accounts, differences existed in how local jurisdictions in the state verified the signatures of absentee and provisional voters. The Seattle Times reported conducting a survey in which it found that signatures went through as many as four levels of review in one county and only one level in another. Also, the newspaper reported that some counties would look for as many as six different identifying traits of a signature, while others “eyeballed the handwriting.” Recommendations by the Governor’s Election Reform Task Force identified the verification of voter signatures as one of several areas needing more procedural consistency among the counties. 
Washington enacted into law a series of election reform measures in 2005 designed to clarify, standardize, and strengthen election requirements and procedures. Several of the statewide reforms specifically address problems described above, but others are broader measures designed to improve election administration. Examples of these measures are listed below. Unique provisional and absentee ballots: All provisional and absentee ballots are required to be visually distinguishable from one another and must be either printed on colored paper or imprinted with a bar code for the purpose of identifying the ballot as a provisional or absentee ballot. The bar code must not identify the voter. Provisional and absentee ballots must be incapable of being tabulated by polling place counting devices. Standardized guidelines for signature verification processes: The Secretary of State is to establish guidelines for signature verification relating to, for example, signatures on absentee and provisional ballot envelopes. All election personnel assigned to verify signatures are required to receive training on the established guidelines. State law also provides that while signatures on certain mail-in ballot envelopes (such as absentee ballots) must be compared with the voter’s signature in the county registration files, variation between the signature on a return envelope and the signature of that voter in the registration files due to the substitution of initials or the use of common nicknames (e.g., Joseph Smith versus Joe Smith) is permitted so long as the surname and handwriting are clearly the same. Triennial review of county election processes and reports listing corrective actions: Instead of being performed periodically, state-conducted reviews of county election-related policies, procedures, and practices are to be performed at least once every 3 years. If staffing or budget levels do not permit a 3-year review cycle, such reviews must be done as often as possible. 
The county auditor or the county canvassing board must respond to the review report in writing, listing steps to be taken to correct any problems. Before the next primary or general election, the Secretary of State’s office must visit the county and verify that the corrective action has been taken. Election law manuals for use in all vote-counting centers: The Secretary of State must prepare a manual explaining all election laws and rules in easy-to-understand, plain language for use during the vote counting, canvassing, and recounting process. The manuals must be available for use in all vote-counting centers throughout the state. Option to conduct voting entirely by mail: Another change introduced by the state, which may avoid errors at the polls, has been to give county officials the option to conduct elections entirely by mail. The new measure authorizes the use of all-mail voting in counties upon the express approval by a county’s legislative authority and provides that such approval must apply to all primary, special, and general elections conducted by the county. For example, King County has announced plans to conduct elections entirely by mail in 2007. The King County Independent Task Force on Elections found in 2005 that the King County election process basically involved simultaneously conducting two dissimilar elections. The task force stated that increasingly, a majority of King County voters (565,011, or slightly more than 62 percent in 2004) used the permanent absentee or vote-by-mail process. Despite this fact, the task force reported that the county also conducted a traditional election involving about 330,000 voters assigned to over 2,500 precincts and 540 individual polling places, and the use of hundreds of temporary election workers who must be trained and who work at the polling places for more than 13 hours on election days. 
Furthermore, the task force stated that both election processes contain independent, complex, and often conflicting requirements that have clearly caused significant problems for King County election officials. Having one means of voting for all citizens is perceived to be both more efficient and cost-effective than the previous process, according to the task force. Paper records for electronic voting devices and precertification audits of electronic voting results: All electronic voting devices must, beginning January 1, 2006, produce an individual paper record of each vote, at the time of voting, that may be accepted or rejected by the voter before finalizing his or her vote. In addition, before certifying election results, county officials must audit the results of votes cast on these devices. This audit is to be conducted by randomly selecting a specified percentage of electronic voting devices and, for each device, comparing the results recorded electronically with the paper records. The audit process must be open to observation by political party representatives if such representatives have been appointed and are present at the time of the audit. Separate from changes made at the Washington state level, King County, as reported in the Election Center audit, also implemented or was in the process of implementing changes to improve election administration that specifically address issues that arose during the 2004 general election. Examples of such reported changes are below: Controls to manage provisional ballots: Provisional ballots will be color-coded for easy recognition and will have timing marks that prevent the counter at the polling place from accepting them. Therefore, the voter has no option but to return his or her provisional ballot to a poll worker, who will place it in a provisional envelope. One additional poll worker is to be assigned to each polling place to exclusively manage provisional ballots for all voters at that polling place.
Controls to prevent misplaced ballots: Poll workers are required to record the serial number located at the bottom of the optical scan bins on the ballot reconciliation transmittal form. The serial number is not visible if any ballots remain in the bin. Increased poll worker training, attaching a flashlight to the inside of each bin, and continued adherence to existing procedures for troubleshooters to examine each bin before certification are also intended to help ensure that all ballots are properly handled and counted in future elections. Additional procedures for tracking absentee ballots and registration signatures: King County performed a database search of the entire voter file prior to the fall 2005 elections, in order to identify missing or unreadable signatures. On the basis of the search results, elections personnel contacted voters and made significant progress in updating the files. In addition, procedures at the absentee ballot operation center have been enhanced. New logs were created for tracking absentee ballots that required additional research because they were not easily verified. Also, in any instance where a voter registration signature is not on file, or is illegible, a search for the original record, as well as a call and a letter to the voter, is required. Improvements to procedures for reconciling ballots and voters: For the 2005 primary and general elections, the use of electronic hand wands to scan poll books, when reconciling ballot and voter numbers, was to be done at a county center where more space would be available. New checklists were developed that required staff to balance the number of signatures recorded with the wand against the number of ballots counted by the computer. Also, the hand-wand process was to occur at the beginning rather than at the end of the canvass to allow more time for any necessary research into potential discrepancies. 
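The balancing step described above (comparing the number of poll-book signatures recorded with the wand against the number of ballots counted by the computer) amounts to a simple per-precinct check. The sketch below is illustrative Python, not King County's actual system; the data structure and function name are assumptions.

```python
def reconcile(polling_places):
    """Return polling places where the number of ballots counted does
    not balance against the number of poll-book signatures recorded.

    polling_places maps a place name to a dict with "signatures"
    (e.g., scanned with a hand wand) and "ballots_counted" keys.
    The reported value is ballots minus signatures, so a positive
    number means more ballots than signatures.
    """
    return {
        place: counts["ballots_counted"] - counts["signatures"]
        for place, counts in polling_places.items()
        if counts["ballots_counted"] != counts["signatures"]
    }

# Hypothetical counts: Precinct 12 has one more ballot than
# signatures and would need research before the canvass is certified.
places = {
    "Precinct 10": {"signatures": 412, "ballots_counted": 412},
    "Precinct 12": {"signatures": 388, "ballots_counted": 389},
}
print(reconcile(places))  # {'Precinct 12': 1}
```

Running the check at the beginning of the canvass, as King County planned, leaves time to research each nonzero entry before certification.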
Although the methods used to secure and count ballots vary across the 50 states and the District of Columbia, the goal of vote counting is the same across the nation: to accurately count all ballots cast by eligible voters. As with the elections process overall, conducting an accurate vote count is not a simple process. It requires many steps, an unerring attention to detail, and the seamless integration of people, processes, and technology. Providing eligible voters multiple means and times within a jurisdiction for casting their ballots—early, absentee, provisional, and Election Day voting—enhances eligible voters’ opportunity to vote. At the same time, multiple voting methods and types of ballots can make the vote-counting process more complicated. In addition, short deadlines for certifying the final vote—as little as 2 days in 1 state—provide little time for election officials to review, verify, and count provisional and absentee ballots. Larger jurisdictions generally face more challenges than smaller jurisdictions because of the sheer volume of votes cast by all ballot types— absentee, provisional, and regular ballots. Provisional ballots were new for many jurisdictions in November 2004 and created some challenges in tracking, verifying, and counting. On the basis of their experience in November 2004, some jurisdictions are implementing new procedures for provisional voting, such as printing provisional ballots in a color different from other types of ballots or using paper ballots rather than DRE machines for provisional voters. Two jurisdictions we visited in Washington have announced plans to move to all-mail elections, which was authorized on a county-wide basis by recent state law. 
Although replacing in-person voting with all-mail voting eliminates some challenges—e.g., poll worker training on voting equipment operations and provisional voting, or the chance of malfunctioning voting equipment at the polls—in some circumstances it could magnify the importance of other aspects of state election processes, such as verifying votes, accurately matching voter signatures, and having guidance for determining voter intent from improperly or unclearly marked ballots. For those jurisdictions allowing or requiring the determination of a voter’s intent from an improperly or unclearly marked ballot, the importance of having explicit and consistent criteria for treating such ballots became evident in the 2000 general election, when different interpretations for such ballots in Florida made the close presidential race extremely contentious. Eighteen states that reported they did not have voter intent guidance in place for the November 2000 general election reported to us in our state survey that they did have voter intent requirements or guidance in place for the November 2004 general election. While federal election provisions do not address the state counting issue of ascertaining voter intent, HAVA did require states to adopt, by January 2006, uniform and nondiscriminatory standards defining what constitutes a vote and what will be counted as a vote for each type of voting system used by the state. The recount in the close gubernatorial election in Washington revealed the interdependence of every stage of the elections process in ensuring an accurate vote count. That experience also illustrated how small errors in election operations can affect the vote counting process. Were any state’s election processes subjected to the very close scrutiny that characterized the recount in Washington state, it is likely that imperfections would be revealed.
Votes are cast and elections are conducted by people who are not and cannot be 100 percent error free in all their tasks all the time. Thus, the consistently error-free vote count may be elusive, particularly in very large jurisdictions with hundreds of thousands of ballots cast in person, absentee, or provisionally. However, diligent efforts to achieve consistent error-free vote counts can help to ensure that any errors are reduced to the minimum humanly possible. Voting methods can be thought of as tools for accommodating the millions of voters in our nation’s more than 10,000 local elections jurisdictions. These tools are as simple as a pencil, paper, and a box, or as sophisticated as programmable computer-based touch screens. Regardless of method, however, the proper operation and functioning of each depends on its effective interplay with the people who participate in elections (both voters and election workers) and the processes (governed by policies, procedures, and so forth) that govern the interaction of people with one another and with the voting method. This chapter focuses on voting methods—the technology variable in the people, process, and technology election equation. It describes the use of voting methods in the 2004 general election, compares this technology environment with that of the 2000 general election, and examines plans for voting technologies in the 2006 election, particularly in light of the roles being played by states and HAVA. It also examines efforts to measure and understand how well voting equipment performed in the 2004 election (see fig. 53 for equipment examples), including the state of performance standards and local jurisdictions’ overall satisfaction with their respective voting methods. 
Additionally, this chapter discusses the state of practice relative to voting system security, testing, and integration, and presents key challenges facing all levels of governments as voting systems, related election systems, and supporting technologies continue to evolve. The technology of the voting environment can be characterized as varied and evolving, according to our 2005 state survey results and local jurisdiction survey estimates. We estimate on the basis of our local jurisdiction survey that the predominant voting methods most often used for the 2004 general election by large jurisdictions were DRE and precinct count optical scan, while medium jurisdictions most often used precinct count optical scan and small jurisdictions most often used paper ballot. In addition, the predominant voting method most often used for large jurisdictions changed from precinct count optical scan in 2000 to both DRE and precinct count optical scan in 2004, while the predominant voting methods remained the same for the other jurisdiction sizes. Also in the 2004 general election, an estimated one-fifth of jurisdictions used multiple voting methods to support voting activities. Most states generally exercised influence over the voting methods used by their respective elections jurisdictions through a range of approaches such as requiring the use of one specific voting method, helping with local acquisition efforts, or eliminating voting methods, according to our 2005 state survey. Ten states and the District of Columbia reported that they required the use of one specific method for the 2004 general election, and 4 additional states planned to require a specific method for the 2006 general election. Sixteen states and the District of Columbia reported that they were involved to some extent in local jurisdiction efforts to acquire voting systems, components, and services. 
States also reported that they were eliminating lever and punch card equipment between the 2000 and 2006 general elections. Specifically, for the November 2000 general election, 37 states reported that they used lever or punch card voting equipment; by the November 2006 general election, only 4 states had plans to use lever and punch card equipment. HAVA has influenced state and local decisions regarding particular voting methods by providing funds to states to replace punch card and lever voting equipment with other voting methods. This greater state involvement in jurisdictions’ choice of voting methods, combined with federal funding to replace lever and punch card voting equipment and certain HAVA requirements—among other factors—is likely to influence the adoption of DRE and optical scan voting methods. Federal and state standards provide an important baseline for the performance of voting systems and were widely adopted for the 2004 general election. However, according to our local jurisdiction survey, voting equipment performance was not consistently measured during the 2004 general election and varied by jurisdiction size and voting method, in part because some types of measures were not well suited to particular voting methods. For example, small jurisdictions were generally less likely to collect accuracy measures such as accuracy of voting equipment (estimated at 31 percent for small jurisdictions) than large and medium jurisdictions (66 percent and 54 percent, respectively), and this may be because the predominant voting method most used by small jurisdictions was paper ballot. On the other hand, on the basis of our local jurisdiction survey, we estimate that the vast majority of all jurisdictions were very satisfied or satisfied with their systems’ performance during the 2004 general election. For instance, we estimate that 78 percent of jurisdictions were very satisfied or satisfied with the accuracy of their voting system performance. 
The estimated high satisfaction levels demonstrated across different voting system performance areas and jurisdiction sizes contrast with our lower estimates of the performance measures that were collected for the 2004 general election. Although the reasons for these moderate collection levels are unclear, jurisdictions that did not collect performance data, or that considered such information not applicable to their situation, may lack sufficient insight into their system operations to adequately support their reported satisfaction in the variety of performance areas we surveyed. The moderate collection levels of data on operational voting system performance may present a challenge to state and local election officials in their efforts to make informed decisions on both near-term and long-term voting system changes and investments. A wide range of recently published concerns about the security of voting systems and the development of nationwide mechanisms under HAVA to improve security standards and processes have not yet produced a consistent approach across all jurisdictions for managing the security of voting systems. Our 2005 local jurisdiction survey and our visits to local jurisdictions found that voting system security has been primarily shouldered by local jurisdictions. However, states, vendors, law enforcement officials, and others shared in these efforts to varying degrees for the 2004 general election. Our state survey for the 2004 general election and visits to local jurisdictions indicated that security mechanisms employed by some states—but not others—included promulgation of policies and guidance, compliance of voting equipment with security standards, and monitoring and evaluation of implemented security controls.
According to our local jurisdiction survey estimates and visits to local jurisdictions, jurisdictions and their support organizations were largely responsible for implementation of security controls, such as access restrictions to voting equipment, system backup capabilities, and security-related testing. Estimates from our local jurisdiction survey also showed, however, that many jurisdictions nationwide had not documented their security measures, and we found that several of the jurisdictions we visited reported that they had not implemented recommended measures, such as security plans, training, and documentation of policies and procedures. Furthermore, decisions by states to continue using outdated voting system standards may allow the vulnerabilities of newer technologies to go unevaluated and impair effective management of the corresponding security risks. States and local jurisdictions face the challenge of regularly updating and consistently applying appropriate standards and other directives to meet the vulnerabilities and risks of their specific election environments. Testing and evaluation of voting systems also varied across states and jurisdictions for the 2004 general election. Our state survey found that most states required certification testing of their voting systems using a range of criteria. However, responsibility for purchasing a certified system typically rested with local jurisdictions. Other results from our 2005 state survey and responses from jurisdictions we visited indicated that acceptance testing continued to be commonly performed, but there was wide variation in the responsibilities and practices for this type of testing, including whether such testing was applied to new systems or upgrades, the extent of vendor participation, and the coverage of hardware and software functions.
Also on the basis of our local jurisdiction survey, we estimate that most jurisdictions conducted readiness (logic and accuracy) testing for the 2004 general election as they did for the 2000 election, but in some jurisdictions we visited, we found that they used different procedures that may have included one or more processes such as diagnostic tests, mock elections, or suites of test votes. In contrast, our local survey estimates indicate that parallel testing was employed by fewer than an estimated 2 percent of jurisdictions. This may be due, in part, to the lack of directives for conducting such tests. Finally, postelection voting system audit tests were conducted by fewer than half of jurisdictions for the 2004 general election, according to our local survey estimates, although many more large and medium jurisdictions performed these tests than small jurisdictions. As with other types of testing, the requirements and practices for audit tests were diverse. Factors associated with the testing of voting systems may further challenge states and local jurisdictions as they adapt to changes in voting system capabilities, standards, and national certification for the 2006 general election. Those factors are likely to include increased certification testing workloads to recertify systems with new capabilities, ongoing limits to the number of available testing laboratories until a new laboratory accreditation process becomes fully operational, and more complex testing because a new version of the federal voluntary voting system guidelines was added in 2005 to older federal standards from 1990 and 2002 that states are already using.
The number of jurisdictions that had integrated particular aspects of voting system components and technologies was limited for the 2004 general election, according to estimates from our local jurisdiction survey and visits to local jurisdictions for the selected areas of integration we examined, such as electronic programming or setup and electronic management. Two-thirds of the jurisdictions we visited told us that they used electronic programming or setup of voting equipment, and an estimated 7 percent of jurisdictions that used voting methods other than paper ballots, according to our local survey, connected their voting equipment via a local network at polling locations. Relatively few local jurisdictions we visited also reported having plans for integrating or further integrating their election-related systems and components for the 2006 general election, and in the instances where jurisdictions reported plans, the scope and nature of the plans varied. For instance, officials at 5 jurisdictions we visited reported plans to introduce a voter-verifiable paper trail (VVPT) capability for future elections, and officials from 1 jurisdiction reported plans to purchase an optical scanner with the ability to tabulate both DRE and optical scan election results. Nevertheless, the potential for greater integration in the future does exist as states and jurisdictions act on plans to acquire the kind of voting equipment (e.g., optical scan and DRE products) that lends itself to integration. For example, on the basis of our local jurisdiction survey, we estimate that at least one-fifth of jurisdictions plan to acquire DRE or optical scan equipment before the 2006 general election, and officials from 2 jurisdictions we visited who used DRE equipment told us that their state planned to purchase electronic poll books for its precincts to use during the 2006 elections to electronically link its voter registration system with its voting systems. 
It is unclear if and when this migration to more technology-based voting methods will produce more integrated election system environments. However, suitable standards and guidance for these interconnected components and systems—some of which remain to be developed—could facilitate the development, testing, operational management, and maintenance of components and systems, thereby maximizing the benefits of current and emerging election technologies and achieving states’ and local jurisdictions’ goals for performance and security. The challenge inherent in such a dynamic environment is to update system standards so that emerging technical, security, and reliability interactions are systematically addressed. The technology of the voting environment can be characterized as varied and evolving, according to our 2005 state survey results and local jurisdiction survey estimates. We estimate on the basis of our local jurisdiction survey that the predominant voting methods most often used for the 2004 general election by large jurisdictions were DRE and precinct count optical scan, while medium jurisdictions most often used precinct count optical scan and small jurisdictions most often used paper ballot. Two key patterns emerged in the use of voting methods between the 2000 and 2004 general elections. First, we estimate that the percentage of large jurisdictions using DREs doubled from 15 percent in the 2000 general election to 30 percent in 2004. The predominant voting method for large jurisdictions changed from precinct count optical scan in 2000 to both DRE and precinct count optical scan in 2004. In contrast, we estimate that the predominant voting methods remained the same for small and medium jurisdictions (paper ballots and precinct count optical scan, respectively) from 2000 to 2004. 
Furthermore, on the basis of our local jurisdiction survey, we estimate that at least one-fifth of jurisdictions plan to acquire DRE or optical scan equipment before the 2006 general election. Second, in response to our state survey, 9 states reported that they eliminated the lever machine and punch card voting methods for the 2004 general election. In addition, 18 other states plan to eliminate lever or punch card voting methods for the 2006 general election. This greater state involvement in jurisdictions’ choice of voting methods, the availability of federal funding to replace lever and punch card voting equipment, and certain HAVA requirements—among other factors—are likely influences on the adoption of DRE and optical scan voting methods. Since the November 2000 general election, the DRE voting method has become more widely used in large jurisdictions, according to our local jurisdiction 2005 survey. During the same period, states’ reported use of lever machine and punch card voting methods has decreased, according to responses to our 2005 state survey. Our state and local jurisdiction surveys also indicate plans for changes to voting technologies for the 2006 general election. Overall, the estimated percentages of predominant voting methods used by local jurisdictions in the 2000 and 2004 general elections did not change appreciably. In particular, from our local jurisdiction survey, we estimate that the mix of predominant voting methods used in the November 2000 general election was 5 percent DRE, 21 percent central count optical scan, 26 percent precinct count optical scan, 5 percent central count punch card, 2 percent precinct count punch card, 8 percent lever, and 31 percent paper.
In comparison, we estimate that the mix for the November 2004 general election (in the same order) was 7 percent DRE, 21 percent central count optical scan, 30 percent precinct count optical scan, 2 percent central count punch card, 2 percent precinct count punch card, 7 percent lever, and 30 percent paper. Figure 54 compares these percentage changes. According to our local jurisdiction survey, there may have been a small shift away from punch card and lever machine voting methods (an estimated loss of 3 percent and 1 percent of jurisdictions, respectively) and an increase in optical scan and DRE voting equipment (an estimated gain of 5 percent and 2 percent of jurisdictions, respectively) for the 2004 general election. However, these differences are not statistically significant. During the same time frame, we estimate that 16 percent of jurisdictions acquired new voting equipment through their own purchases or leases and 15 percent of jurisdictions through purchases or leases by their state. Thus, the new voting equipment acquired by many jurisdictions since 2000 did not substantively affect the predominant voting methods that were already in use. One notable change did occur, however, in the use of predominant voting methods in the 2000 and 2004 general elections. The percentage of large jurisdictions using DREs doubled (estimated at 15 percent in 2000 and 30 percent in 2004)—an increase that is statistically significant. This increase in the use of DREs changed the predominant voting method most often used for large jurisdictions, which was precinct count optical scan in 2000, to both DRE and precinct count optical scan in 2004. A smaller increase in the use of DREs among medium jurisdictions (from an estimated 13 percent in 2000 to 20 percent in 2004) is not statistically significant, and there was virtually no change in DRE use among small jurisdictions (an estimated 1 percent for both elections).
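Statements above about whether a change in an estimated percentage is statistically significant can be illustrated with a standard two-proportion z-test. The sketch below uses only the Python standard library and hypothetical sample sizes (200 large and 100 medium jurisdictions per survey wave), since the report's actual survey counts are not given here; the function name is ours.

```python
from math import erfc, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions.

    x1 of n1 and x2 of n2 are counts out of the two sample sizes;
    returns (z statistic, two-sided p-value).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # P(|Z| > |z|) for a standard normal equals erfc(|z| / sqrt(2)).
    return z, erfc(abs(z) / sqrt(2))

# Large jurisdictions: DRE use of 15% vs. 30%, assuming 200 sampled
# in each survey wave (a hypothetical count).
_, p = two_proportion_z_test(30, 200, 60, 200)
print(p < 0.05)  # True: the doubling is significant at these sizes

# Medium jurisdictions: 13% vs. 20%, assuming 100 sampled each wave.
_, p = two_proportion_z_test(13, 100, 20, 100)
print(p < 0.05)  # False: not significant at this sample size
```

GAO's estimates come from a complex survey design, so its actual significance tests would differ in detail; the sketch only illustrates why a doubling in a larger sample can be significant while a smaller shift in a smaller sample is not.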
In contrast, the use of paper ballots as a predominant voting method did not appreciably change between the 2000 and 2004 general elections (with overall use at 31 percent in 2000 and 30 percent in 2004). Small jurisdictions were the major contributors to this steady use of paper ballots (estimated at 43 percent in 2000 and 41 percent in 2004); medium jurisdictions were minor contributors (3 percent for each election). (No large jurisdictions used paper ballots as their predominant voting method for either of these elections.) We also estimate that use of precinct count optical scan as the predominant voting method for medium jurisdictions did not change appreciably between the 2000 and 2004 elections (estimated at 35 percent in 2000 and 39 percent in 2004). Figure 55 shows the estimated use of predominant voting methods for small, medium, and large jurisdictions in the 2004 general election. The more widespread adoption of DREs by large jurisdictions was consistent with their greater proportion among jurisdictions that acquired voting equipment since 2000. According to our local jurisdiction survey, we estimate that 37 percent of large jurisdictions bought or leased new voting equipment since 2000, compared with 21 percent of medium jurisdictions and 12 percent of small jurisdictions, where the differences between large jurisdictions and both medium and small jurisdictions are statistically significant. Furthermore, on the basis of our local jurisdiction survey, we estimate that at least one-fifth of jurisdictions plan to acquire DRE or optical scan equipment before the 2006 general election.
Both large and medium jurisdictions are more likely to have plans to acquire DREs before the November 2006 general election (estimated at 34 percent each) than small jurisdictions (estimated at 13 percent), while small jurisdictions are more likely to have plans to acquire precinct count optical scan voting equipment (estimated at 28 percent) than medium or large jurisdictions (estimated at 17 percent and 15 percent, respectively). In general, fewer jurisdictions expected to acquire central count optical scan voting equipment than the other two voting methods, although the differences were not statistically significant. The percentages of jurisdictions planning to acquire the newer voting systems before the next general election are shown in figure 56 by the size of jurisdiction. Another interesting pattern emerged in voting methods between November 2000 and November 2004 at the statewide level. Thirty-seven states reported that at least 1 jurisdiction used lever machine or punch card voting equipment for the November 2000 general election. By the time of the November 2004 general election, the number of states that continued to employ these voting methods decreased to 28. Specifically, our state survey results show that 9 states reported that they completed replacement of all their punch card or lever voting equipment before the November 2004 general election, and 4 other states reported that they completed their replacements since the 2004 election. Of the remaining 24 states that reported using the punch card and lever methods in 2000 but had not yet replaced them at the time of our survey, 18 reported that they planned to replace all punch card and lever voting equipment by the November 2006 general election, while 3 planned to replace a portion of their equipment by then. One state reported no replacement plans prior to the November 2006 general election. Figure 57 summarizes the states’ progress and plans for replacing punch card and lever voting equipment. 
Our local jurisdiction survey provided insight into jurisdictions’ plans for acquiring technology-based voting methods and the time frames for executing these plans, which may increase the predominance of these methods in future elections. Specifically, we estimate that 25 percent of local jurisdictions are planning to acquire precinct count optical scan machines by the November 2006 general election, 19 percent expect to acquire DREs by then, and about 7 percent plan to acquire central count optical scan equipment before that election. In addition, we estimate that between 4 and 10 percent of local jurisdictions had plans to acquire additional equipment in each of these voting methods but had not set a target date for doing so at the time of our survey. During visits to election jurisdictions across the country, local election officials explained some of their motivations behind plans to acquire DRE or optical scan voting equipment. For example, election officials in 6 jurisdictions cited HAVA as the reason for purchasing new DRE equipment, particularly HAVA’s requirement that each voting place have at least one voting method that is accessible to persons with disabilities, as we discussed earlier in chapter 4. More specifically, officials in 1 large jurisdiction in Connecticut said that they would evaluate the use of DREs to meet HAVA accessibility requirements before deciding whether to purchase more DREs in time for the November 2006 general election. Election officials from 5 other jurisdictions stated that they planned to purchase new voting equipment to provide a VVPT, a requirement levied by 3 of the 14 states we visited (Colorado, Nevada, and New Mexico). Officials from 5 other jurisdictions said that they expected to acquire new voting equipment but did not give a reason and, in some cases, did not yet know what type of equipment they would obtain. 
Officials in jurisdictions that did not plan to purchase new voting equipment told us that their existing equipment was sufficient or that budget constraints prevented the acquisition of new equipment, among other reasons. As in the 2000 general election, some jurisdictions used multiple voting methods to support the 2004 general election, and some of these methods were more widely used than others for particular types of voting. In our October 2001 comprehensive report on election processes nationwide, we reported that 5 percent of jurisdictions used more than one voting method. On the basis of our 2005 local jurisdiction survey, we estimate that 21 percent of jurisdictions used more than one voting method in the November 2004 general election, with the most common combination of methods being central count optical scan with paper ballot (estimated to be 5 percent of jurisdictions). Other common combinations in 2004 were lever machine with paper ballot (4 percent) and DRE with paper ballot (3 percent). DRE with central count optical scan was one of numerous other combinations used by 2 percent or less of local jurisdictions. Figure 58 shows the estimated proportion of jurisdictions with the most prevalent single and combination voting methods. The specific mix of voting methods used can also be viewed with respect to particular types of voting (e.g., absentee, early, provisional) that were supported in the 2004 election. In this regard, some voting methods were applied to a particular type of voting more frequently than others. We estimate that paper ballot was the most widely used voting method for absentee voting (36 percent of jurisdictions), provisional voting (18 percent), and early voting (8 percent). Precinct count optical scan (shown in fig.
59) was generally the second most widely used voting method for these types of voting (24 percent of jurisdictions for absentee, 10 percent for provisional, and 5 percent for early voting), while central count optical scan was the third most widely used method (20 percent of jurisdictions for absentee, 9 percent for provisional, and 5 percent for early voting). Most states have generally exercised influence over the voting methods used by their respective elections jurisdictions through a range of approaches. In particular, in response to our state survey, a majority of states (32) and the District of Columbia said that they restricted the voting methods employed by local jurisdictions in the 2004 election either by requiring the use of one specific method (10 states and the District of Columbia) or providing a list of approved voting methods for the jurisdiction to select from (22 states). An alternate approach reported by 10 states was to require local jurisdictions to obtain state approval when selecting a voting method. The remaining 8 states said that local jurisdictions chose the voting method they used without any state involvement. In addition to affecting the choice of voting methods, 16 states and the District of Columbia reported that they were involved to some extent in local jurisdiction efforts to acquire voting systems, components, and services. For example, 1 state reported that it evaluated voting equipment options and vendors, and then contracted with a single vendor to supply voting equipment for all jurisdictions in the state. Jurisdictions within this state then had the option of purchasing additional voting equipment from this vendor, as needed. The top map of figure 60 shows the role of each state in the selection of specific voting methods for jurisdictions in the 2004 general election.
Responses to our state survey indicate that state influence over the voting methods to be used in the November 2006 general election will continue to increase. Four additional states planned to require the use of a single voting method statewide, which will bring the total number of states doing so to 14, and the District of Columbia will do so as well. Also, 5 additional states reported that they will require local jurisdictions to select a voting method or methods from a state-approved list, bringing this total to 27; 8 states intended to continue to allow local jurisdictions to select their voting methods with state approval. Only 1 state was not expecting to be involved in decisions on voting methods for its jurisdictions for 2006. The bottom map of figure 60 shows the role of each state in the selection of specific voting methods for jurisdictions in the 2006 general election. Consistent with state survey responses indicating states' contributions to local jurisdictions' selection of voting methods, our local jurisdiction survey showed that state requirements or certification of the equipment was one of the most frequent factors influencing the estimated 16 percent of local jurisdictions that bought or leased new voting equipment since the November 2000 general election (cited by an estimated 83 percent of those jurisdictions). Other widely influential factors included ease of equipment use (91 percent), vendor demonstrations (72 percent), and affordability (68 percent). In contrast, local requirements and HAVA funding were less influential factors for local jurisdictions' acquisition of voting equipment (44 percent and 45 percent of jurisdictions, respectively). (See fig. 61.) HAVA has also influenced state and local decisions regarding particular voting methods through mechanisms to encourage the adoption of technology.
Among other things, HAVA provided funds to states to replace punch card and lever voting equipment with other voting methods (Section 102 funds). During fiscal year 2003, the General Services Administration (GSA) reported distributing about $300 million to 30 states that applied for these funds. Figure 62 depicts an overview of the funds distributed to states specifically to replace lever machines and punch card voting equipment. (Fig. 57 presented an overview of states’ progress in replacing lever and punch card voting equipment.) In responding to our state survey, 24 of the 30 states reported that they had invested at least a portion of these funds to replace lever or punch card voting equipment as of August 1, 2005. In addition to the funding that HAVA earmarked for voting equipment replacement, states could also apply for other HAVA funds that could be used for multiple purposes, including replacement or upgrade of voting systems (Section 101 funds). In its 2004 annual report, EAC reported that almost $344 million had been distributed to each of the 50 states and the District of Columbia under this multiple purpose funding category. In all, 44 states and the District of Columbia reported in our state survey that they had spent or obligated funds from one or both of these HAVA funding sources in order to improve, acquire, lease, modify, or replace voting systems and related technology. EAC requires states to submit detailed annual reports on the use of those funds but has not yet compiled data from the state reports about spending for voting equipment covered in HAVA Section 101. Besides authorizing funding for changes to voting methods, HAVA also has the potential to influence voting methods through new requirements for the usability and accountability of voting systems. 
Among other things, HAVA requires that voting systems used in federal elections provide voters with ballot verification and correction capabilities by January 1, 2006, including the opportunity to verify their ballots in a private and independent manner before they are cast; the ability to change their ballots or correct any error in a private and independent manner before the ballots are cast and counted; and the capability to both notify the voter whenever more than one candidate has been selected for a single office and correct the ballots. HAVA also requires voting equipment to generate a permanent paper record with manual audit capacity as an official record of the election. Our October 2001 report on election processes described how voting methods varied in their ability to support features such as error identification and correction for voters. With regard to minimizing voter error at the polls, our local jurisdiction survey for the 2004 general election found that, for instance, voters were provided the opportunity to correct a ballot or exchange a spoiled ballot for a new one in most jurisdictions, and such capabilities were largely available for all voting methods. Our estimates of the availability of ballot correction capabilities range from 100 percent (for jurisdictions whose predominant voting method was central count punch cards) to 70 percent (for jurisdictions predominantly using DREs). However, the differences among these voting methods were not statistically significant. Figure 63 shows one approach that allows voters to verify and correct their ballots using a particular voting method (DRE). With regard to voting equipment that generated a permanent paper record with a manual audit capability for election audits in the 2004 general election (including solutions such as VVPT), we estimate that few jurisdictions that used DREs had this capability. 
Specifically, from our local jurisdiction survey, a small proportion of jurisdictions that used DREs for the 2004 election had manual audit capabilities such as VVPT (estimated at 8 percent of DRE jurisdictions) or printing of ballot images (11 percent of DRE jurisdictions). An estimated 52 percent of jurisdictions using DREs had equipment that produced an internal paper record that was not voter-verifiable. With this limited implementation of HAVA-related capabilities in the 2004 general election, it appears that most of the voting system and election process changes to comply with these specific HAVA usability and accountability requirements will need to be satisfied by jurisdictions for the 2006 general election. Voting system performance can be viewed in terms of accuracy, reliability, and efficiency. Accuracy refers to how frequently the equipment completely and correctly records and counts votes; reliability refers to a system’s ability to perform as intended, regardless of circumstances; and efficiency refers to how quickly a given vote can be cast and counted. Performance in each of these areas depends not only on how well a given voting system was designed and developed, but also on the procedures governing its operation and maintenance and the people who use and operate it. Thus, it is important that system performance be measured during an election when the system is being used and operated according to defined procedures by voters and election workers. As we have previously reported in our October 2001 report on election processes, measuring how well voting systems perform during a given election allows local election officials to better position themselves for ensuring that elections are conducted effectively and efficiently. Such measurement also provides the basis for knowing where performance needs, requirements, and expectations are not being met so that timely corrective action can be taken. 
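The three performance dimensions described above lend themselves to simple quantitative definitions. The sketch below is illustrative only; the log structure and field names are hypothetical assumptions for demonstration, not measures drawn from our survey instrument, and show one way a jurisdiction might compute each measure for a piece of voting equipment.

```python
from dataclasses import dataclass

# Hypothetical per-machine election-day log; all field names are
# illustrative assumptions, not actual survey or report data elements.
@dataclass
class MachineLog:
    votes_cast: int
    votes_recorded_correctly: int
    minutes_in_service: float
    minutes_down: float
    total_casting_seconds: float  # summed time voters spent casting ballots

def accuracy(log: MachineLog) -> float:
    """Fraction of votes completely and correctly recorded and counted."""
    return log.votes_recorded_correctly / log.votes_cast

def reliability(log: MachineLog) -> float:
    """Fraction of scheduled service time the equipment performed as intended."""
    return 1 - log.minutes_down / log.minutes_in_service

def efficiency(log: MachineLog) -> float:
    """Average seconds for a voter to cast one ballot (lower is better)."""
    return log.total_casting_seconds / log.votes_cast

log = MachineLog(votes_cast=1000, votes_recorded_correctly=998,
                 minutes_in_service=780, minutes_down=13,
                 total_casting_seconds=120_000)
print(accuracy(log), reliability(log), efficiency(log))  # 0.998, ~0.983, 120.0
```

As the report notes, such measures are only meaningful when collected during an actual election, with real voters and election workers operating the equipment under defined procedures.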
HAVA recognized the importance of voting system performance by specifying requirements for error rates in voting systems and providing for updates to the federal voting system standards, including the performance components of those standards. Moreover, according to our local jurisdiction survey, most local jurisdictions adopted performance standards for the 2004 general election—usually standards selected by their respective states. As was the case for the 2000 general election, jurisdictions collected various types of voting system performance measures for the 2004 general election, although some types of measures were collected by fewer jurisdictions than others—in part because they were not well suited to particular voting methods. Furthermore, from our local jurisdiction survey, we estimate that the vast majority of all jurisdictions were very satisfied or satisfied with their systems’ performance during the 2004 general election, even though performance data may not have been collected to an extent that would provide firm support for these views. In our October 2001 report on voting equipment standards, we reported that the national voluntary voting system standards being used by some states and local jurisdictions at that time were originally approved in 1990 and were thus out of date. Among other things, these standards identified minimum functional and performance thresholds for voting systems in terms of accuracy, reliability, and efficiency. 
In 2002, the Federal Election Commission updated these standards and, in doing so, provided new or enhanced coverage of certain performance requirements for, among other things, voting system components that define, develop, and maintain election databases, perform election definition and setup functions, format ballots, count votes, consolidate and report results, and maintain records to support vote recounts; direct feedback to the voter that indicates when an undervote or overvote is detected in DRE and paper-based voting systems that encompass punch cards and optical scan; system standards to meet the needs of voters with disabilities, including specific standards for DREs; and strengthened election record requirements to address a range of election management functions, including functions such as ballot definition and election programming. HAVA further focused attention on voting system performance by establishing a performance requirement for systems used in elections for federal offices and by providing for updates to federal voting system standards. Specifically, HAVA required that voting systems used in federal elections comply with error rate standards specified in the 2002 federal voting system standards. Under these standards, the maximum acceptable error rate during testing is 1 in 500,000 ballot positions. In addition, HAVA directed EAC to revise the voluntary national voting system standards, and to test, certify, decertify, and recertify voting system hardware and software with respect to national voting system standards using accredited testing laboratories. On the basis of our local jurisdiction survey, we estimate that the vast majority of jurisdictions that used some type of automated voting equipment on Election Day generally established written standards for the performance of their voting equipment for the November 2004 general election.
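The error rate ceiling referenced above (1 error per 500,000 ballot positions under the 2002 federal voting system standards) can be expressed as a simple compliance check. The function name and interface below are hypothetical illustrations, not part of any actual certification test suite:

```python
# Maximum acceptable error rate during testing under the 2002 federal
# voting system standards, as referenced by HAVA.
MAX_ERROR_RATE = 1 / 500_000

def within_error_rate(errors_observed: int, ballot_positions_tested: int) -> bool:
    """Return True if the observed testing error rate does not exceed the
    1-in-500,000 ballot position ceiling. Hypothetical sketch only."""
    if ballot_positions_tested <= 0:
        raise ValueError("ballot_positions_tested must be positive")
    return errors_observed / ballot_positions_tested <= MAX_ERROR_RATE

print(within_error_rate(1, 1_000_000))  # True: 1 error in 1,000,000 positions
print(within_error_rate(3, 1_000_000))  # False: rate exceeds 1 in 500,000
```

Note that the standard is stated per ballot position, not per ballot; a single ballot typically contains many positions, so the allowable per-ballot error rate is correspondingly stricter.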
Of these, most jurisdictions (an estimated 77 percent) had adopted their state's standards or requirements pertaining to voting system performance, although a few had adopted performance standards from a source other than their state (10 percent) or developed their own (8 percent). The apparently high adoption rate for standards among states and local jurisdictions is important because it indicates broad acceptance of a basic management tool needed for systematic performance measurement and evaluation. Consistent with our results on voting system performance measurement from our October 2001 report on election processes, estimates from our local jurisdiction survey indicated that jurisdictions used several specific measures that could be generally grouped into the areas of accuracy, reliability, and efficiency to assess the performance of their voting systems for the 2004 general election. However, jurisdictions measured how well their systems actually performed in the 2004 election to varying degrees. In the discussion below, we compare jurisdictions' collection of selected information on voting system performance for the 2000 and 2004 general elections, and then examine jurisdictions' performance monitoring in each of the three performance areas. On the basis of our local jurisdiction surveys for the 2000 and 2004 elections, we estimate that about 50 percent of jurisdictions collected performance information in both elections using three measures—accuracy, undervotes, and overvotes. The percentage of jurisdictions that collected information on a fourth performance measure—average time to vote—was much smaller (estimated at 10 percent or less). The differences between estimates for the two elections are not statistically significant. Figure 64 shows the percentages of jurisdictions that collected these performance measures for the 2000 and 2004 general elections.
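Two of the measures discussed here, undervotes and overvotes, can be tallied mechanically from ballot records. The sketch below is illustrative only; the data layout (a ballot as a mapping from contest to selections) and the allowed-choices map are assumptions for demonstration, not the actual record formats used by jurisdictions. It also reflects why some officials considered overvote counts irrelevant for lever machines and DREs, which prevent overvoting by design:

```python
def tally_under_over_votes(ballots, choices_allowed):
    """Count undervotes and overvotes per contest.

    ballots: list of dicts mapping contest name -> list of selections made.
    choices_allowed: dict mapping contest name -> number of permitted choices.
    An undervote selects fewer candidates than permitted; an overvote, more.
    Hypothetical sketch; real ballot records vary by voting method.
    """
    counts = {c: {"undervotes": 0, "overvotes": 0} for c in choices_allowed}
    for ballot in ballots:
        for contest, allowed in choices_allowed.items():
            selections = len(ballot.get(contest, []))
            if selections < allowed:
                counts[contest]["undervotes"] += 1  # fewer choices than permitted
            elif selections > allowed:
                counts[contest]["overvotes"] += 1   # more choices than permitted
    return counts

ballots = [
    {"president": ["A"], "council": ["X", "Y"]},       # valid in both contests
    {"president": [], "council": ["X"]},               # undervotes both contests
    {"president": ["A", "B"], "council": ["X", "Y"]},  # overvote for president
]
result = tally_under_over_votes(ballots, {"president": 1, "council": 2})
# result["president"] == {"undervotes": 1, "overvotes": 1}
```

As officials told us, an undervote is not necessarily an error: many voters deliberately skip contests, which limits the value of this count as a pure accuracy measure.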
In the area of accuracy, we estimate that 42 percent of jurisdictions overall monitored the accuracy of voting equipment in the 2004 general election. Other widely used measures of accuracy in the 2004 general election were spoiled ballots (estimated at 50 percent of jurisdictions), undervotes (50 percent of jurisdictions), and overvotes (49 percent of jurisdictions). During our visits to local jurisdictions, election officials in several jurisdictions told us that measuring overvotes was not a relevant performance indicator for jurisdictions using lever machines and DREs because neither permits overvoting. Election officials in several local jurisdictions we visited also told us that undervotes were not a meaningful metric because most voters focused on a limited range of issues or candidates and thus frequently chose not to vote on all contests. Jurisdictions’ collection of the accuracy measures we studied for the 2004 general election varied according to jurisdiction size, with small jurisdictions generally less likely to collect these measures than other jurisdiction sizes. Both large jurisdictions (an estimated 66 percent) and medium jurisdictions (54 percent) were significantly more likely than small jurisdictions (31 percent) to collect data on vote count accuracy. In addition, large jurisdictions (65 percent) were significantly more likely than small jurisdictions (47 percent) to collect data on undervotes. (See fig. 65.) This disparity may be due to the proportion of smaller jurisdictions that use paper ballots and for whom collection of these data would be a manual, time-consuming process. In the area of reliability, we estimate that 15 percent of jurisdictions measured the reliability of their voting equipment in terms of pieces of equipment that failed, and 11 percent measured equipment downtime. 
As with accuracy, a higher percentage of large and medium jurisdictions collected such reliability data than small jurisdictions, and in the case of equipment failures, there were statistically significant differences in the collection of this information among different sizes of jurisdictions. (See fig. 66.) Importantly, an estimated 55 percent of all jurisdictions kept a written record of issues and problems that occurred on Election Day, which could be a potential source of reliability data. Collection of reliability data for automated voting equipment was also related to the predominant voting method used by a jurisdiction, with jurisdictions that predominantly used DREs more likely to collect reliability data than those that used optical scan voting methods. An estimated 45 percent of jurisdictions whose predominant method was DREs collected information on the number of pieces of voting equipment that failed. The next most frequently collected information on machine failures was for precinct count optical scan systems (an estimated 23 percent of jurisdictions) and central count optical scan systems (an estimated 10 percent). The differences in data collection on equipment failures among jurisdictions that predominantly used DREs and those that used precinct count optical scan or central count optical scan voting methods are statistically significant. (See fig. 67.) In the area of efficiency, we estimate that 13 percent of jurisdictions measured their voting system’s speed of counting votes, 17 percent measured the time it took for election workers to set up equipment, and 4 percent measured the average length of time it took for voters to cast ballots on Election Day. Large jurisdictions (34 percent) were significantly more likely than were both medium jurisdictions (19 percent) and small jurisdictions (9 percent) to collect information on counting speed. There were no significant differences for other efficiency measures by jurisdiction size. (See fig. 68.) 
It is worth noting that for several types of performance measures in our local jurisdiction survey, jurisdiction size was a factor in whether system performance information was collected. Generally, large jurisdictions were most likely to record voting system performance and small jurisdictions were least likely, with medium jurisdictions in between. Moreover, large jurisdictions were more likely to keep a written record of issues or problems that occurred on Election Day. Specifically, on the basis of our local jurisdiction survey, we estimate that 79 percent of large jurisdictions kept such records, compared with 59 percent of medium jurisdictions and 52 percent of small jurisdictions. The differences between large jurisdictions and both medium and small jurisdictions are statistically significant. The responsibilities for monitoring or reporting voting system performance most often rested with local jurisdictions. On the basis of our local jurisdiction survey, we estimate that 83 percent of local jurisdictions had local officials responsible for performance monitoring or reporting, while states or other organizations (such as independent consultants or vendors) held such responsibilities in 11 percent and 13 percent of jurisdictions, respectively. Information obtained during our visits to local election jurisdictions was generally consistent with the above estimates from our local jurisdiction survey. For example, election officials in the 28 jurisdictions we visited most frequently cited number of undervotes (14 jurisdictions), overvotes (10 jurisdictions), and equipment failures (10 jurisdictions) as types of performance metrics collected. 
Another collected metric (cited by election officials in 6 jurisdictions we visited) was equipment speed, measured in terms of how fast the voting equipment downloaded vote totals or transmitted totals to its central count location, and the time required to cast a vote (reported by election officials in 4 jurisdictions, although officials in 2 of these 4 jurisdictions limited their measurements to early voting). Election officials in some jurisdictions also told us that they collected comments from poll workers and voters on the efficiency of the equipment. For instance, an election official in a large jurisdiction in Georgia told us that poll workers commented that it took 20 minutes to vote using the voting equipment's audio feature. In addition, election officials in several jurisdictions that we visited told us that they had established performance management programs for their voting systems. For example, election officials in 1 jurisdiction reported that they collected data on the time it took to vote in order to better allocate their voting equipment to various locations. Officials in a large jurisdiction in Kansas said they had conducted a survey of voters concerning their satisfaction with the ease of use of voting equipment during the 2004 general election and determined that voters were very satisfied. In our October 2001 report on election processes, we reported that 96 percent of local jurisdictions nationwide were satisfied with the performance of the voting equipment during the November 2000 general election. On the basis of our local jurisdiction survey for the 2004 general election, we estimate that election officials were generally satisfied with their voting system performance.
Estimated satisfaction varied for specific areas of voting system performance, ranging from relatively high levels for accuracy (78 percent), speed of vote counting (73 percent), time to set up equipment (63 percent), and number of spoiled or ruined ballots (61 percent), to relatively low levels for equipment failures (37 percent), and downtime (36 percent). Some of these measures may not be applicable to all jurisdictions, such as those using only hand-counted paper ballots. When jurisdictions that used only hand-counted paper ballots were excluded from our results, satisfaction levels were higher in all performance areas—accuracy (86 percent), speed of vote counting (83 percent), time to set up equipment (76 percent), number of spoiled ballots (68 percent), equipment failures (54 percent), and downtime (52 percent). However, even with the exclusion of paper ballot jurisdictions, “not applicable” responses were often selected by jurisdictions in the areas of equipment failures (41 percent not applicable) and downtime (43 percent not applicable). Also on the basis of our local jurisdiction survey, for five of six satisfaction measures, we estimate that medium and large jurisdictions were satisfied or very satisfied with their voting systems more frequently than small jurisdictions and that most of these differences are statistically significant. These ratings may be related to the widespread use of paper ballots by small jurisdictions, where this voting method was predominant in an estimated 41 percent of jurisdictions. Figure 69 shows the frequency of satisfaction in each of six performance areas for large, medium, and small jurisdictions. The estimated high satisfaction levels demonstrated across different voting system performance areas and jurisdiction sizes contrast with our lower estimates of the performance measures that were collected for the 2004 general election. 
Although the reasons for moderate collection levels for performance measures are unclear, jurisdictions that may not have collected performance data or may have considered such information not applicable to their situation may lack sufficient insight into their system operations to adequately support their satisfaction in the variety of performance areas we surveyed. Local election officials at most of the 28 jurisdictions we visited also expressed satisfaction with the performance of their voting systems or methods. For example, election officials in several jurisdictions using optical scan systems stated that they were pleased with their equipment because it produced a paper trail and permitted fast processing. Officials in 1 large jurisdiction in Florida added that their use of the same equipment over several elections made it easy for voters to use the equipment in both 2000 and 2004. Election officials in several other jurisdictions using DREs told us that their equipment was easy to use and provided results that were accurate and timely. Officials in 1 large jurisdiction in New Jersey reported that, in contrast to paper ballots, DREs do not require poll workers to interpret a voter's ballot. Election officials in a large Connecticut jurisdiction using lever machines said that voters were happy with the equipment and that it had worked well for over 60 years. They emphasized that the simplicity and transparency of the equipment's counting mechanisms gave voters confidence that their votes would be counted correctly. Election officials in a small New Hampshire jurisdiction using paper ballots reported that they had used the same hand-counted paper ballot system for decades and that it had been very cost-effective for the small population of voters in the jurisdiction. Overall, election officials in few of the 28 jurisdictions that we visited reported substantive performance issues, such as overvoting, undervoting, or equipment failure.
Although the estimated level of satisfaction with voting equipment performance in the 2004 general election was high overall, some dissatisfaction existed. On the basis of our local jurisdiction survey, we estimate that between 1 and 4 percent of jurisdictions were dissatisfied or very dissatisfied with their voting systems in the 2004 general election for the six performance areas of our survey. Our local jurisdiction survey provided additional insight into the role of voting equipment in jurisdictions’ dissatisfaction ratings. Of almost 300 responses to our open-ended question about the issue or problem that occurred most frequently on Election Day, November 2004, fewer than 20 responses were specifically related to voting equipment. The most frequent reason for voting system dissatisfaction was voting equipment malfunction. Ballot errors related to voting equipment were much less frequently mentioned. Although such problems were rarely mentioned by election officials during our visits to local jurisdictions, some did describe a few reasons for dissatisfaction with voting equipment, including the additional time required to count ballots using DREs versus the optical scan equipment previously used, the perceived lower reliability and greater failure rates of DREs over the voting equipment used in the past, accuracy problems with DRE computer programs, and difficulty in first-time poll worker operation and voter use of DREs. Election officials in a few jurisdictions we visited noted situations that required considerable effort to resolve. For example, as mentioned in our discussion of vote counting in chapter 6, election officials in a North Carolina jurisdiction told us that 4,235 ballots were lost by one of the DREs used for early voting because the software manufacturer had not installed an upgrade that would have allowed the machine to record up to 10,000 ballots rather than its original limit of 3,500 ballots. 
The machine continued to show the number of people who voted on the machine after 3,500 ballots had been cast, but did not store the results of their ballots. As a result, the jurisdiction switched to hand-counted paper ballots for elections after the 2004 general election until its state can approve a new automated system for use. Given the real and potential impacts of situations where dissatisfaction was reported, systematic collection and analysis of performance information may help provide election officials with objective support for decisions to improve the operation and upgrade of these systems. Having secure voting systems is essential to maintaining public confidence in the election process, and accomplishing this is a shared responsibility among federal, state, and local jurisdiction authorities. Among other things, voting system security involves ensuring that technical security controls embedded in voting equipment operate as intended, as well as ensuring that security policies and procedures governing the testing, operation, and use of the systems are properly defined and implemented by state and local election officials. Our October 2001 report on election processes identified voting system security challenges facing local jurisdictions, such as consistent application of controls and adequacy of resources. HAVA recognized some of these challenges by requiring specific system security controls and providing improved security management guidance. Nevertheless, while we estimate from our local survey that most jurisdictions have assigned responsibility for voting system security to individuals and implemented certain security controls, the nature and extent of their respective security efforts and activities varied widely. 
In particular, according to our state survey, estimates from our local jurisdiction survey, and visits to jurisdictions, there are differences across jurisdictions in the (1) adoption of system security standards, with some states requiring jurisdictions to use outdated standards for voting systems; (2) reported implementation of system security controls; and (3) testing performed to ensure that security controls are functioning properly. For instance, we estimate on the basis of our local jurisdiction survey that at least 19 percent of local jurisdictions nationwide (excluding jurisdictions that reported using paper ballots) did not conduct security testing for the systems they used in the November 2004 general election. In addition, 27 states reported in our state survey that they are requiring jurisdictions to apply federal standards to voting systems used for the first time in the November 2006 general election that are outdated, unspecified, or entail multiple versions. This variability in implementation and testing of controls is generally consistent with what we reported for the 2000 general election. Moreover, our September 2005 report on the security and reliability of electronic voting highlighted substantial security issues and concerns for more modern electronic voting systems and reinforced the importance of effective security management. HAVA recognized the importance of effective voting system security through two primary mechanisms. First, it required voting systems to produce a permanent paper record that provides a manual review capability and constitutes the official record for recounts by January 1, 2006. The paper record can be compared with polling place records and voting system documentation to ensure that authorized ballots have been completely and accurately counted. Second, HAVA provided various means to assist states and localities in acquiring and operating secure voting systems. 
These include provisions for EAC to (1) update voting system standards, including standards for security; (2) establish processes for accrediting voting system testing laboratories and conducting tests of voting systems against the standards; and (3) create a process for federal certification of voting systems that undergo the testing process. In doing so, HAVA created tools and resources that states and local jurisdictions can leverage when, for example, acquiring systems from vendors, conducting system testing, and operating and auditing voting systems. However, delays in establishing EAC and commission funding challenges meant that the first update to the 2002 voluntary voting system standards, including its provisions for system security, was not approved until December 2005. Further, commission efforts to establish processes for accrediting testing laboratories, conducting testing, and certifying systems are still under way. As was the case for the November 2000 general election, the nature and extent of voting system security efforts and activities during the 2004 election varied among jurisdictions. Moreover, these efforts and activities do not in all cases reflect the use of recommended system security management practices and current voting system security standards. In our October 2001 report on election processes, we reported that jurisdictions had taken a number of steps to manage the security of their respective voting systems for the 2000 general election. In particular, we estimated that 89 percent of the local jurisdictions assigned responsibility for performing security-related functions to one or more individuals, and implemented some type of controls to protect their equipment during the election. Examples of implemented security controls included such physical controls as locks and surveillance, and such embedded controls as access restrictions and firewalls.
However, we also reported in 2001 that an estimated 40 percent of the jurisdictions had not assessed the security threats and risks on which their controls were based, and 19 percent had not reviewed the sufficiency of their security controls. Moreover, the nature of established controls varied by type of system, and these controls were not uniformly followed across jurisdictions. For the November 2004 general election, jurisdictions addressed system security to varying degrees and through various means. At the foundation of these approaches, responsibilities for voting system and network security were distributed among local officials, the state, and third parties (e.g., independent consultants and vendors) in varying proportions. On the basis of our 2005 local jurisdiction survey, we estimate that 90 percent of all jurisdictions (excluding those that used only hand-counted paper ballots on Election Day) specifically assigned responsibility for voting system security in the 2004 general election. We estimate that 67 percent of these local jurisdictions assigned responsibilities for voting system and network security to local election officials, 14 percent relied on state officials to perform these responsibilities, and 24 percent assigned them to third parties. Moreover, this distribution varied somewhat according to jurisdiction size, with large jurisdictions depending on local officials the most and medium jurisdictions depending on local officials the least. Figure 70 shows how voting system and network security responsibilities were distributed among various parties for each size of jurisdiction. On the basis of our visits to local jurisdictions, the types of system security responsibilities and the groups that performed them further demonstrate the variation among security approaches and controls applied to voting systems. 
Specifically, election officials in these jurisdictions were typically responsible for implementing security controls, state officials were usually involved with developing security policy and guidance and monitoring local jurisdictions’ implementation of security, and third parties performed tasks such as ensuring adequate security of voting equipment during transport or storage. Table 24 shows examples of security tasks and the parties that performed them as reported to us by election officials in the jurisdictions that we visited. Responses to our state survey showed that both states and third parties participated in security responsibilities related to monitoring and evaluating security and privacy controls. Although the most frequently cited party responsible for this area was local officials (identified by 38 states), just less than one-half of the states (22 states and the District of Columbia) reported that they had some level of responsibility for security monitoring and evaluation as well. In addition, 22 states responded that third parties (e.g., independent consultants or vendors) were involved in monitoring and evaluating controls. Overall, security monitoring and evaluation was performed by two or more entities in 26 of the states. The use of certain security controls was similarly varied. On the basis of our local jurisdiction survey, we estimate that 59 percent of jurisdictions used power or battery backup, 67 percent used system access controls, 91 percent used hardware locks and seals, and 52 percent used backup electronic storage for votes. We further estimate that 95 percent of jurisdictions used at least one of these controls, with hardware locks and seals being most consistently used across the automated voting methods associated with this survey question. 
Furthermore, we estimate that a lower percentage of small jurisdictions used power or battery backup and electronic backup storage of votes for their voting equipment than large or medium jurisdictions, and these differences are statistically significant in most cases. Figure 71 presents the use of various security controls by jurisdiction size. We estimate that a small percentage of local jurisdictions (10 percent) provided remote access to their voting systems for one or more categories of personnel—local election officials, state election officials, vendors, or other parties. Small jurisdictions, in particular, were less likely to provide remote access to their voting systems (estimated at 7 percent) than either medium jurisdictions (13 percent) or large jurisdictions (19 percent). The difference between small jurisdictions and large jurisdictions is statistically significant. For each category of personnel—local officials, state election officials, vendors, or other parties—7 to 8 percent of jurisdictions did not know if remote access was available to their systems, a situation that could increase the risk of unauthorized access to these systems. Some of the jurisdictions responding to this survey question described a variety of protections to mitigate the risk of unauthorized remote access, including locally controlled passwords, passwords that change for each access, and local control of communications connections. Among the jurisdictions that we visited, election officials reported that various security measures were in use during the 2004 general election to safeguard voting equipment, ballots, and votes before, during, and after the election. However, the measures were not uniformly reported by officials in these jurisdictions, and officials in most jurisdictions reported that they did not have a security plan to document these measures or other aspects of their security program. 
The security controls most frequently cited by officials for the jurisdictions that we visited were locked storage of voting equipment and ballots, and monitoring of voting equipment. Other security measures mentioned during our visits included testing voting equipment before, during, or after the election to ensure that the equipment was accurately tallying votes; planning and conducting training on security issues and procedures for elections personnel; and video surveillance of stored ballots and voting equipment. Table 25 summarizes the types and frequency of security measures reported by election officials in the jurisdictions we visited. Notwithstanding this range of reported security controls that were used in the 2004 general election by jurisdictions we visited, jurisdictions’ activities and efforts for managing voting system security were not always in line with recommended system security practices. Our research of recommended practices shows that effective system security management involves having, among other things, (1) defined policies governing such system controls as authorized functions and access, and documented procedures for secure normal operations and incident management; (2) documented plans for implementing policies and procedures; (3) verified implementation of technical and procedural controls designed to reduce the risk of disruption, destruction, or unauthorized modification of systems and their information; and (4) clearly assigned roles and responsibilities for system security. On the basis of our local jurisdiction survey, we estimate that 46 percent of election jurisdictions nationwide that used some type of automated voting method had written policies for voting system security and access in place for the November 2004 general election, while 45 percent had formal security procedures. 
Written security policies were more prevalent among large jurisdictions, an estimated 65 percent, compared to an estimated 52 percent of medium jurisdictions and an estimated 41 percent of small jurisdictions. The difference between large and small jurisdictions is statistically significant. More large and small jurisdictions had formal security procedures (an estimated 51 percent and 47 percent, respectively) than medium jurisdictions (an estimated 39 percent), although these differences are not statistically significant. Figure 72 shows the estimated percentages of jurisdictions with written security policies and procedures by jurisdiction size. In our earlier discussion of local survey responses related to counting votes in chapter 6, we estimated that many jurisdictions had written policies and procedures for ballot security in the 2004 general election. However, we estimate that up to one-fifth of jurisdictions did not have written policies and procedures uniformly in place, including policies and procedures for transporting unvoted and voted ballots or electronic memory, storing unvoted and voted ballots, and electronic transmission of voted ballots. The disparity in written policies and procedures was observed for electronic transmission of voted ballots for counting, where an estimated 18 percent of jurisdictions had such security management tools, compared with between 66 and 76 percent of jurisdictions for each of the other four types of ballot controls—a difference that is statistically significant but which may be linked to the percentage of jurisdictions that used paper ballot and older technologies in the 2004 general election. Yet we also found that an estimated 17 percent of jurisdictions whose predominant method was DRE had no policies or procedures for electronic transmission of voted ballots for counting. 
In addition, the differences in estimates of policies and procedures for electronic ballot transmission among jurisdictions whose predominant voting method was punch cards and those whose methods were DRE or optical scan are statistically significant. Figure 73 shows the variation in estimates of documented policies and procedures for electronically transmitting ballots among jurisdictions that used specific voting methods. Moreover, our visits to local jurisdictions found diverse approaches to documenting security policies and procedures. Election officials in 8 of the jurisdictions that we visited told us that they had written instructions for managing security aspects of their voting equipment and processes. However, some guidance we reviewed did not cover these topics. Election officials in some jurisdictions stated that their security measures were contained in the voting process documentation for the voting system or were covered in election worker training. For example, the hardware guide for the voting system used by some jurisdictions described the verification and authentication functions built into the system to secure vote counts during transmission of precinct results to the jurisdiction, as well as ballot creation and vote tabulation processes that incorporated security procedures. In contrast, several other jurisdictions that we visited had published detailed security policies and procedures for their voting systems that included, for example, network security policies for election tabulation, procedures for securing and protecting election equipment and software, testing voting equipment to ensure accurate recording of votes, and disaster recovery plans, and they provided these documents to GAO. Officials in several jurisdictions also described their steps to ensure that election workers had access to, and were trained in, the contents of the policies and procedures for securing ballots and voting equipment.
Information system security plans typically identify the responsibilities, management approach, and key controls to be implemented for an information system, based on an assessment of identified risks to the information. Election officials in 8 of the 28 jurisdictions that we visited told us that they had security plans in place for the November 2004 general election. Officials at 4 of the jurisdictions that we visited stated that they had security plans or plan components that were approved at the state level, and officials in 1 large jurisdiction in Nevada reported having a state statutory requirement for a voting system security plan. However, jurisdictions that employed advanced security technologies, such as encryption, in their systems did not always have a plan that would document how the election's people, processes, and technologies would work together to provide comprehensive protections. Moreover, the contents of plans we obtained from our visits to local jurisdictions varied widely. One of the jurisdiction security plans we examined covered most aspects of the voting process, from ballot preparation through recount, while another plan focused on the security of its vote-tallying system in a stand-alone environment. Two security plans covered several security topics, including risk assessment, physical and personnel controls, and incident response. Table 26 shows the variation in topics covered in the security plans we reviewed. Security testing is an important way to verify that system security controls have been implemented and are functioning properly. From our survey of state election officials, 17 states and the District of Columbia reported that they had conducted security testing of the voting systems used in the 2004 general election, and 7 other states reported that they required local jurisdictions to conduct such testing. The remaining 22 states said that they did not conduct or require system security testing.
(Three states reported that security testing was not applicable for their voting systems.) Moreover, from our local jurisdiction survey, we estimate that at least 19 percent of local jurisdictions nationwide (excluding jurisdictions that reported that they used paper ballots) did not conduct security testing for the systems they used in the November 2004 general election. Although jurisdiction size was not a factor in whether security testing was performed, the percentage of jurisdictions performing security testing was notably higher when the predominant voting method was DRE (63 percent) and lower for jurisdictions where the predominant method was central count optical scan (38 percent) or precinct count optical scan (45 percent). However, the difference in the percentages of jurisdictions performing security testing on DRE or central count optical scan is not statistically significant. Beyond jurisdictions’ efforts to verify implementation of voting system security controls, some states required that their voting systems be nationally qualified against the federal voluntary voting system standards, which include a security component. In particular, from our state survey, most states that used a new voting system for the first time in the November 2004 general election said that they required the system to go through qualification testing. For example, all 26 states that used DREs for the first time in the 2004 general election, as well as the District of Columbia, required qualification testing and approval by the National Association of State Election Directors (NASED). Similarly, of the 35 states and the District of Columbia that used optical scan systems for the first time in the 2004 general election, 31 reported that they required voting systems to be qualified. Nine of the 10 states that used new punch card systems for the first time in the 2004 general election also reported that they required voting systems to be qualified. 
States and jurisdictions are applying a variety of security standards to their voting systems, some of which are no longer current. Specifically, 44 states and the District of Columbia reported on our state survey that they were requiring local jurisdictions’ voting systems being used for the first time in the November 2006 general election to comply with voluntary federal voting system standards, which include security standards. However, they are not all using the same version of the voluntary standards. This is troublesome because the 2002 standards are more stringent than the 1990 standards in various areas, including security. For instance, the 2002 standards establish security requirements and acceptable levels of performance for the telecommunications components of voting systems, while the 1990 standards do not include detailed requirements for this control measure. According to our analysis of responses states reported in our state survey, 17 of the 44 states and the District of Columbia reported that their voting systems must comply solely with the 2002 standards that were developed and approved by the Federal Election Commission and later adopted by EAC. However, 27 other states are requiring their jurisdictions to apply federal standards to their new voting systems that are outdated, unspecified, or entail multiple versions. In the case of 5 of these 27 states where multiple versions of voluntary federal standards will be applied, one of the versions is the Voluntary Voting System Guidelines, which was approved by the EAC in December 2005. These guidelines promote security measures that address gaps in prior standards and are applicable to more modern technologies, such as controls for distributing software and wireless operations. Nevertheless, these same 5 states reported that they will also apply older federal standards to systems that are new to the 2006 election. 
Furthermore, 2 other states responded that they do not plan to require their voting systems to comply with any version of the voluntary federal standards, while 3 additional states reported that they had not yet made a decision on compliance with voluntary federal standards for 2006. (One state did not respond.) Figure 74 depicts the number of states that reported applying voluntary federal voting system standards to their new voting systems. Appendix X summarizes responses for all states and the District of Columbia regarding reported requirements for local jurisdictions’ use of federal standards for their voting systems. Simultaneous use of multiple versions of voting system standards is not new for the 2006 election. Not all NASED-qualified voting systems that may have operated during the 2004 election were tested against a single version of security standards. For example, many systems that were qualified before the 2004 general election had been tested against the 1990 Federal Election Commission standards, rather than the more stringent 2002 standards. The use of outdated system security standards increases the risk of system integrity, availability, and confidentiality problems for all voting methods, but it is of special concern for jurisdictions that use their systems in a networked environment or transmit election data using telecommunications capabilities. This is because the use of such connectivity introduces vulnerabilities and risks that the older versions of the standards do not adequately address, as we have previously described in our September 2005 report on the security and reliability of electronic voting. After the 2000 general election, Congress, the media, and others cited numerous instances of problems with the election process. 
As the use of electronic voting systems expanded and the 2004 general election approached, the media and others continued to report problems with these systems that caused some to question whether they were secure and reliable. To clarify the wide range of concerns and issues raised and identify recommended practices for addressing them, our September 2005 report on the security and reliability of electronic voting analyzed over 80 recent and relevant studies related to the security and reliability of electronic voting systems. We focused on systems and components associated with vote casting and counting, including those that define electronic ballots, transmit voting results among election locations, and manage groups of voting machines. In summary, our September 2005 report stated that while electronic voting systems hold promise for a more accurate and efficient election process, numerous organizations and individuals have raised concerns about their security, citing instances of weak security controls, system design flaws, inadequate system version control, inadequate security testing, incorrect system configuration, poor security management, and vague or incomplete voting system standards, among other issues. For example, we reported that studies found (1) some electronic voting systems did not encrypt cast ballots or system records of ballots, and it was possible to alter both without being detected; (2) it was possible to alter the files that define how a ballot looks and works so that the votes for one candidate could be recorded for a different candidate; and (3) vendors installed uncertified versions of voting system software at the local level. We also reported that some of these concerns were said to have caused local problems during national elections—resulting in the loss or miscount of votes. 
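The first concern cited above, that unencrypted and unauthenticated ballot records could be altered without detection, can be illustrated with a minimal sketch. The following Python example is hypothetical and reflects no vendor's actual design; the record format, field names, and key handling are assumptions made purely for illustration of how a cryptographic integrity check makes after-the-fact alteration detectable.

```python
import hashlib
import hmac
import json

# Hypothetical sketch: seal a stored ballot record with an HMAC so that
# any later alteration of the record is detectable. Key management and
# record layout here are illustrative assumptions, not a real system.

def seal_record(key: bytes, record: dict) -> dict:
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}

def verify_record(key: bytes, sealed: dict) -> bool:
    payload = json.dumps(sealed["record"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["tag"])

key = b"demo-key-held-by-election-officials"  # illustrative only
sealed = seal_record(key, {"precinct": "12A", "contest": "Governor", "choice": 2})
assert verify_record(key, sealed)

# Without such a tag, this edit would be invisible; with it, verification fails.
sealed["record"]["choice"] = 1
assert not verify_record(key, sealed)
```

A system that stores only the bare record, as some of the studied systems reportedly did, has no equivalent check, which is why alteration could go undetected.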
We added, however, that many of the reported concerns were drawn from specific system makes and models or from a specific jurisdiction’s election, and that there has been a lack of consensus among election officials and other experts on the pervasiveness of the concerns. We also reported in September 2005 that federal organizations and nongovernmental groups have issued recommended practices and guidance for improving the election process, including electronic voting systems, as well as general practices for the security of information systems. For example, in mid-2004, EAC issued a collection of practices recommended by election experts, including state and local election officials. This guidance includes approaches for making voting processes more secure and reliable through, for example, risk analysis of the voting process, poll worker security training, and chain of custody controls for Election Day operations, along with practices that are specific to ensuring the security and reliability of different types of electronic voting systems. As another example, in July 2004, the California Institute of Technology and the Massachusetts Institute of Technology issued a report containing recommendations pertaining to testing equipment, retaining records of ballots, and physically securing voting systems. In addition to such election-specific practices, numerous recommended practices are available that are relevant to any information system. For instance, we, the National Institute for Standards and Technology (NIST), and others have issued guidance that emphasizes the importance of incorporating security and reliability into the life cycle of information systems through practices related to security planning and management, risk management, and procurement. 
We noted that the recommended practices in these election-specific and information technology-focused documents provide valuable guidance that, if implemented effectively, should help improve the security of voting systems. Further, our September 2005 report stated that since the passage of HAVA, the federal government has begun a range of actions that are expected to improve the security and reliability of electronic voting systems. Specifically, after beginning operations in January 2004, EAC was leading efforts to (1) draft changes to the existing federal voluntary standards for voting systems, including provisions related to security; (2) develop a process for certifying, decertifying, and recertifying voting systems; (3) establish a program to accredit the national independent testing laboratories that test electronic voting systems against the federal standards; and (4) develop a software library and clearinghouse for information on state and local elections and systems. However, we observed that these actions were unlikely to have a major effect in the 2006 federal election cycle because at the time of our report publication the changes to the standards had not yet been completed, the system certification and laboratory accreditation programs were still in development, and the software library had not been updated or improved since the 2004 elections. Further, we stated that EAC had not defined tasks, processes, and time frames for completing these activities, and we recognized that other organizations had actions under way that were intended to improve the security of electronic voting systems. These actions include developing and obtaining international acceptance for voting system standards, developing voting system software in an open source environment (i.e., not proprietary to any particular company), and cataloging and analyzing reported problems with electronic voting systems.
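One purpose a software library of the kind described above could serve is letting officials confirm that locally installed software matches a certified build. The sketch below is a hedged illustration of that general idea, not EAC's actual clearinghouse design; the library structure, version names, and byte contents are invented for the example.

```python
import hashlib

# Illustrative sketch: a reference library maps certified software versions
# to cryptographic fingerprints, so a locally installed copy can be checked
# against the certified build. All names and contents here are hypothetical.

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Fingerprints recorded when each build was certified (assumed library format).
certified_library = {
    "tabulator-2.1.3": fingerprint(b"certified tabulator build 2.1.3"),
}

def is_certified_build(name: str, installed: bytes) -> bool:
    reference = certified_library.get(name)
    return reference is not None and reference == fingerprint(installed)

assert is_certified_build("tabulator-2.1.3", b"certified tabulator build 2.1.3")
# A locally patched or upgraded binary no longer matches the certified fingerprint.
assert not is_certified_build("tabulator-2.1.3", b"locally patched build")
```

A check of this kind addresses the reported problem of uncertified software versions being installed at the local level, because any local change, certified or not, produces a different fingerprint.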
To improve the security and reliability of electronic voting systems, we made recommendations to EAC for establishing tasks, processes, and time frames for improving the federal voluntary voting system guidelines, testing capabilities, and management support available to state and local election officials. The EAC commissioners agreed with our recommendations and stated that actions to address each were either under way or intended, and the NIST director agreed with our conclusions. To ensure that voting systems perform as intended during use, the systems must be effectively tested, both before they are accepted from the manufacturer and before each occasion that they are used. Further confidence in election results can be gained by conducting Election Day and postelection audits of voting systems. For the November 2004 general election, voting system testing was conducted for almost all voting systems, but the types and content of the testing performed varied considerably. Most states and local jurisdictions employed national and state certification testing and readiness testing to some extent, but the criteria used in this testing were highly dependent on the state or jurisdiction. Also, many, but not all, states and jurisdictions conducted acceptance testing of both newly acquired systems and those undergoing changes or upgrades. In contrast, relatively few states and jurisdictions conducted parallel testing during elections or audits of voting systems following elections. To assist election officials in testing voting systems for the 2004 general election, most local jurisdictions documented policies and procedures related to some types of testing, according to estimates based on our survey of local jurisdictions. However, the testing approaches embodied in policies and procedures that the local jurisdictions we visited shared with us varied considerably. 
Furthermore, in jurisdictions we visited, few voting system problems were reported as a result of local testing, and correspondingly few changes were made to the systems or election processes. The variability in testing approaches among states and jurisdictions underscores our previously reported concerns from our September 2005 report about whether actual testing of voting systems is sufficient to ensure satisfaction of system requirements, including those associated with accuracy, reliability, and security. Voting system test and evaluation can be grouped into various types or stages: certification testing (national level), certification testing (state level), acceptance testing, readiness testing, parallel testing, and postelection voting system audits. Each of these tests has a specific purpose, and is conducted at the national, state, or local level at a particular time in the election cycle. Table 27 summarizes these types of tests. Many states have laws or regulations that mandate specific types of testing for voting equipment and time frames for conducting those tests. Documented policies and procedures for testing and evaluation provide an important means for ensuring that testing is effectively planned and executed. Effective test and evaluation can greatly reduce the chances of unexpected or unknown equipment problems and errors. From our local jurisdiction survey for the 2004 election, we estimate that 85 percent of local jurisdictions had documented policies and procedures for some type of voting system testing, 6 percent of jurisdictions did not have policies and procedures for testing, and 9 percent did not know whether their jurisdictions had them. Larger jurisdictions were more likely to have these management tools than smaller ones. An estimated 96 percent of large jurisdictions had documented testing policies and procedures, compared with 89 percent of medium and 82 percent of small jurisdictions. 
The difference between large and small jurisdictions is statistically significant. The testing policies and procedures of the local jurisdictions we visited presented a wide variety of approaches and details for the 2004 general election. For instance, election officials in 1 large jurisdiction in Connecticut told us that they did not conduct acceptance testing on their lever equipment, which had been in use for many years, and did not conduct either parallel testing or audit testing, stating that these tests were not applicable to its systems for 2004. However, officials said they did conduct readiness testing at the polling place prior to the election. Election officials in a large Ohio jurisdiction that used punch card voting equipment told us that readiness testing had been conducted by local officials. However, election officials stated that certification and acceptance testing were not performed for 2004 because this system had been used in prior elections. They also said that neither parallel testing nor audit testing of voting systems was performed. Officials in a large Colorado jurisdiction we visited that used central count optical scan equipment told us that they obtained state certification of the newly purchased equipment, conducted acceptance and readiness testing prior to the election, and executed another readiness test following the election. Election officials in a large Georgia jurisdiction that used DRE voting equipment reported that the state performed both certification and acceptance testing when the equipment was purchased and conducted a parallel test of the tabulation system during the election. Further, local officials reported that they conducted readiness testing prior to the election, but did not perform postelection audit testing. 
Of the 5 local jurisdictions that provided us with copies of readiness testing procedures, 3 had developed the procedures themselves and 2 used procedures developed by the voting equipment vendors. The enactment of HAVA in 2002 established federal responsibilities for the certification of voting systems to meet federal standards and provided the framework for a national testing program. The act charged EAC, supported by NIST, with instituting a federal program for the development and adoption of voluntary voting system guidelines against which voting systems can be evaluated, establishing processes and responsibilities for accrediting laboratories to test systems, and using the results of testing by the accredited labs to certify the voting systems. In 2005, EAC developed guidelines for the certification process and defined the steps needed for the process to transition from NASED to EAC. States and local jurisdictions are to decide whether and how to use the testing and certification results from the federal program in their election processes. Most states continued to require that voting systems be nationally tested and certified. In our October 2001 report on election processes, we reported that 38 states required that their voting systems meet federal standards for the November 2000 general election, which meant that the systems were tested by NASED. For voting systems being used for the first time in the 2004 general election, national certification testing was almost uniformly required. From our prior discussion of state survey responses in the context of voting system security, 26 of 27 states using DRE for the first time in this election, as well as the District of Columbia, required them to be nationally certified, while 9 of the 10 states using punch card equipment for the first time, and 30 of 35 states and the District of Columbia using optical scan equipment for the first time, said they had such requirements.
It is unclear whether the proportion of nationally certified systems changed between the 2000 and 2004 general elections. In our October 2001 report on election processes nationwide, we reported that an estimated 39 percent of jurisdictions used NASED-qualified voting equipment for the 2000 general election. However, for the 2004 general election, we estimate that 68 percent of jurisdictions did not know whether the respective systems that they used were NASED-qualified. This uncertainty surrounding the national qualification status of a specific voting system version at the local level underscores a concern we recently reported with respect to electronic voting security and reliability in our September 2005 report on this topic: even though voting system software may have been qualified and certified at the national or state levels, software changes and upgrades performed at the local level may not be qualified and certified. The upcoming 2006 general election can be viewed as a challenging transition period for voting system capabilities, standards, and national certification, with several testing-related factors potentially increasing the difficulty of this transition. First, HAVA's requirements for voting system capabilities, such as voter error correction and manual audit, along with the attendant new guidelines, are likely to require additional testing at the national level to recertify previously fielded and certified systems that have been upgraded. Second, this increased workload is not likely to be met with added national testing capacity, since the process for accrediting new voting system testing laboratories is not expected to produce newly accredited labs in time for the 2006 election.
Third, the complexity of the testing being performed is likely to increase because states report that they will collectively apply the full range of available standards—1990, 2002, and 2005 standards, as well as various combinations of these—to voting systems first used for the November 2006 election. As a result, a range of test protocols must be developed or maintained, and a variety of corresponding tests must be planned, executed, and analyzed to meet the variety of standards. Most states continue to certify voting systems to ensure that they meet minimum state election requirements. In our October 2001 report on election processes, we reported that 45 states and the District of Columbia had certification programs for their voting systems, 38 of which required that the systems be tested before they were certified for the 2000 general election. In addition, we reported that an estimated 90 percent of local jurisdictions used state-certified voting equipment for the November 2000 general election. However, we also reported that state officials had expressed concerns with voting system changes that did not undergo recertification. Since then, we have reported that security experts and election officials have expressed similar concerns. For the November 2004 general election, 42 states and the District of Columbia reported on our state survey that they required state certification of voting systems. (See fig. 75.) Seven states required certification of the voting equipment purchased at the state level for local jurisdictions in the 2004 election. However, in 35 states and the District of Columbia, officials reported that responsibility for purchasing a state-certified system rested with the local jurisdiction. 
While state certification requirements often included NASED testing, as well as approval or confirmation of functionality for particular ballot conditions, some states also included additional requirements for features such as quality of construction, transportation safety, and documentation. Although the remaining 8 states did not require state certification, the officials we contacted described other mechanisms to address the compliance of voting equipment with state-specific requirements, such as a state approval process or acceptance of voting equipment based on federal certification. Figure 75 shows states’ reported certification requirements for voting systems used in the 2004 general election. For the 2006 general election, 44 states reported that they will have requirements for certification of voting systems, 2 more states than for the 2004 general election. The District of Columbia reported that it will not require voting system certification for the 2006 general election. Of the 44, all but 1 expected to conduct the certification themselves; the 1 state reported that it would rely solely on a national independent testing authority to make its certification decision. Furthermore, of the 43 other states conducting certification themselves, 41 reported that they would include testing of system functions to obtain certification. In addition, 18 of the 43 states planned to involve a national testing laboratory in their certification process. As we reported previously in our October 2001 report on election processes, either states or local jurisdictions conducted acceptance tests prior to the 2000 general election. However, the testing processes, test steps, and involvement of vendors in the testing performed varied by jurisdiction and by type of equipment. Also, we reported in our 2001 report that states and local jurisdictions sometimes relied heavily on vendors to design and conduct acceptance tests. 
For the 2004 election, the extent and variety of acceptance testing were similar to those for the 2000 election. With regard to state roles and involvement in acceptance testing of new voting systems, 26 states and the District of Columbia reported responsibilities at some level of government. Specifically, 8 states and the District of Columbia reported on our survey that they had responsibility for performing acceptance testing, 15 states required local jurisdictions to perform such testing, and 3 states reported that requirements for acceptance testing existed at both the state and local levels. Twenty-two states either did not require such testing or did not believe that such testing was applicable to them. (Two states did not know their acceptance testing requirements for the 2004 election.) More states required that acceptance testing be performed for changes and upgrades to existing systems than for new systems—30 states in all and the District of Columbia. Specifically, 15 states and the District of Columbia were responsible for performing acceptance tests for changes and upgrades, 10 states required local jurisdictions to perform these tests, and 5 states required acceptance testing at both the state and local levels. Election officials at a majority of the local jurisdictions that we visited told us that they conducted some type of acceptance testing for newly acquired voting equipment. As with the 2000 general election, these officials described a variety of approaches to acceptance testing for the 2004 general election. For example, the data used for testing could be vendor-supplied, developed by election officials, or both, and could include system initialization, logic and accuracy, and tamper resistance. 
Other steps, such as diagnostic tests, physical inspection of hardware, and software configuration checks, were also mentioned as testing activities by local election officials. Further, election officials from 3 jurisdictions that we visited said that vendors were heavily involved in designing and executing the acceptance tests, while officials from another jurisdiction that we visited said that vendors contributed to a portion of their testing. In 2 jurisdictions in Georgia, officials said that acceptance tests were conducted at a university center for election systems. Most jurisdictions conducted readiness testing, also known as logic and accuracy testing, for both the 2000 and 2004 general elections. In addition, some states reported that they conducted readiness testing for the 2004 general election. The content and nature of these tests varied among jurisdictions. According to our state survey, 49 states and the District of Columbia reported that they performed readiness testing of voting systems at the state level, the local level, or both (1 state did not require readiness testing). Most states required local jurisdictions to perform readiness testing (37 states in all). However, 7 states reported that they performed their own readiness testing of voting equipment for the 2004 general election in addition to local testing. Five states and the District of Columbia reported that they had no requirements for local jurisdictions to perform readiness testing but conducted this testing themselves. State laws or regulations in effect for the 2004 election typically had specific requirements for when readiness testing should be conducted and who was responsible for testing, sometimes including public demonstrations of voting system operations. For example, one state mandated that local jurisdictions conduct three readiness tests using all types of election ballots, including audio ballots. 
One test took place before Election Day and two occurred on Election Day—before the official counting of ballots began and after the official counting had been completed. Another state required the Secretary of State to conduct testing using pre-audited ballots before Election Day, as well as on Election Day before ballots were counted. On the basis of a subgroup of local election jurisdictions from our 2000 election survey, we estimate that 96 percent of jurisdictions nationwide conducted readiness testing before the 2000 general election. For a comparable subgroup of jurisdictions in the 2004 general election, we estimate that 95 percent of local jurisdictions conducted readiness testing. The frequency with which readiness testing was conducted in 2004 was largely stable across jurisdictions of various sizes that did not solely use hand-counted paper ballots, ranging from an estimated 90 percent (for small jurisdictions) to an estimated 96 percent (for large jurisdictions). Whenever the sample of jurisdictions permitted statistical comparison, there were also no significant differences between the percentages of jurisdictions that said they conducted readiness testing for various predominant voting methods. The variety of readiness testing activities performed by jurisdictions for the 2000 general election was also evident for the 2004 general election. Election officials in all of the local jurisdictions we visited following the 2004 election reported that they conducted readiness testing on their voting equipment using one or more of the approaches we identified for the 2000 election, such as diagnostic tests, integration tests, mock elections, and sets of test votes. Election officials in many of these jurisdictions told us that they combined test approaches. 
For example, officials in 1 large jurisdiction in Florida told us that they conducted pre-election testing using complete ballots (not test decks) to determine the accuracy of the marks and to see if there were any errors in voting machine programming. They told us that logic and accuracy testing was performed for each machine using undervoted ballots and overvoted ballots, and that zero tapes were run for each voting machine before the election. In addition, a diagnostic test was run before the election on each voting machine. According to the local officials, this was the test approach described in the manufacturer’s preparation checklist. Election officials in another Florida jurisdiction stated that readiness testing included integration testing to demonstrate that the voting system is properly programmed; the election is correctly defined on the system; and all system inputs, outputs, and communication devices are in working order. In the case of these jurisdictions, the state requires logic and accuracy testing and submission of the test parameters to the state. Parallel testing was not widely performed by local jurisdictions in the 2004 general election, although 7 states reported on our state survey that they performed parallel testing of voting systems on Election Day, and another 6 states reported that this testing was required by local jurisdictions. From our survey of local jurisdictions, we estimate that 2 percent of jurisdictions that did not solely use hand-counted paper ballots conducted parallel testing for the 2004 general election. Large and medium jurisdictions primarily performed this type of testing (7 percent and 4 percent of jurisdictions, respectively). The percentage of small jurisdictions performing this type of testing was negligible (0 percent). The differences between both large and medium jurisdictions and small jurisdictions are statistically significant. Our visits to local jurisdictions affirmed the limited use of parallel testing. 
Specifically, election officials in 2 of the 28 jurisdictions that we visited told us that they performed parallel testing. Officials in 1 large jurisdiction in Georgia told us that parallel testing was conducted by the state in conjunction with a university center for voting systems. In another case, officials in a large jurisdiction in Kansas told us that parallel testing was required by the local jurisdiction and was publicly conducted. In both cases, the tests were conducted on voting equipment for which security concerns had been raised in a voting equipment test report issued by the state of Maryland prior to the 2004 general election. Local officials who told us that parallel testing was not performed on their voting systems attributed this to the absence of parallel testing requirements, a lack of sufficient voting equipment to perform these tests, or a belief that parallel testing was unnecessary because their systems operated on a stand-alone basis. According to our state survey, 22 states and the District of Columbia reported that they performed postelection voting system audits for the 2004 general election. Specifically, 4 states and the District of Columbia reported that they conducted postelection audits of voting systems themselves, 16 states required that audits of voting systems be conducted by local jurisdictions, and 2 states reported that audits of voting systems were performed at both the state and local levels. State laws or regulations in effect for the 2004 general election varied in when and how these audits were to be conducted. In addition, a variety of statutes cited by states for testing requirements did not mention postelection voting system audits, and the one that did lacked details on the scope or components of such audits. According to our local jurisdiction survey, postelection voting system audits were conducted by an estimated 43 percent of local jurisdictions that did not solely use hand-counted paper ballots on Election Day. 
This practice was much more prevalent at large and medium jurisdictions (62 percent and 55 percent, respectively) than small jurisdictions (34 percent). The differences between small jurisdictions and both medium and large jurisdictions are statistically significant. We further estimate that these voting system audits were conducted more frequently in jurisdictions with central count optical scan voting methods (54 percent) than they were in jurisdictions with precinct count optical scan voting methods (35 percent). Figure 76 shows the estimated use of postelection audits for jurisdictions with different voting methods in the 2004 general election. Election officials in 14 of 28 local jurisdictions that we visited told us that they conducted postelection voting system audits. However, the conditions and scope of voting system audits varied. Some were routine, while others were conducted only in the event of close races or challenges to results. Among the 14 jurisdictions, most of the officials we spoke with said that they focused on reconciling voting machine counts with known votes, and officials in 2 of these jurisdictions characterized the voting system audits largely as voting system logic and accuracy tests. However, officials with a few jurisdictions told us that they also reviewed voting machine logs, sampled results from random precincts, or employed independent auditors to repeat and verify vote counting. In 1 large jurisdiction in Nevada, an election official told us that paper results were compared to the tabulated results of votes counted on 24 machines. In addition, every voting machine was activated and the same scripts used for pre-election testing were rerun through the machines. According to the election official, this level of testing was required by law. 
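Both the pre-election logic and accuracy tests and the postelection audits described above rest on the same core check: tallies produced by the voting system are compared against independently known results (a test deck with predetermined contents before the election, or a count of the paper records afterward), and any disagreement is flagged for investigation. A minimal sketch of that comparison follows; the contest names, candidate labels, and data layout are illustrative assumptions, not drawn from any particular voting system described in this report.

```python
from collections import Counter

def tally(ballots):
    """Tally the recorded choice for each contest across a set of ballots.

    Each ballot is a dict mapping contest name to the chosen option,
    with None representing an undervote (no selection made).
    """
    totals = {}
    for ballot in ballots:
        for contest, choice in ballot.items():
            totals.setdefault(contest, Counter())[choice] += 1
    return totals

def reconcile(expected, reported):
    """Compare known (expected) tallies with what the system reported.

    Returns a list of (contest, expected_counts, reported_counts)
    tuples for every contest where the two disagree; an empty list
    means the check passed.
    """
    discrepancies = []
    for contest in sorted(set(expected) | set(reported)):
        e = expected.get(contest, Counter())
        r = reported.get(contest, Counter())
        if e != r:
            discrepancies.append((contest, dict(e), dict(r)))
    return discrepancies

# A small test deck with a deliberate undervote and overvote marker,
# the kinds of conditions pre-election testing is meant to exercise.
test_deck = [
    {"Mayor": "Candidate A", "Measure 1": "Yes"},
    {"Mayor": "Candidate B", "Measure 1": None},        # undervote
    {"Mayor": "Candidate A", "Measure 1": "Overvote"},  # overvote flagged by scanner
]

expected = tally(test_deck)
# A correctly programmed machine reproduces the deck's known tallies.
assert reconcile(expected, tally(test_deck)) == []

# A machine that drops one vote is flagged for investigation.
bad = tally(test_deck)
bad["Mayor"]["Candidate A"] -= 1
assert reconcile(expected, bad) == [
    ("Mayor", {"Candidate A": 2, "Candidate B": 1},
              {"Candidate A": 1, "Candidate B": 1}),
]
```

In practice, the jurisdictions we visited layered additional steps onto this core comparison, such as zero-tape checks, diagnostic tests, and review of machine event logs.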
Based on estimates from our local jurisdiction survey and visits to local jurisdictions, the number of jurisdictions that had integrated particular aspects of voting system components and technologies was limited for the 2004 general election in the areas of integration we examined. Where integration did occur, its scope and nature were diverse, including remote programming of electronic ballots, statewide tabulation of voting results, and end-to-end management of the election process. Nevertheless, the potential for greater integration in the future does exist as states and jurisdictions act on their earlier discussed plans to acquire the kind of voting equipment (e.g., optical scan and DRE products) that lends itself to integration. It is unclear if and when this migration to more technology-based voting methods will produce more integrated election system environments. However, suitable standards and guidance for these interconnected components and systems—some of which remain to be developed—could facilitate the development, testing, operational management, and maintenance of components and systems, thereby maximizing the benefits of current and emerging election technologies and achieving states’ and local jurisdictions’ goals for performance and security. Various voting systems, components, and technologies—some of which have been available since the 2000 general election—encompass a wide range of functional capabilities and system interactions. According to our local jurisdiction survey estimates and visits to election jurisdictions for the 2004 general election, officials reported various types of integration, but there were few instances. 
The areas in which integration was reported can be grouped into four categories: (1) electronic programming or setup of voting equipment from a centralized facility, (2) electronic aggregation and tabulation of voting results from multiple voting systems or locations, (3) add-on voting features and technologies, and (4) electronic management of voting equipment and operations. Electronic programming or setup of voting equipment involves integration between an administrative system and voting equipment to initialize vote count totals, load ballot definitions, and authorize voter access. As we previously reported in our September 2005 report on the security and reliability of electronic voting, this type of integration has raised security concerns. Election officials in 19 of the 28 jurisdictions that we visited used portable memory cartridges or cards for electronic programming or setup of their voting equipment. To accomplish programming or setup, officials at some of the local jurisdictions that we visited said that they used a computer to preload voting equipment with ballots or tabulation logic prior to transporting the equipment to polling locations. At 1 large New Jersey jurisdiction, officials stated that the administrative computer used a dedicated connection to the election server to electronically transmit the data and logic necessary to program and enable the units for the election. Election officials in some jurisdictions told us that an administrative system loaded ballot definitions onto portable electronic devices, such as memory cartridges or smart cards, which were then physically transported to the locations where the voting equipment was being prepared for the election—either at a storage facility or polling location (see fig. 77). The cartridges or cards were then inserted into individual voting units to prepare or activate them for the election. 
Some electronic ballot cards were provided directly to the voter to activate the voting equipment, then returned to election workers when the ballot had been cast. Electronic aggregation or tabulation of cast ballots also requires integration between voting equipment and another computer system that is responsible for collecting and aggregating the votes. Figure 78 shows examples of computer systems used for vote tabulation. Transfer of votes or election results between the voting equipment and the central tabulator may employ portable electronic media or telecommunication lines. Officials at 7 of the 28 jurisdictions that we visited said they used portable electronic media to electronically aggregate election results from multiple voting locations. For DRE equipment, memory cartridges that stored cast ballots from individual voting units were transferred to the election office, and the data they contained were uploaded and tallied by an electronic tabulation system. Some jurisdictions also used telecommunications services to transfer election data from polling locations or election coordination centers to tabulation facilities, although how these services were used varied. Officials at 4 jurisdictions that we visited told us that they employed dial-up connections to transmit local vote tallies for further tabulation. For instance, election officials in a large jurisdiction in Washington told us that after the polls were closed and all ballots were scanned and recorded by the optical scan machines at each polling place, the machines were taken to storage areas, where the results were transmitted to the central computer for tabulation using the jurisdiction’s phone line. 
Officials at a large jurisdiction that we visited in Ohio said that they had election judges take voting machine memory cartridges from their polling locations to facilities where laptop computers would read the cartridges and transmit vote tallies over phone lines to a remote access server at the elections office. In a large jurisdiction that we visited in Illinois, election officials told us that they took their portable precinct ballot counters to 1 of 10 stations throughout the city, where vote totals from the counters were encrypted and transmitted to a remote access server via a cellular network. Add-on features and technologies to ensure the accuracy of votes, provide easier access to persons with disabilities or special needs, and enhance security or privacy were also integrated into voting systems by a few states and jurisdictions for the 2004 general election. Officials at both large jurisdictions in Nevada that we visited told us that they had integrated a VVPT capability into their DREs to meet a state requirement for VVPT. Figure 79 shows one example of a VVPT voting system component. Overall, we estimate that about 8 percent of jurisdictions operating DRE voting equipment in the November 2004 general election produced VVPT. Audio features were also added to voting systems for the 2004 election. Officials at 6 of the jurisdictions that we visited reported that they had incorporated an audio ballot component into their DRE machines for voters with sight impairments. Election officials in 3 jurisdictions reported that they offered audio ballots in languages other than English. Security and privacy capabilities, such as data encryption and virtual private networks, were also reportedly integrated into several jurisdictions’ voting system environments for the 2004 general election to protect electronically transferred election data or to secure remote system access. 
Election officials at 6 of the 28 jurisdictions that we visited said they used encryption to protect ballots during electronic storage. Officials at both jurisdictions in Georgia explained that their state-selected DRE equipment used individual access cards for each voter, uniquely encrypted data on the card (including the voter’s cast ballot) for each polling location, and a separately encrypted electronic key needed to access the voter’s ballot. Officials at 7 jurisdictions said they applied encryption to the transmission of election results during the 2004 general election. Election officials in 1 large Colorado jurisdiction stated that they used a virtual private network to ensure the secrecy of data and authenticity of parties when transmitting election results from jurisdictions to the state. Electronic management of voting equipment and operations was another form of integration employed for the 2004 general election. Electronic management covers such functions as equipment testing, initializing, operational monitoring, diagnosis, troubleshooting, shutdown, and auditing. It also includes election operations that affect voting equipment, such as voter processing at the polling place and handling of absentee ballots. In our October 2001 report on election processes, we reported that some of these capabilities were available during the 2000 general election. For the 2004 general election, on the basis of our local jurisdiction survey, we estimate that 7 percent of jurisdictions that used voting methods other than paper ballots connected their voting equipment via a local network at their polling locations. The frequency with which remote access to voting systems was provided for the 2004 general election was similarly low (estimated at 10 percent of jurisdictions that used voting methods other than paper ballots) but was again affected by the size of jurisdictions. 
We estimate that a higher percentage of large jurisdictions used remote access to voting equipment (estimated at 19 percent) than medium jurisdictions (13 percent) or small jurisdictions (7 percent). The difference between large and small jurisdictions is statistically significant. Furthermore, we estimate that remote access was primarily provided to local election officials (in 6 percent of jurisdictions) and, to a lesser extent, state election officials, voting equipment vendors, and third parties. Figure 80 shows the estimated percentages of jurisdictions of various sizes that used networking or various types of remote access. These capabilities pose voting system security and reliability concerns, as reported in our September 2005 report on the security and reliability of electronic voting. From approximately 20 open-ended text responses to our survey of local jurisdictions that described steps taken to prevent unauthorized remote access to voting systems, four safeguards were identified: passwords for remote users, limits on operations to specific election activities, virtual private networks, and system monitoring. As we previously reported in our September 2001 report on voting assistance to military and overseas citizens, state and local election officials used technologies like electronic mail and faxing to better integrate activities during the 2000 general election and to improve communications with absentee voters. According to our estimates from the local jurisdiction survey for the 2004 election, jurisdictions continued to use electronic mail to interact with voters and also relied on Web sites for a variety of election needs, including voter registration status, the application and processing of absentee ballots, and the status of provisional ballots. 
For seven items in our survey where we asked about jurisdictions’ use of e-mail and Web sites for voter services, we estimate that large jurisdictions generally used these technologies more frequently than both medium and small jurisdictions, and that differences in six of these items were statistically significant. Figure 81 shows the extent to which jurisdictions of different sizes employed e-mail and Web sites for selected voter services. In addition to using technology to support individual voters, election officials in 1 large jurisdiction we visited in New Mexico described their use of telecommunications technology to support early voting at multiple locations. This jurisdiction connected its registration database to its early voting locations with dedicated phone lines, thus making voter registration information electronically available at each location. Relatively few local jurisdictions we visited reported having plans for integrating or further integrating their election-related systems and components for the 2006 general election, and in cases where they had plans, the scope and nature of the plans varied. At the same time, we estimate on the basis of our local jurisdiction survey that a relatively large proportion of jurisdictions expect to acquire DREs and optical scan systems, which will introduce greater integration opportunities. However, given the uncertainty surrounding the specific types of systems and features to be acquired, the extent and timing of greater integration of voting systems and components, as well as election-related systems, remains to be seen. More specifically, officials in several jurisdictions that we visited told us about plans to integrate relatively modular add-on components to their systems, while officials with several other jurisdictions described plans for more complex end-to-end interactions among election systems and technologies. 
For example, officials at 5 jurisdictions that we visited reported plans to introduce a VVPT capability for future elections, and officials at 2 jurisdictions reported plans to integrate an audio component to comply with HAVA requirements. In another case, officials in 2 jurisdictions told us that their state is planning to purchase electronic poll books for its precincts to use during the 2006 elections to electronically link its voter registration system with its voting systems. Officials at another jurisdiction told us that they plan to obtain a new optical scanner that will be used to tabulate both DRE and optical scan election results. The scope and magnitude of election system integration may be influenced, in part, by the jurisdictions’ adoption of the optical scan and DRE voting methods and the corresponding products that support add-on automated features, such as languages and accessibility tools, and interactions among automated components of the election process, such as ballot generation and tabulation. As we discussed earlier in this chapter, one-fifth of local jurisdictions are planning to acquire new optical scan and DRE voting equipment in time for the 2006 general election. For instance, on the basis of our survey of local jurisdictions, we estimate that 25 percent of jurisdictions plan to acquire precinct count optical scan voting equipment by the November 2006 general election. However, some jurisdictions had not yet finalized their time frame for acquiring voting equipment at the time of our survey. In addition, their acquisition plans also include technologies for their election Web sites. Figure 82 shows the estimated percentages of jurisdictions with acquisition plans for various technologies and their implementation time frames. 
While the advent of more technology-based voting methods provides greater opportunities for integration, the uncertainty around the timing and nature of their introduction makes the future extent of this integration unclear at this point. It is important for voting system standards developers to recognize the opportunity and potential for greater integration of election systems. In December 2005, EAC adopted a new version of the voluntary voting system guidelines that will become effective in December 2007. However, this version does not address some of the capabilities discussed above. For instance, the guidelines do not address the integration of registration systems with voting systems. Neither do they address commercial off-the-shelf devices (such as card readers, printers, or personal computers) or software products (such as operating systems or database management systems) that are used in voting systems without modification. EAC has acknowledged that more work is needed to further develop the technical guidelines in areas such as voting accessibility, usability, and security features. Such efforts have the potential to assist states and local jurisdictions in maximizing the benefits of emerging election technologies. The challenges confronting local jurisdictions in acquiring and operating voting technologies are not unlike those faced by any technology user—adoption and consistent application of standards for system capabilities and performance, reliable measures and objective data to determine whether the systems are performing as intended, rigorous and disciplined performance of security and testing activities, and successful management and integration of the people, process, and technology components of elections during system acquisition and operation. 
These challenges are heightened by other conditions common to both the national elections community and other information technology environments: the distribution of responsibilities among various organizations, technology changes, funding opportunities and constraints, emerging requirements and guidance, and public attention. The extent to which states and local jurisdictions adopt and consistently apply up-to-date voting systems standards will directly affect the security and performance of voting systems. A substantial proportion of jurisdictions have yet to adopt the most current federal voting system standards or related performance measures. Even if this happens, however, other challenges loom because systems will need to be tested and recertified by many states (and by federal processes whenever states have adopted national standards) to meet any newly adopted voting standards and HAVA requirements for accuracy. Organizations involved with recertification—including federal, state, and local governments; testing authorities; and vendors—may need the capacity to assume the workloads associated with expected increases in the adoption of current standards and the use of new voting systems so that potential risks to near-term election processes are minimized. Reliable measures and objective data are also considered essential management practices for determining whether the technology being used is meeting the needs of the jurisdiction’s user communities (both the voters and the officials who administer the elections). Looking back to the November 2000 and 2004 general elections, we estimate that the vast majority of jurisdictions were satisfied with the performance of their respective technologies. 
However, considering that our local jurisdiction surveys for the 2000 and 2004 elections indicated limited collection of voting system performance data, we conclude that the estimated levels of satisfaction with voting equipment found in our local surveys have been based mostly on a patchwork of operational indicators and, judging from our site visits to local jurisdictions, on the anecdotal experiences of election officials. Although these impressions should not be discounted, informed decision making on voting system changes and investment would benefit from more objective data about how well existing equipment is meeting specific requirements, such as those governing system accuracy, reliability, efficiency, and security. No one voting method, or particular voting system make and model, will meet the needs of every jurisdiction. The challenge is thus to ensure that decisions about staying with an existing voting method or investing in new or upgraded voting equipment are made on the basis of reliable and relevant data about the operational performance of the existing method against requirements and standards, as well as the benefits to be derived versus the costs to be incurred with each choice. Effective execution of well-planned security and testing activities provides opportunities to anticipate and address potential problems before they affect election results. This is important because even a few instances of election errors or disruptions can have a sizable impact if election results are close. We estimate that the vast majority of jurisdictions performed security and testing activities in one form or another for the 2004 general election. However, the nature and extent of these activities varied among jurisdictions—to some degree by jurisdiction size, voting method, or perceived applicability of the activities. These activities were also largely responsive to—and limited by—formal state and local directives. 
When appropriately defined and implemented, such directives can promote the effective execution of security and testing practices across all phases of the elections process. As voting technologies and requirements evolve, states and local jurisdictions face the challenge of regularly updating and consistently implementing the directives to meet the needs of their specific election environments. As we noted for the 2000 general election, managing the three election components of people, process, and technology as interrelated and interdependent variables presents an important challenge in the acquisition or operation of a given voting method. Whether a state or jurisdiction is acquiring, testing, operating, or maintaining a new voting system or an existing one, how successfully the system actually performs throughout the election cycle will depend not only on how well the technology itself has been designed, but also on how well the people and processes associated with the system fulfill their roles for each stage. The technical potential of more extensive integration of voting equipment, components, and election systems also holds the prospect for even more interrelationships and interdependencies among the people, processes, and technologies, with all their attendant risks. In addition to establishing minimum functional and performance requirements and processes for voting system aspects of the election process, system standards can also be used to govern the integration of election systems; address the accuracy, reliability, privacy, and security of components and interfaces; and deliver needed support for the people and processes that will use the integrated election systems. Timely development of integration standards presents a challenge to the election community to keep pace with the advancement of election systems and technology. 
The 2004 general election was the first presidential election that tested substantial changes states made to their election systems since the 2000 election, including some changes required by the Help America Vote Act of 2002 (HAVA). HAVA required some major changes in the nation's elections processes, not all of which had to be implemented by the November 2004 election. HAVA addressed issues of people, processes, and technology, all of which must be effectively integrated to ensure effective election operations. GAO initiated a review under the authority of the Comptroller General to examine an array of election issues of broad interest to Congress. For each major stage of the election process, this report discusses (1) changes to election systems since the 2000 election, including steps taken to implement HAVA, and (2) challenges encountered in the 2004 election. For this report, GAO sent a survey to the 50 states and the District of Columbia (all responded) and mailed a questionnaire to a nationwide sample of 788 local election jurisdictions about election administration activities (80 percent responded). To obtain more detailed information about experiences for the 2004 election, GAO also visited 28 local jurisdictions in 14 states, chosen to represent a range of election system characteristics. In passing HAVA, Congress provided a means for states and local jurisdictions to improve upon several aspects of the election system, but it is too soon to determine the full effect of those changes. For example, 41 states obtained waivers permitted under HAVA until January 1, 2006, to implement a requirement for statewide voter registration lists. States also had discretion in how they implemented HAVA requirements, such as the identification requirements for first-time mail registrants. Some local election jurisdictions described different identification procedures for first-time mail registrants who registered through voter registration drives. 
Although states differed regarding where voters who cast provisional ballots for federal office must cast those ballots in order for their votes to be counted, provisional voting has helped to facilitate voter participation. HAVA also created the Election Assistance Commission, which has issued best practice guides and voluntary voting systems standards and distributed federal funds to states for improving election administration, including purchasing new voting equipment. The results of our survey of local election jurisdictions indicate that larger jurisdictions may be replacing older equipment with technology-based voting methods to a greater extent than small jurisdictions, which continue to use paper ballots extensively and are the majority of jurisdictions. As the elections technology environment evolves, voting system performance management, security, and testing will continue to be important to ensuring the integrity of the overall elections process. GAO found that states made changes--either as a result of HAVA or on their own--to address some of the challenges identified in the November 2000 election. GAO also found that some challenges continued--such as problems receiving voter registration applications from motor vehicle agencies, addressing voter error issues with absentee voting, recruiting and training a sufficient number of poll workers, and continuing to ensure accurate vote counting. At the same time, new challenges arose in the November 2004 election, such as fraudulent, incomplete, or inaccurate applications received through voter registration drives; larger than expected early voter turnout, resulting in long lines; and counting large numbers of absentee ballots and determining the eligibility of provisional voters in time to meet final vote certification deadlines. 
Given DOD’s difficulties in achieving audit readiness and addressing its long-standing financial management deficiencies, you asked us to assess DOD’s risk management process for implementing its FIAR Plan. Our objective was to determine the extent to which DOD has established an effective process for identifying, analyzing, and addressing risks that could impede its progress in achieving audit readiness. To address this objective, we identified relevant guiding principles and leading practices of risk management used by the private sector and GAO. Based on our analysis, we found commonalities and identified five basic guiding principles governing effective risk management: (1) identify risks, (2) analyze risks, (3) plan for risk mitigation, (4) implement a risk mitigation plan, and (5) monitor risks and mitigation plans. Using these guiding principles as criteria, we analyzed DOD documents related to risk management, such as the May 2012 and November 2012 FIAR Plan Status Reports, which identified DOD’s program risks and mitigation plans, and FIAR oversight committee meeting minutes, which documented the results of DOD’s efforts to prioritize and manage these risks. We interviewed the FIAR Director and other officials responsible for the FIAR Plan in the Office of the Under Secretary of Defense (Comptroller) and the Office of the Deputy Chief Management Officer (DCMO) to obtain an understanding of DOD’s risk management process. We also inquired about coordinated risk management efforts and about DOD’s plans to revisit identified risks, identify new risks, and mitigate those risks. Although the FIAR Directorate is responsible for DOD-wide risk management activities to implement the FIAR Plan, FIAR Directorate officials told us that some of DOD’s component entities may have risk management activities under way. 
Accordingly, we made inquiries of the military components and two of the largest defense agencies—the Defense Finance and Accounting Service (DFAS) and the Defense Logistics Agency (DLA)—to identify those that had risk management efforts under way for implementing the FIAR Plan. Of these, the Department of the Navy (Navy) and DLA had risk management practices being implemented at the time of our review, and we included them for comparison purposes to the DOD-wide efforts. Using the five risk management guiding principles as criteria, we reviewed and analyzed the Navy’s and DLA’s risk management plans and supporting documents that identified, described, and prioritized risks to audit readiness as well as progress or status reports related to their efforts to address and monitor those risks. We also interviewed the Navy’s and DLA’s Financial Improvement Plan directors and other knowledgeable officials about their risk management processes and coordination with DOD’s FIAR Director and the Office of the DCMO. We conducted this performance audit from October 2011 to August 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In 2005, the DOD Comptroller established the FIAR Directorate, consisting of the FIAR Director and his staff, to develop, manage, and implement a strategic approach for addressing financial management deficiencies, achieving audit readiness, and integrating those efforts with other initiatives. 
Also in 2005, DOD first issued the FIAR Plan—a strategic plan and management tool for guiding, monitoring, and reporting on the department’s ongoing financial management improvement efforts and for communicating the department’s approach to addressing its financial management weaknesses and achieving financial statement audit readiness. In August 2009, the DOD Comptroller sought to focus FIAR efforts by giving priority to improving processes and controls that support the financial information most often used to manage the department. Accordingly, the DOD Comptroller revised the FIAR Plan strategy to focus on two priorities—budgetary information and asset accountability. The first priority was to strengthen processes, controls, and systems that produce DOD’s budgetary information. The second priority was to improve the accuracy and reliability of management information pertaining to the department’s mission-critical assets, including military equipment, real property, and general equipment. In May 2010, the DOD Comptroller first issued the FIAR Guidance, which provided the standard methodology for the components—including the Departments of the Army, Navy, and Air Force and DLA—to implement the FIAR Plan. According to DOD, the components’ successful implementation of this methodology is essential to the department’s ability to achieve full financial statement auditability. In recent years, legislation has reinforced certain DOD financial improvement goals and initiatives and has strengthened the role of DOD’s Chief Management Officer (CMO). For example, the NDAA for Fiscal Year 2010 tasked the CMO, in consultation with the DOD Comptroller, with the responsibility for developing and maintaining the FIAR Plan, and required the plan to describe the specific actions to be taken and the costs associated with validating audit readiness by the end of fiscal year 2017. 
This act also mandated that the department provide semiannual reports—no later than May 15 and November 15—on the status of its implementation of the FIAR Plan. In October 2011, the Secretary of Defense directed the department to achieve audit readiness for its SBR for general fund activities by the end of fiscal year 2014, and the NDAA for Fiscal Year 2012 required that the next FIAR Plan update include a plan to support this goal. Most recently, the NDAA for Fiscal Year 2013 made the 2014 target for SBR auditability an ongoing component of the FIAR Plan by amending the NDAA for Fiscal Year 2010 such that it now explicitly refers to describing the actions and costs associated with validating as audit ready both DOD’s SBR by the end of fiscal year 2014 and DOD’s complete set of financial statements by the end of fiscal year 2017. The department has established a FIAR governance hierarchy to oversee the FIAR Directorate’s management and implementation of the FIAR Plan. At the top is the CMO, who approves the vision, goals, and priorities of the FIAR Plan, which are provided by the DOD Comptroller, in coordination with stakeholders within the department (e.g., military departments) as well as external stakeholders (e.g., the Office of Management and Budget and Congress). The CMO chairs the Deputy Management Action Group, which (1) provides advice and assistance to the CMO on matters pertaining to DOD enterprise management, business transformation, and operations and (2) reviews DOD component FIAR Plans and monitors their progress. To manage and oversee FIAR Plan implementation efforts, a number of committees and working groups, beginning with the FIAR Governance Board, have been established, as shown in table 1. The FIAR Governance Board engages the department’s most senior leaders from the functional and financial communities and oversees DOD component progress. The FIAR Committee and Subcommittee oversee the management of the FIAR Plan. 
Descriptions of these key FIAR oversight bodies are presented below. In the November 2012 FIAR Plan Status Report, DOD reported the following:

o Fifteen percent of the department’s reported general fund budgetary resources were undergoing audits, including the Marine Corps’ budgetary resources. The military departments, defense agencies, and other components were preparing the remaining budgetary resources to be ready for audit by the end of September 2014.

o For mission-critical assets, 4 percent of these assets were undergoing audits, 37 percent had been validated as audit ready, 12 percent had been asserted as audit ready by the respective component, and the remaining 47 percent were being prepared for audit readiness assertions.

DOD’s projected funding for the FIAR effort for fiscal years 2012 through 2018 is shown in table 2. Risk management is a strategy for helping program managers and stakeholders make decisions about assessing risk, allocating resources, and taking actions under conditions of uncertainty. Risk management can be applied to an entire organization, at its many levels, or to specific functions, projects, and activities. While risk management does not provide absolute assurance regarding the achievement of an organization’s objectives, an effective risk management strategy can be particularly useful in a decentralized organization—such as DOD—to help top management identify potential problems and reasonably allocate resources to address them. Leading risk management practices recommend that organizations develop, implement, and continuously improve a process for managing risk and integrate it into the organization’s overall governance, strategy, policies, planning, management, and reporting processes. When planning for risk, an organization determines the methodology, strategies, scope, and parameters for managing risks to the objective. 
In researching risk management principles, we identified five basic guiding principles of risk management, as shown in figure 1.

Identify risks. The goal of risk identification is to generate a comprehensive list of risks, regardless of whether those risks are under the control of the organization, based on events that could significantly affect the achievement of objectives. Risk identification involves continuous and iterative communication and consultation with internal and external stakeholders to identify new risks, sources of risk, areas these risks affect, events (including changes in circumstances), their causes (root causes), and potential consequences to the objective. This can be performed through additional inquiry with subject matter experts, surveying and interviewing experienced executives, high-level and detailed documentation reviews, checklists based on historical information, and diagramming processes.

Analyze risks. Risk analysis involves developing an understanding of identified risks to assist management in determining the most appropriate methods and strategies in prioritizing and responding to risk. It requires risks to be analyzed to determine the impact of interdependencies between the overall program risks and program component risks. According to guiding principles, risk analysis is a vital part of the entire risk management process as it helps managers determine where to focus their attention and allocate resources to maximize the likelihood of achieving objectives. This requires management to consult with key stakeholders, project managers, and experts to discuss, analyze, and rank risks based on their expert analysis. Suggested techniques for risk analysis include the following:

o Risk categorization. Risks can be categorized by sources of risk, the area of the program affected, or other useful categories to determine the areas of the program most exposed to the effects of uncertainty. Grouping risks by common root causes can lead to developing effective risk responses.

o Risk urgency assessment. Risks requiring near-term responses may be considered more urgent to address. Indicators of priority can include time to affect a risk response, symptoms and warning signs, and the risk rating.

o Modeling. This includes techniques that can be used to assess the effect of risk interdependencies (i.e., one risk is dependent on another risk being resolved) with specific attention to life cycle program costs. Examples include (1) sensitivity analysis, which helps to determine which risks have the most potential impact on the program, and (2) financial analysis methods, such as life cycle program costs, return on investment, or cost benefit analysis, which help to determine the viability, stability, and profitability of a program.

Plan for risk mitigation. Planning for risk mitigation entails selecting the most appropriate and timely action to address risks while balancing the costs and efforts of implementation against the benefits derived. The mitigating actions must also be realistic, achievable, measurable, and documented. Among other things, the plan should include the point of contact responsible for addressing each risk, the root causes of the risk, the options for mitigation, risk status, contingency actions or fallback approach, and resource needs.

Implement risk mitigation plan. Implementing the risk mitigation plan determines what planning, budget, requirements, contractual changes, or a combination of these is needed; provides a coordination vehicle for management and other stakeholders; directs the team to execute the defined and approved risk mitigation plans; outlines the risk reporting requirements for ongoing monitoring; and documents the history of changes.

Monitor risks and mitigation plan implementation. 
Effective tracking of risk mitigation implementation (risk monitoring) provides information that assists managers with making effective decisions before problems occur by continually monitoring mitigation plans for new and changing risks. Risk monitoring is the process of identifying, analyzing, and planning for new risks; tracking identified risks; and reanalyzing existing risks throughout the life of the program. Monitoring is also intended to help management determine whether program assumptions are still valid and whether proper risk management policies and procedures are being followed. Risk management is an iterative process and these guiding principles are interdependent such that deficiencies in implementing one guiding principle will cause deficiencies in performing other guiding principles. For example, if the procedures for identifying risks are not comprehensive and not all significant risks are identified, then the other guiding principles for risk management will not be carried out for any risks not identified. Similarly, if identified risks are not sufficiently analyzed, then it is less likely that effective risk mitigation plans will be developed. DOD carried out some risk management practices centrally with respect to implementing the FIAR Plan, but did not follow many risk management principles necessary for effective risk management and did not document its risk management policies and procedures. Specifically, DOD identified some risks to its FIAR effort, but its risk identification procedures were not comprehensive or documented. In addition, its procedures for analyzing, mitigating, and monitoring risks were also undocumented and did not adhere to guiding principles. We found, however, that two DOD components—the Navy and DLA—had documented risk management processes that were consistent with many of the guiding principles for effective risk management. 
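To make the probability-and-impact analysis described in the guiding principles above concrete, the sketch below shows one minimal way such a ranking could be implemented. It is illustrative only and does not represent DOD's, the Navy's, or DLA's actual processes or tooling; the risk names echo DOD's published list, but the category labels, numeric scores, and urgency flags are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    category: str        # grouping by source or program area (risk categorization)
    probability: float   # likelihood of occurrence, 0.0 to 1.0 (hypothetical scale)
    impact: float        # consequence for the audit readiness objective, 0.0 to 1.0
    urgent: bool = False # flagged by a risk urgency assessment as needing near-term response

    @property
    def score(self) -> float:
        # Probability-and-impact product used to rank risks against one another.
        return self.probability * self.impact

def prioritize(register: list) -> list:
    # Urgent risks come first; within each group, sort by descending score.
    return sorted(register, key=lambda r: (not r.urgent, -r.score))

# A toy risk register; all scores are invented for illustration.
register = [
    Risk("Insufficient funding", "resources", probability=0.5, impact=0.9),
    Risk("Unqualified or inexperienced personnel", "people", probability=0.6, impact=0.7),
    Risk("Information system control weaknesses", "systems",
         probability=0.8, impact=0.8, urgent=True),
]

for risk in prioritize(register):
    print(f"{risk.score:.2f}  {risk.name}")
# prints:
# 0.64  Information system control weaknesses
# 0.45  Insufficient funding
# 0.42  Unqualified or inexperienced personnel
```

A real register would carry the additional fields the guiding principles call for (root cause, owner, mitigation status, resource needs) so that the analysis feeds directly into mitigation planning and monitoring.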
Although DOD has identified several risks that could hinder its efforts to achieve financial statement auditability, it did not identify or address additional key risks that were reported in external audit reports. In January 2012, DOD identified six risks that, if not mitigated, could impede its efforts to achieve auditability. The department included these risks in its May 2012 semiannual FIAR Plan Status Report. The following is DOD’s summary of the six risks it identified.

1. A lack of DOD-wide commitment. Stakeholders must be committed to improving controls and providing supporting documentation.

2. Insufficient accountability. Leaders and managers must be incentivized to achieve FIAR goals.

3. Poorly defined scope and requirements. Financial improvement plans should address accounting requirements important to audit success.

4. Unqualified or inexperienced personnel. DOD must ensure that personnel are capable of making and supporting judgments that auditors will agree meet accounting standards.

5. Insufficient funding. Resources must be aligned to the scope and scale of the FIAR effort.

6. Information system control weaknesses. Many processes and controls reside entirely in software applications, and therefore these systems and interfaces must support complete and accurate records.

DOD did not have written policies and procedures or a documented process for identifying these risks; however, DOD officials told us that they held internal management meetings, brainstormed with internal and external stakeholders, and reviewed prior GAO and DOD Inspector General (IG) reports. While DOD’s identification of risks was a positive step, DOD did not identify sufficient information about these risks, such as the source, the root cause, the audit area(s) the risk will affect, and the potential consequences to the program if the risk is not effectively mitigated—all critical to properly analyzing and prioritizing risk. 
Further, DOD’s risk identification process did not identify all significant risks to achieving its auditability goals. DOD officials told us that risk management practices were embedded throughout the FIAR process and that these six risks were identified in whole or in part through this process. Specifically, they said that monthly and quarterly meetings of the various FIAR oversight committees included ongoing discussions with DOD components regarding their progress in meeting FIAR goals and milestones. According to guiding principles, agencies should generate a comprehensive list of risks based on those events that might create, enhance, prevent, degrade, accelerate, or delay the achievement of objectives. In addition, guiding principles state that risk identification is an iterative process where program stakeholders continually forecast the outcomes of current strategies, plans, and activities and exercise their best judgment to identify new risks as the program progresses throughout its life cycle. Although DOD indicated that risks are discussed on an ongoing basis during various meetings, the risks it initially identified were not comprehensive, and it did not provide evidence of efforts to identify additional risks. We identified additional risks based on prior audit work. For example, DOD did not identify risks related to (1) the components’ reliance on service providers for significant aspects of their financial operations, such as processing and recording financial transactions, and (2) the lack of a department-wide effort to follow documentation retention standards to ensure that required audit support can be provided to auditors. We did not attempt to identify all significant risks to DOD’s audit readiness effort, but these two examples indicate that DOD did not identify all significant risks to the FIAR effort. 
Conducting a risk identification process in accordance with guiding principles would have increased the likelihood of DOD identifying additional risks that could impede the department’s ability to achieve its auditability goals. As noted previously, the guiding principles are interdependent, and deficiencies in the identification of risks will hinder implementation of other guiding principles, such as risk mitigation. Reliance on service providers: The Marine Corps received a disclaimer of opinion on its fiscal years 2010 and 2011 SBRs because of its inability to provide timely and complete responses to audit documentation requests. Specifically, the DOD IG reported that DFAS—the service provider responsible for performing accounting, disbursing, and financial reporting services for the Marine Corps—did not have effective procedures in place to ensure that supporting documentation for transactions was complete and readily available to support basic audit transaction testing. In December 2011, we reported that the Navy and Marine Corps could not reconcile their Fund Balance with Treasury accounts in large part because they depend on DFAS to maintain the data necessary for the reconciliation, and DFAS did not maintain reliable data or the documentation necessary to complete the reconciliation. DOD officials stated that although they did not identify the reliance on service providers as a risk, they recognized it as a challenge and, as a result, developed requirements in the FIAR Guidance. The FIAR Guidance requires the service providers to have their control activities and supporting documentation examined by the DOD IG or an independent auditor in accordance with Statement on Standards for Attestation Engagements (SSAE) 16 so that reporting entities (components) have a basis for relying on the service provider’s data for their financial statement audits. 
To prepare for an SSAE 16 examination, the FIAR Guidance requires a service provider first to evaluate its control activities and supporting documentation, take corrective actions as necessary, and then assert audit readiness to the FIAR Directorate. Once the FIAR Directorate validates that the service provider has sufficient controls and supporting documentation, the service provider can then engage an auditor to conduct an SSAE 16 audit examination. The FIAR Guidance states that service providers should identify the reporting components’ audit readiness assertion dates so that they can complete SSAE 16 examinations in time to meet the components’ needs. However, the November 2012 FIAR Plan Status Report indicates that key service providers will not have SSAE 16 examinations completed until sometime in fiscal year 2014. DOD components need to rely on the results of SSAE 16 examinations of key service providers so that the components can effectively assess their own controls in accordance with the FIAR Guidance. In light of the expected completion dates of SSAE 16 examinations, it is not clear if components will have sufficient time to carry out the activities necessary to test and validate their own controls and assert audit readiness for their SBRs by September 2014. For these reasons, the requirements in the FIAR Guidance have not fully mitigated the risk associated with the reliance on DOD’s service providers. Although DOD recognized this issue as a challenge, the reliance on service providers was not identified by DOD management as a significant risk to DOD achieving audit readiness. If DOD formally identified the reliance on service providers as a risk, it is more likely to manage and monitor this risk in accordance with risk management guiding principles. Need for supporting documentation: Document retention and the ability to provide supporting documentation for transactions have been pervasive problems throughout DOD. 
For example, during the Marine Corps audits, the DOD IG found that DFAS had only retained selected pages of the documents supporting payment vouchers, such as the voucher cover sheet, and did not have critical items, such as the purchase order, receiving report, and invoice, to support that payments were made as required. In addition, we reported in March 2012 that the Army did not have an efficient or effective process or system for providing supporting documentation for its military payroll expenses and, as a result, was unable to locate or provide supporting personnel documents for our statistical sample of fiscal year 2010 Army military pay accounts. DOD officials told us that they recognized document retention as a challenge, and that this issue was addressed in the FIAR Guidance as well as in DOD’s Financial Management Regulation (FMR) and requirements established by the National Archives and Records Administration (NARA). Both the FIAR Guidance and the FMR refer to NARA for guidance on record retention, and the FIAR Guidance also refers to Standards for Internal Control in the Federal Government. However, neither the FIAR Guidance nor the FMR was specific enough to ensure that the documents needed to support audit readiness were retained and available in a timely manner. For example, the FIAR Guidance and the FMR did not address which types of documentation to retain and the required time frames for retaining these documents, thus leaving these decisions to the judgment of DOD component personnel responsible for preparing for audit readiness. DOD officials informed us that they were in the process of updating the FMR to address documentation types and retention periods; however, the updated guidance was not yet available at the time of our review. As a result, we could not determine how and to what extent a revised FMR would address document retention issues. 
Continuous and comprehensive risk identification is critical because, if a risk is not formally identified, it is less likely to be managed effectively and in accordance with risk management guiding principles. The first step to managing and mitigating risks is to identify them. For example, if DOD had identified the reliance on service providers and the need for document retention standards as risks, it might have implemented actions to address these risks sooner so that they would not have been major impediments to Navy, Marine Corps, and Army audit readiness efforts. If risks to the FIAR effort are not comprehensively identified, DOD is less likely to take the actions necessary to mitigate or minimize the risks and therefore less likely to meet its audit readiness goals. Both the Navy and DLA employed techniques that are consistent with guiding principles for risk identification. For example, they collaborated with stakeholders, experts, support personnel, and project managers on a weekly or monthly basis to discuss potential new risks to the audit effort using techniques such as brainstorming, interviewing key stakeholders, diagramming, and SWOT (strengths, weaknesses, opportunities, and threats) analysis, and documented the results in risk registers or risk databases. Both the Navy’s and DLA’s identified risks included the reliance on service providers and the need for better document retention. DOD did not follow guiding principles for performing risk analysis. The FIAR Director plotted the six risks DOD identified on graphs that were intended to show the likelihood of the risks occurring (or probability) and the effect (or impact) on the overall implementation of the FIAR Plan (see fig. 2). The FIAR Director said that he did not consult with key stakeholders, project managers, and experts to analyze these risks as suggested by guiding principles. 
He also stated that he did not use recommended analytical techniques, such as (1) risk categorization, (2) risk urgency assessment, or (3) sensitivity analysis. In addition, the FIAR Director did not perform an assessment to determine the individual DOD components' ability to achieve audit readiness. For example, if one DOD component has significantly more information technology system control weaknesses or fewer skilled personnel than another, it is likely to have a higher risk of not achieving audit readiness. Performing effective risk analysis could enable DOD to develop appropriate risk mitigation plans to address such concerns, including resource allocation among the components. A probability and impact matrix is generally used for both communication and prioritization. Guiding principles state that risk analysis is a vital part of the risk management process because it helps management determine the most appropriate methods and strategies for mitigating risks. In addition, it allows management to better allocate resources to maximize the likelihood of achieving objectives. By not analyzing risks in accordance with guiding principles, DOD increased the likelihood that it would not adequately address the most critical risks in a timely manner.

Navy and DLA officials generally followed guiding principles for risk analysis. For example, at both the Navy and DLA, project management teams worked together to determine who was primarily responsible for managing each identified risk. The Navy and DLA employed analytical techniques to assess risk and documented the results of their analyses, such as the impact each risk has or could have on the objectives and the risk's priority, in risk registers. In addition, both the Navy and DLA documented their risk analysis processes to allow for consistent implementation.
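To make the probability-and-impact analysis described above concrete, the sketch below shows a minimal risk register with probability-times-impact scoring, the quantity a probability and impact matrix visualizes. It is purely illustrative: the field names, 1-to-5 scales, owners, and scores are assumptions for demonstration, not drawn from DOD's, the Navy's, or DLA's actual registers or tools.

```python
from dataclasses import dataclass

# Illustrative only: field names, 1-5 scales, and example entries are
# assumptions, not taken from any DOD component's actual risk register.
@dataclass
class Risk:
    name: str
    owner: str        # who is primarily responsible for managing the risk
    probability: int  # likelihood of occurrence, 1 (low) to 5 (high)
    impact: int       # effect on audit readiness objectives, 1 to 5

    @property
    def score(self) -> int:
        # A common scoring convention: score = probability x impact
        return self.probability * self.impact

def prioritize(register: list) -> list:
    """Order risks so the highest-scoring (most urgent) come first."""
    return sorted(register, key=lambda r: r.score, reverse=True)

register = [
    Risk("Reliance on service providers", "FIAR Directorate", 4, 5),
    Risk("Document retention gaps", "Component comptroller", 3, 4),
    Risk("Unqualified or inexperienced personnel", "FM workforce lead", 4, 4),
]

for risk in prioritize(register):
    print(f"{risk.score:2d}  {risk.name}  (owner: {risk.owner})")
```

Because each entry carries an owner, a register of this kind also supports the consultation step the guiding principles call for: stakeholders can be convened per risk rather than ad hoc.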
As a result of these analyses, the Navy identified the following as its three highest risks to audit readiness efforts: (1) reliance on service providers, (2) internal resources in information technology operations, and (3) tracking unmatched disbursements. DLA identified (1) data access limitations, (2) standard accounting and financial management functions, and (3) audit response capabilities as its three highest risks. The Navy and DLA each considered its respective risks to have a high impact on audit readiness and a high probability of occurrence.

The DOD FIAR Directorate developed risk mitigation plans, first published in the May 2012 FIAR Plan Status Report. However, DOD did not have documented policies and detailed procedures for planning risk mitigation actions. As a result, its plans did not have most of the elements recommended by guiding principles. For example, the plans did not include (1) assignment of responsibility or ownership of the risk mitigation actions, (2) information about DOD's or the components' roles and responsibilities in executing these plans, (3) deadlines or milestones for individual mitigation actions, and (4) resource needs. The lack of details makes it difficult to determine whether the planned risk mitigation actions are sufficient to address the risks. For example, the risk mitigation plan for addressing the risk of unqualified or inexperienced personnel did not provide sufficient information as recommended by guiding principles. According to the plan, DOD intends to
o hire experienced individuals who are certified public accountants (CPAs),
o hire independent public accounting firms to help the department,
o provide FIAR training to the appropriate functional and financial communities,
o modify existing military department training and education programs to include FIAR objectives, and
o conduct limited-scope audits of portions of the financial statements to provide firsthand experience in preparation for future financial statement audits.
However, the mitigation plan did not provide further details, such as the following:
o DOD's actions to comply with the mandate, included in the NDAA for Fiscal Year 2010, to prepare a strategic workforce plan and conduct a gap analysis for mission-critical skills in its civilian workforce, including those in its financial management community. As we recently reported, DOD has not completed any of its competency gap analyses for financial management.
o How many CPAs DOD plans to hire, in what capacity these CPAs will be utilized, what components will be involved, and at what cost.
o The relevant criteria for determining which employees should attend new FIAR training, whether training is mandatory, and how many employees are affected.
o How DOD's financial management certification program would coincide with the current mandatory training.
o How or which existing training and education programs would be modified, the time frames for doing so, the intent of the modifications (i.e., how this training would differ from FIAR training), and which employees will be attending these classes.

DOD FIAR officials stated that their mitigation plans were straightforward and did not require additional detail for implementation purposes. However, as discussed earlier, guiding principles state that effective planning ensures that the activities to be performed to achieve the objectives are realistic, known, and understood by those who are responsible for performing them, including the milestones and available resources. Without sufficiently detailed plans for risk mitigation, achieving the program's overall objectives (financial management improvements and auditability) is at increased risk of failure.

The Navy and DLA included risk mitigation plans for each of their identified risks in their risk registers.
The plans documented the mitigation strategy, assignment of responsibility or ownership of the risk mitigation actions, status updates, and the potential impact of the risk on the objectives.

The DOD FIAR Directorate did not maintain documentation of specific mitigation actions taken or who performed them. Specifically, evidence of risk mitigation actions provided by the FIAR Directorate consisted of metrics reported each month and each quarter to the key oversight entities, such as the FIAR Governance Board and FIAR Committee. According to FIAR Directorate officials, they compiled these metrics, related to such matters as the total attendance at FIAR training classes and the number of information technology systems assessed, based largely on information self-reported by the components. The FIAR Directorate did not independently validate this information for reliability as suggested by guiding principles. We found that the reported metrics did not provide a complete picture of the status of the department's efforts to implement its risk mitigation action plan. Specifically, the metrics did not provide the details needed to determine what actions had been taken, their status and impact, who performed the work, the resources used, the remaining resource needs, and the actions still to be taken. The FIAR Director did not provide an explanation for how these particular metrics were selected for reporting or why more information about mitigation actions was not reported. DOD did not have policies and procedures requiring it to (1) document the implementation of mitigation actions, (2) develop appropriate metrics, and (3) validate reported metrics. If DOD does not effectively measure its progress in the implementation of risk mitigation plans, it cannot sufficiently manage risk mitigation actions and monitor the extent to which they are or are not succeeding.
Without such information, DOD is limited in its ability to make informed decisions about ongoing mitigation efforts, adjust course as necessary, and identify and mitigate any new risks. This, in turn, could adversely affect DOD's ability to meet the mandated deadlines of an audit-ready SBR by fiscal year 2014 and audit-ready consolidated financial statements by fiscal year 2017. The following are examples of two of DOD's identified risks for which the reported risk management metrics did not adequately measure DOD's progress in implementing its risk mitigation plans.

Unqualified or inexperienced personnel. To address this risk, the November 2012 FIAR Plan Status Report stated that DOD is hiring experienced individuals who are CPAs, modifying existing training programs, and providing FIAR training to employees. DOD reported one metric for this risk, which relates to attendance at FIAR training classes. However, DOD's metrics did not address the number of CPAs hired or to be hired, who is responsible for the hiring, or the progress to date in hiring experienced personnel. Moreover, the reported metric related to FIAR training classes did not provide key information for assessing progress. As of January 2013, DOD components reported that approximately 7,000 of their financial management personnel had attended FIAR training classes. However, DOD acknowledged that this metric likely included some individuals who were counted multiple times. For example, an individual who attended each of the six FIAR training courses would be counted six times. As a result, it was unclear how many staff members had taken the training courses. In addition, the metrics did not identify the total number of DOD's approximately 58,000 financial management personnel who are required or expected to take these training courses.
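The double-counting problem in the training metric is easy to see in miniature: totaling class attendance overstates how many individual staff have been trained. The sketch below uses invented attendance records, not DOD data; a raw attendance total reports six, while only three distinct people attended.

```python
# Invented attendance records: (employee_id, course) pairs.
# An individual who attends several courses inflates a raw attendance total,
# which is the flaw described in the "Total Attendance" metric above.
attendance = [
    ("e1", "FIAR 101"), ("e1", "FIAR 102"), ("e1", "FIAR 103"),
    ("e2", "FIAR 101"),
    ("e3", "FIAR 101"), ("e3", "FIAR 102"),
]

total_attendance = len(attendance)                     # what a raw metric reports
unique_trainees = len({emp for emp, _ in attendance})  # what managers actually need

print(total_attendance, unique_trainees)  # prints: 6 3
```

Reporting unique trainees against the size of the required population, rather than raw attendance, would yield the coverage measure the report finds missing.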
As a result, DOD's "Total Attendance at FIAR Training" metric did not provide a meaningful measure of progress against the identified risk of unqualified and inexperienced personnel.

Information system control weaknesses. DOD engaged the DCMO to oversee development and implementation of enterprise resource planning (ERP) business system modernization and has required ERP deployment plans to be integrated with components' financial improvement plans to mitigate risks of information system control weaknesses. However, DOD's metric for information system control weaknesses focuses on the number of information technology systems that have been assessed against the Federal Information System Controls Audit Manual (FISCAM) requirements. As of January 2013, DOD components reported that only 18 of 140 information technology systems had been assessed against FISCAM requirements. This metric does not provide needed details, such as the number of systems assessed that were found to be noncompliant with FISCAM requirements, the number of system change requests identified or completed, and the status of corrective actions. Moreover, the 140 systems identified in DOD's metric may not constitute the total universe of relevant financial management systems; as we recently reported, DOD had identified 310 financial management systems (GAO, DOD Financial Management: Reported Status of Department of Defense's Enterprise Resource Planning Systems, GAO-12-565R (Washington, D.C.: Mar. 30, 2012)). Without more complete metrics, DOD also risks its ERP systems incurring additional cost increases and schedule delays that could affect its ability to achieve an auditable SBR by 2014 and a complete set of auditable financial statements by 2017. As noted previously, if DOD had been more specific in its identification of risks related to its information systems, it would have been in a better position to analyze these risks and develop effective mitigation plans to address them.
Navy and DLA officials documented their risk implementation efforts by including status updates on a weekly or monthly basis for each risk in their risk registers. The Navy's risk register had detailed status updates for each risk that included the current status of mitigation efforts and any updates or additional comments that needed to be addressed. DLA's risk register indicated whether risk mitigation efforts were under way (active) for each risk. DLA also identified events (triggers) for each risk that provided an alert as to when a certain risk was close to being realized or imminent, which could then initiate the next course of action.

DOD officials, including those in the FIAR Directorate and key FIAR oversight entities such as the FIAR Governance Board and the FIAR Committee, were monitoring risk mitigation efforts using the metrics previously discussed. However, these metrics do not provide the information that managers need to (1) track identified risks and assess the effectiveness of implemented mitigation actions, (2) make effective decisions, and (3) identify and plan for new risks. Further, our review of oversight committee meeting minutes did not find evidence that the metrics were discussed in any greater detail or that decisions were made based on these metrics. If DOD is not effectively monitoring risks, it may be unaware of deficiencies in risk mitigation action plans or implementation that may weaken the effectiveness of its risk mitigation. Guiding principles state that risk monitoring reduces the impact of risk by identifying, analyzing, reporting, and managing risks on a continuous basis for the life of the program. Moreover, if DOD management does not follow guiding principles for monitoring risks to the FIAR effort, it lacks assurance that the department is doing all it can to ensure the success of its audit readiness efforts. Also at risk are the substantial resources that DOD estimates it will need to become audit ready.
Based on the data in table 2, DOD's reported audit readiness resources will average approximately $515 million annually over 7 years. Without the awareness gained through effective monitoring, DOD will not have the information it needs to proactively respond to new risks or adjust its plans based on lessons learned in a manner that can benefit the entire department. For example, as we have previously reported, the Marine Corps' unsuccessful attempts to have its SBR audited for fiscal years 2010 and 2011 have resulted in lessons learned that may be helpful to other components in preparing for audit readiness.

Navy and DLA officials told us that they monitor their risk management efforts during their weekly and monthly meetings that include risk owners and their internal financial management oversight teams. Those meetings are used to discuss new risks, update risk registers, and provide status updates and feedback to component managers about the status of audit readiness efforts.

In light of current budget constraints and fiscal pressures throughout the federal government and particularly at DOD, it is more important than ever for DOD to have reliable information with which to manage its resources effectively and efficiently. This necessity and DOD's estimated costs for the FIAR effort make the successful implementation of its FIAR Plan even more imperative. DOD has taken some actions to manage its department-level risks associated with preparing auditable financial statements through its FIAR Plan. However, DOD had not followed most risk management guiding principles, and had not designed and implemented written policies and procedures to fully identify and manage risks affecting implementation of the FIAR Plan. DOD identified some risks to the FIAR effort, but its risk identification process was not comprehensive. Moreover, DOD did not sufficiently analyze the risks, plan and implement mitigation actions, and monitor the results.
To improve management of the risks to the FIAR effort throughout the department, the risk management processes established by two DOD components (the Navy and DLA) could serve as a starting point. Ineffective management of the risks to successful implementation of the FIAR Plan increases the likelihood that DOD will not achieve its audit readiness goals.

We recommend that the Secretary of Defense direct the Under Secretary of Defense, in his capacity as the Chief Management Officer and in consultation with the Under Secretary of Defense (Comptroller), to take the following two actions:

Design and implement department-level policies and detailed procedures for FIAR Plan risk management that incorporate the five guiding principles for effective risk management. The following are examples of key features of each of the guiding principles that DOD should, at a minimum, address in its policies and procedures.
o Identify risks. Generate a comprehensive and continuously updated list of risks that includes the root cause of each risk, audit area(s) each risk will affect, and the potential consequences if a risk is not effectively mitigated.
o Analyze risks. Consult with key stakeholders, including program managers; use analytical techniques, such as risk categorization, risk urgency assessment, or sensitivity analysis; and determine the impact of the identified risks on individual DOD components' abilities to achieve audit readiness.
o Plan for risk mitigation. Assign responsibility or ownership of the risk mitigation actions, define roles and responsibilities in executing mitigation plans, establish deadlines or milestones for individual mitigation actions, and estimate resource needs.
o Implement risk mitigation plans. Document the implementation of mitigation actions, develop appropriate metrics that allow for tracking of progress, and validate reported metrics.
o Monitor risks.
Track identified risks and assess the effectiveness of implemented mitigation actions on a continuous basis, including identifying and planning for new risks.

Consider and incorporate, as appropriate, the Navy's and DLA's risk management practices in department-level policies and procedures.

DOD officials provided written comments on a draft of this report, which are reprinted in appendix I. DOD acknowledged that it does not have a written risk management policy specifically related to the FIAR effort, but did not concur with our assessment of the department's overall risk management of the FIAR initiative. However, DOD cited planned actions that are consistent with our recommendations and findings, including (1) improving the documentation related to FIAR risk management activities, (2) reinforcing the importance of more detailed risk management activity within each DOD component executing its detailed FIAR Plan, (3) reinstating the DOD probability and impact matrix for risk analysis for the FIAR initiative, and (4) reevaluating all metrics used to monitor progress and risk for audit readiness and developing new measures as appropriate. DOD's planned actions, if implemented effectively and efficiently, would help address some aspects of the five guiding principles of risk management that are the basis for our recommendations. While these are good first steps, we continue to believe additional action is warranted. Consequently, we reaffirm our recommendations. DOD stated that its risk management processes and activities were embedded into the design of the FIAR initiative. DOD also stated that all common risk management activities were occurring, including identification, evaluation, remediation, and monitoring of enterprise-wide risks for the FIAR initiative, and these activities were effectively managing risk.
As stated in our report, while DOD does have some aspects of risk management activities under way in each of these areas, these activities do not go far enough in addressing most risk management guiding principles, nor has DOD designed and implemented written policies and procedures to fully identify and manage risks affecting implementation of the FIAR Plan. For example, although DOD identified six enterprise-wide risks through its risk identification process, DOD did not provide any evidence that the six identified risks were reevaluated on a continuous basis or that new risks were identified or discussed. Additionally, DOD did not identify sufficient details about these risks, such as the root cause, areas the risks will affect, and consequences to the program if a risk is not effectively mitigated, nor did it develop a comprehensive list of risks. As noted in our report, we identified at least two additional risks that could impede DOD's ability to achieve audit readiness: reliance on service providers and lack of documentation standards. DOD's response noted that it did not label these as risks but as challenges and had actions under way to address them. While we commend DOD for taking some actions to address these two issues, because DOD did not add them to the formal list of risks during the risk identification process, they may not undergo the same level of risk analysis, mitigation, and monitoring as the six formally identified risks. In addition, DOD did not agree with our finding that its planned mitigation actions lacked details, making it difficult to determine whether the planned actions were sufficient to address the risk.
For example, we reported that DOD’s mitigation plans did not address specific details related to its mitigating actions for the risk of unqualified or inexperienced personnel, such as the number of CPAs or experienced personnel the department planned to hire, relevant criteria to determine which personnel would attend FIAR training, timing of the DOD financial management certification program, and how existing training and education programs will be modified. In response to our report, DOD provided additional details related to its mitigating actions to address this risk, which were not previously provided to us or reported in the FIAR Plan status updates, including time frames for implementing some of its mitigating actions. However, DOD’s additional details still do not address the findings in our report or issues related to the timing for implementing planned mitigation actions, as many actions are to be implemented beginning in fiscal years 2013 and 2014. This raises concerns about whether DOD can effectively manage and mitigate risks in time to meet its audit readiness goals, beginning with achieving an audit-ready Statement of Budgetary Resources by September 30, 2014, as mandated. Given these concerns, we continue to believe that DOD could improve its risk management processes by designing and implementing department-level policies and detailed procedures that reflect the five guiding principles of effective risk management, as we recommended. DOD also provided one general comment, suggesting that we delete reference to our prior reports on our reviews of the Navy’s Civilian Pay and Air Force’s Military Equipment audit readiness efforts, in which we identified significant deficiencies in the components’ execution of the FIAR Guidance. 
Although DOD provided an update of progress made since we issued those reports, we have not reviewed those results, and in any case, we included these examples to demonstrate the difficulties encountered by the components in executing the FIAR Guidance effectively and consistently. The examples also show that the components' initial attempts to assert audit readiness may not be successful and that additional time and mitigating actions may be needed to address components' deficiencies in implementing the FIAR Guidance.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Secretary of Defense, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9869 or khana@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix II.

In addition to the contact named above, Cindy Brown Barnes (Assistant Director), Francine DelVecchio, Kristi Karls, and Carroll M. (CJ) Warfield, Auditor-in-Charge, made key contributions to this report. Also contributing to this report were Cynthia Jackson, Maxine Hattery, and Jason Kirwan.

The National Defense Authorization Act (NDAA) of Fiscal Year 2010 mandated that DOD's consolidated financial statements be validated as audit ready by September 30, 2017. The NDAA for Fiscal Year 2012 further mandated that DOD's General Fund Statement of Budgetary Resources be audit ready by the end of fiscal year 2014. DOD issued the FIAR Plan and related guidance to provide a strategy and methodology for achieving its audit readiness goals.
However, substantial risks exist that may impede DOD's ability to implement the FIAR methodology and achieve audit readiness. GAO was asked to assess DOD's risk management process for implementing its FIAR Plan. This report addresses the extent to which DOD has established an effective process for identifying, analyzing, and mitigating risks that could impede its progress in achieving audit readiness. GAO interviewed DOD and component officials, reviewed relevant documentation, and compared DOD's risk management processes with guiding principles for risk management.

The Department of Defense (DOD) has taken some actions to manage its department-level risks associated with preparing auditable financial statements through its Financial Improvement and Audit Readiness (FIAR) Plan. However, its actions were not fully in accordance with widely recognized guiding principles for effective risk management, which include (1) identifying risks that could prevent it from achieving its goals, (2) assessing the magnitude of those risks, (3) developing risk mitigation plans, (4) implementing mitigating actions to address the risks, and (5) monitoring the effectiveness of those mitigating actions. DOD did not have documented policies and procedures for following these guiding principles to effectively manage risks to the implementation of the FIAR Plan. In January 2012, DOD identified six departmentwide risks to FIAR Plan implementation: lack of DOD-wide commitment, insufficient accountability, poorly defined scope and requirements, unqualified or inexperienced personnel, insufficient funding, and information system control weaknesses. DOD officials stated that risks are discussed on an ongoing basis during various FIAR oversight committee meetings; however, the risks they initially identified were not comprehensive, and they did not provide evidence of efforts to identify additional risks.
For example, based on prior audits, GAO identified other audit-readiness risks that DOD did not identify, such as the reliance on service providers for much of the components' financial data and the need for better department-wide document retention policies. Risk management guiding principles provide that risk identification is an iterative process in which new risks may evolve or become known as a program progresses throughout its life cycle. Similarly, DOD's actions to manage its identified risks were not in accordance with the guiding principles. GAO found little evidence that DOD analyzed risks it identified to assess their magnitude or that DOD developed adequate plans for mitigating the risks. DOD's risk mitigation plans, published in its FIAR Plan Status Reports, consisted of brief, high-level summaries that did not include critical management information, such as specific and detailed plans for implementation, assignment of responsibility, milestones, or resource needs. In addition, information about DOD's mitigation efforts was not sufficient for DOD to monitor the extent of progress in mitigating identified risks. Without effective risk management at the department-wide level to help ensure the success of the FIAR Plan implementation, DOD is at increased risk of not achieving audit readiness initially for its Statement of Budgetary Resources and ultimately for its complete set of financial statements. GAO identified two DOD components--the Navy and the Defense Logistics Agency (DLA)--that had established practices consistent with risk management guiding principles, such as preparing risk registers, employing analytical techniques to assess risk, and engaging internal and external stakeholders consistently to assess and identify new risks. These components' actions could serve as a starting point for improving department-level risk management. 
GAO recommends that DOD design and implement policies and procedures for FIAR Plan risk management that fully incorporate the five risk management guiding principles and consider the Navy's and DLA's risk management practices. While DOD did not fully concur, it cited planned actions that are consistent with GAO's recommendations and findings. These are good first steps, but GAO believes additional action is warranted. GAO affirms its recommendations. |
In general, safeguards are temporary import restrictions that provide an opportunity for domestic industries to adjust to increasing imports. Both the WTO Agreement on Safeguards and article XIX of the General Agreement on Tariffs and Trade establish general rules for the application of safeguard measures. Safeguard actions taken under the WTO usually apply to all imports of a product irrespective of source. Other multilateral and bilateral trade agreements also contain safeguard provisions. China’s WTO accession agreement is an example of such an agreement. Its provisions contain a transitional product-specific safeguard that permits WTO members, including the United States, to take measures to address disruptive import surges from China alone. Under the terms of China’s WTO accession agreement, members may use the China safeguard until 2013. In addition to the China safeguard, three other safeguards have been applied to imports from that country in the United States. First, a communist country safeguard applied to China prior to its WTO accession and still applies to import surges from other communist countries that are not WTO members. Second, Chinese imports are subject to a U.S. global safeguard that applies to all WTO members. Third, a textile safeguard provided for in China’s WTO accession agreement covers textile and apparel imports from China. In the United States, the China safeguard is implemented under section 421 of the Trade Act of 1974, as amended, which Congress enacted as part of the legislation authorizing the President to grant China permanent normal trade relations status. Under section 421, U.S. firms may petition the government to apply a China safeguard. The section establishes a three-step process to consider China safeguard petitions. 
This three-step process involves ITC, USTR, and the President, and it results in determinations about whether import surges from China have caused market disruption and whether a remedy is in the national economic interest or, in extraordinary circumstances, would cause serious harm to national security. The entire process takes approximately 150 days (see fig. 1). The China safeguard was modeled on the communist country safeguard, which applied to China before it became a WTO member.

[Fig. 1 callout: ITC informs both the President and USTR of its determination; if affirmative, USTR requests consultations with the Chinese within 5 days.]

U.S. producers and certain other entities may file petitions to initiate China safeguard investigations with ITC. These include trade associations, firms, certified or recognized unions, or groups of workers that represent an industry. The President, USTR, the Senate Committee on Finance, and the House of Representatives' Committee on Ways and Means can also request investigations. The petition must include certain information supporting a claim that imports from China are causing market disruption to an industry. Petitions must include, among other things, the following: product description, import data, domestic production data, and data showing injury. Petitions must also include information on all known producers in China and the type of import relief sought.

ITC determines whether imports from China are causing market disruption to U.S. producers and, if so, recommends a remedy to address it. Upon receiving a petition, ITC initiates an investigation by publishing a notice in the Federal Register and holding public hearings to afford interested parties the opportunity to present information. ITC receives information on both market disruption and potential remedies from parties through written submission and oral testimony.
ITC has 60 days to determine whether the imports from China are causing, or threatening to cause, market disruption to domestic producers. More specifically, ITC must determine whether imports from China are entering the United States in “such increased quantities or under such conditions as to cause or threaten to cause market disruption” to domestic producers. According to section 421, to determine that market disruption exists, ITC must make three findings: (1) imports of the subject product from China are increasing rapidly, either absolutely or relatively; (2) the domestic industry is materially injured or threatened with material injury; and (3) the rapidly increasing imports are a significant cause of the material injury or threat of material injury. If a majority of ITC commissioners determine that market disruption does not exist, the case ends. After an affirmative determination, ITC must propose a remedy. This could include the imposition of a duty or an additional duty, or another import restriction (such as a quota), necessary to prevent or remedy the market disruption. Within 20 days after making a determination of market disruption, ITC must transmit a report to the President and USTR. The ITC report must include the determination, the reasons for it, recommendations of proposed remedies, and any dissenting or separate views of commissioners. The report must also describe the short- and long-term effects that recommended remedies are likely to have on the petitioning domestic industry, other domestic industries, and consumers. In addition, the report must describe the short- and long-term effects of not taking the recommended action on the petitioning domestic industry, its workers, the communities where the industry’s production facilities are located, and other domestic industries. If ITC renders an affirmative determination, USTR undertakes two parallel efforts. 
First, USTR consults with China about ITC’s finding and seeks to reach an agreement that would prevent or remedy the market disruption. If the U.S. and Chinese governments do not reach agreement after 60 days (or if the President determines that an agreement reached is not addressing the market disruption), the United States may then apply a safeguard. Concurrently, USTR obtains and evaluates information from interested parties on the appropriateness of ITC’s or any other proposed remedy and makes a recommendation to the President. Within 20 days after receiving the ITC report, USTR issues a Federal Register notice to solicit comments from the public (e.g., importers and consumers). USTR must hold a public hearing if requested to do so. USTR evaluates the information it receives and consults with the other agencies of the Trade Policy Staff Committee (TPSC). Within 55 days after receiving the ITC report, USTR must make a recommendation to the President about what action, if any, to take to prevent or remedy market disruption. Under section 421 the President makes the final decision on the provision of import relief. Within 15 days after receiving a USTR recommendation, the President must decide whether and to what extent to provide relief. 
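The statutory clock just described can be sketched in a few lines. The day counts below are the maximums cited above; the function names, and the assumption that each step runs its full allotted period while the 60-day consultations proceed concurrently with the USTR phase, are illustrative only.

```python
# Illustrative sketch (not an official tool) of the section 421 process
# described above. Day counts are the statutory maximums cited in the
# report; the names are the author's own shorthand.

STEPS = [
    ("ITC market disruption determination", 60),   # within 60 days of the petition
    ("ITC report to the President and USTR", 20),  # within 20 days of the determination
    ("USTR recommendation to the President", 55),  # within 55 days of receiving the report
    ("Presidential decision on relief", 15),       # within 15 days of the recommendation
]

def market_disruption(rapid_increase, material_injury, significant_cause):
    """All three section 421 findings are required for an affirmative determination."""
    return rapid_increase and material_injury and significant_cause

def timeline(steps):
    """Cumulative days from petition filing, assuming each step uses its full period."""
    day, out = 0, []
    for name, limit in steps:
        day += limit
        out.append((name, day))
    return out

for name, day in timeline(STEPS):
    print(f"day {day:3d}: {name}")
```

Summing the maximum periods gives 150 days, consistent with the "approximately 150 days" the report cites; the 60-day U.S.-China consultation window does not add to the total because it overlaps the USTR phase.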
Section 421 states: “the President shall provide import relief… unless the President determines that provision of such relief is not in the national economic interest of the United States or, in extraordinary cases, that the taking of action… would cause serious harm to the national security of the United States.” Although the law does not define “national economic interest,” it further states that the President may determine “that providing import relief is not in the national economic interest of the United States only if [the President] finds that the taking of such action would have an adverse impact on the United States economy clearly greater than the benefits of such action.” Finally, section 421 requires the President to publish his decision and the reasons for it in the Federal Register. The China safeguard was modeled on the communist country safeguard. In fact, according to its legislative history, it was intended to replace the communist country safeguard for China since it would no longer apply once China became a member of the WTO. As shown in table 1 below, the safeguards share several important characteristics. Both safeguards are limited in scope to imports from particular countries; while the former is limited to imports from China, the latter is limited to imports from one or more communist countries. They also share similar criteria with regard to ITC market disruption determinations and identify the President as the final decision maker on whether to provide relief. In addition, both safeguards have a 150-day determination period. In contrast, the China safeguard is significantly different from the global safeguard. The China safeguard is narrower in scope than the global safeguard; it can only be applied to imports from that one country, whereas the global safeguard generally must be applied to all foreign sources of a particular product. 
Also, the China safeguard’s market disruption standard is regarded as easier to meet than the criteria for determining injury due to imports under the global safeguard. Furthermore, the standard for presidential action is also different under the global safeguard as it places more emphasis on assisting the domestic industries’ efforts to adjust to international competition (including worker adjustments), and sets forth a broader range of factors for the President to consider in determining whether to provide relief. Finally, the time frame for the China safeguard process is shorter than that for the global safeguard. Between August 2002 and September 2005, the United States considered five petitions from domestic producers to apply the China safeguard but has not provided relief in any of them. ITC made negative determinations on two petitions and, in three other cases, found market disruption and recommended restricting imports to remedy the situation. ITC is expected to make a determination in a sixth case in early October 2005. In each of the three cases where ITC found market disruption, USTR formulated a presidential recommendation after evaluating various options. The President then decided not to provide any import relief. The success rate for China safeguard petitions is similar to that for communist country safeguard petitions, but differs from that of global safeguard petitions. U.S. firms have filed six petitions for China safeguard relief since section 421 was enacted (see fig. 2). The petitioners representing the domestic industry ranged from one firm in two cases to seven firms and a union in the most recent petition. The products involved are the following: pedestal actuators (for raising and lowering seats in mobility scooters), certain steel wire garment hangers, brake drums and rotors, ductile iron waterworks fittings (for municipal water systems), uncovered innersprings used in mattresses, and circular welded nonalloy steel pipes. 
ITC made negative determinations in two of five completed China safeguard cases. In cases brought by manufacturers of brake drums and rotors and mattress innersprings, ITC determined that Chinese imports had not disrupted the domestic market. More specifically, in the brake drums and rotors case, ITC found that although imports from China were increasing rapidly, the domestic industry was neither materially injured nor threatened with material injury. In the mattress innerspring case, ITC was divided on the reasons for making a negative determination. Three of the commissioners determined that imports from China were not increasing rapidly. The other three commissioners determined that the domestic industry was not materially injured or threatened with material injury. In both cases, ITC cited the industries’ healthy profit margins and stable or rising prices as evidence that neither industry was materially injured or threatened with material injury. In the remaining three cases (pedestal actuators, wire hangers, and waterworks fittings), ITC found market disruption and recommended measures to remedy it. In all three cases, ITC cited factors such as falling production and employment in its determinations that the industry was materially injured. Furthermore, ITC noted declines in the industries’ health that coincided with a surge in Chinese imports when they determined that rapidly increasing imports from China were a significant cause of material injury. In other words, Chinese imports caused market disruption to the domestic industry. In deciding which import restriction to recommend, ITC considered the conditions of competition in the domestic industry (e.g., demand conditions, and import and domestic supply conditions), as well as comments received from parties in the cases. ITC recommended different import restrictions to remedy the market disruption it found in each case. 
For example, as noted in table 2 below, ITC found that a 3-year, declining tariff on wire hangers from China was the most appropriate remedy in that case. In contrast, ITC recommended a 3-year quota in the pedestal actuator case because there was only one supplier and one primary purchaser of pedestal actuators, and the price differential between the domestic and imported products was large. In addition, ITC proposed that the President direct the Departments of Commerce and Labor to provide expedited consideration of trade adjustment assistance applications for workers in the wire hangers and waterworks fittings industries. USTR consulted with the Chinese government, solicited and obtained comments from a variety of sources, and analyzed the advantages and disadvantages of the ITC remedies and other options in formulating its recommendations to the President. After receiving each of the three affirmative market disruption determinations from ITC, USTR requested consultations with the Chinese government. USTR notified the WTO Committee on Safeguards of the consultation requests. Representatives of the two governments met but did not reach any agreements to address the market disruption found by ITC, according to USTR officials. During the 60-day consultation period, USTR continued to gather information from interested parties about any potential remedies. USTR, in conjunction with other agencies on the TPSC, held a 1-day public hearing for each of the cases and obtained views on what, if any, type of import restriction was in the public interest. The parties also had the opportunity to provide written comments. In addition to the ITC-recommended remedies, USTR sought comment on alternate remedies and on not providing relief. The hearings included both the domestic petitioners and Chinese respondents, as well as other interested parties such as importers and downstream users. For example, the wire hanger hearing included testimony from a hanger distributor. 
In the pedestal actuator and wire hanger cases, a representative from the Chinese government testified that applying the safeguard would damage U.S.-China bilateral economic relations, in addition to raising procedural and substantive concerns. USTR officials said that certain information relevant to the effectiveness of potential remedies surfaced at these hearings that had not surfaced in the ITC proceedings. After the hearings, the USTR staff weighed the pros and cons of the various courses of action. USTR considered ITC’s analysis, as well as the testimony and written submissions provided by interested parties, and sought comments from other TPSC members. According to detailed briefings from USTR officials, in each case USTR considered the ITC-recommended remedy among other remedies presented, as well as the option of having no remedy. USTR staff worked with the U.S. Trade Representative throughout the proceedings. USTR staff then drafted a recommendation in a memorandum to the U.S. Trade Representative, who assessed the various options. The Trade Representative then made a recommendation in a memorandum to the President. The President declined to provide relief in all three cases. He found that imposing remedies such as duties and quotas would not be in the national economic interest. The President’s reasons for not providing relief were printed in the Federal Register and are summarized in table 3. The President’s decisions did not cite national security concerns as a reason in any of the three cases. The final outcomes of China safeguard cases are similar to those of communist country safeguard cases but different from those of global safeguard cases. As shown in table 4, domestic industries have sought relief under the China and communist country safeguards far less frequently than they have sought relief under the global safeguard. 
Overall, petitioners have been denied relief in almost all China and communist country safeguard cases but have been granted import relief in about one quarter of global safeguard cases. Of those cases where ITC found the industry was injured by imports, the President denied relief in all but one of the China and communist country safeguard cases. Conversely, the President granted relief in about half of the global safeguard cases where the ITC found injury. Moreover, since Congress amended the global safeguard’s standard for presidential action in the 1988 Trade Act, the President has always provided relief when ITC found injury. The President’s decisions not to impose relief in the three China safeguard cases in which ITC found market disruption have been criticized. Nevertheless, the President has broad discretionary authority under section 421 to consider U.S. national economic and security interests when weighing the facts and circumstances particular to each case. This broad discretion was upheld by the U.S. Court of International Trade. This, together with the fact that the President considers factors that ITC does not, including consumer cost and the potential for imports from other countries, allows him to reject relief even when it has been recommended by ITC. Several different groups have criticized the President’s decisions not to apply China safeguard relief. For example, company officials and trade lawyers who were unsuccessful in obtaining relief criticized the President’s decisions in several congressional hearings. As we discuss later, one company subsequently filed a lawsuit against the President claiming he exceeded his authority in rejecting ITC’s recommended remedy. The law’s legislative history may also have raised expectations of relief. According to the relevant House committee report, the bill establishes clear standards for the application of presidential discretion in providing relief to injured industries and workers, and if ITC makes an affirmative determination on market disruption, there would be a presumption in favor of providing relief. 
That presumption can be overcome only if the President finds that providing relief would have an adverse impact on the United States economy clearly greater than the benefits of such action, or, in extraordinary cases, that such action would cause serious harm to the national security of the United States. This legislative history, together with the China safeguard’s shorter time frames and lesser injury standard, and other procedural characteristics, may have created an expectation that relief would be more likely under the China safeguard than under the global safeguard. Similarly, the U.S.-China Economic and Security Review Commission, a body established by Congress to monitor and investigate the security and economic implications of the bilateral economic relationship between the United States and China, held hearings and criticized the administration for failing to apply the safeguard after an affirmative ITC injury determination. In March 2005, this commission recommended that Congress consider amending the China safeguard to either eliminate the President’s discretion or limit it to the consideration of noneconomic national security factors after an affirmative ITC finding. In addition, the lack of any positive decisions by the President in these cases may have discouraged other U.S. producers from seeking relief under the China safeguard. Several trade lawyers representing domestic U.S. producers with whom we spoke told us about their reluctance to bring additional China safeguard cases in the future because they thought that the President would reject them based on political considerations. The U.S.-China Economic and Security Review Commission expressed similar concern that repeated presidential refusal to apply the safeguard had undermined the instrument’s efficacy. 
Indeed, until August 2005, when producers filed a petition on steel pipe, no China safeguard petition had been filed since March 2004, when the President rejected an ITC recommendation to provide relief from imports of Chinese ductile iron waterworks fittings. Despite criticisms, the President’s discretion under the China safeguard is quite broad. The President must provide relief unless he finds that it is not in the national economic or security interest. With regard to the former, the President is authorized to deny relief when he finds that the relief would have an adverse impact on the United States economy clearly greater than the benefits. In June 2004, the U.S. Court of International Trade affirmed the President’s broad discretionary authority in a case brought by the petitioner in the first China product safeguard case. In that case, Motion Systems Corp. contended that the President had exceeded his authority under section 421 by not providing relief. In particular, Motion Systems argued that the President was required to quantify the adverse impact of providing relief and demonstrate that the adverse impact was clearly greater than the benefits that the relief would provide to the domestic industry. In this regard, Motion Systems maintained that section 421 created a presumption of relief once ITC made an affirmative determination of market disruption. In affirming the President’s decision, the Court held that the President had not exceeded his authority and said the law granted him “considerable discretion.” The Court found that section 421 made no reference to evidence or a burden of proof that the President must satisfy to support his conclusion that the imposition of a safeguard would have an adverse impact on the U.S. economy clearly greater than its benefits. The Court also noted that the President was not prohibited from considering political factors in making a finding about the adverse impact on the U.S. 
economy, including trade relations between the United States and China. Finally, the Court did not specifically comment on the presumption of relief issue. While ITC makes remedy recommendations that would alleviate market disruption, the President considers a broader range of factors than ITC in determining whether to apply China safeguard relief. Specifically, under section 421, ITC focuses on the domestic industry involved in the proceeding, both in making injury determinations and in developing recommendations for relief. For example, among the factors ITC considered in determining material injury were the idling of U.S. production facilities and the ability of firms within the industry to produce at reasonable profit, wage, and employment levels. Thus, ITC did not weigh the interests of other groups such as consumers and downstream industries against potential benefits to the domestic industry when developing its recommendations for the President to consider. Nevertheless, ITC reports on the potential economic effect of its recommended remedies, as described earlier. However, section 421 does not require ITC to consider these broad economic effects when developing its recommendations. In contrast, as discussed above, section 421 authorizes the President to consider overall U.S. economic and security interests in deciding whether to impose China safeguard relief. In each of the three cases where ITC found injury and recommended a remedy, the President found, among other things, that relief would have an adverse impact on other participants in the economy. The President determined that relief would carry substantial costs for consumers or downstream users of the products involved. Specifically, the President cited the increased costs to aged and disabled consumers of mobility scooters as a reason for not providing relief in the pedestal actuator case. 
In the wire hangers case, the President stated that relief would have an uneven impact on wire hanger distributors and impose increased costs on dry cleaning companies. Finally, in the waterworks fittings case, the President found that the costs to consumers would substantially outweigh producer income benefits. The President’s decisions also took into account the unique facts and circumstances in each case. For example, in the pedestal actuator case there was only one petitioner seeking relief and one dominant purchaser. In the wire hanger case, domestic producers had different business models that affected whether a remedy would benefit or disadvantage them. In addition, the U.S. Trade Representative noted in a March 2004 congressional hearing that, while not necessary to the President’s decision, in the waterworks fittings case the petitioner faced serious problems besides competing Chinese imports. Although the President did not provide import relief in these cases, he stated that he remains committed to applying the China safeguard when circumstances warrant. In every case, the President has considered whether relief would benefit the producers involved. In his decisions denying relief, the President stated that imposing a safeguard would have limited benefits. One factor that the President has cited in all three cases is that applying a safeguard would lead to production being shifted from China to other countries rather than to U.S. producers. In the waterworks fittings case, the President specifically identified other current suppliers to the U.S. market such as India, Brazil, Korea, and Mexico. Similarly, in all but one of the communist country safeguard determinations, the President found, among other things, that providing relief would have resulted in imports shifting from the communist country involved to other offshore sources. With only one exception, the President has not approved a remedy under the communist country safeguard. 
In contrast, under the global safeguard, imports from other countries generally cannot diminish the potential benefits of import relief. Since the global safeguard statute was enacted in 1974, the President has applied relief in approximately half of the cases in which ITC has made a positive injury determination. Moreover, since it was substantially amended in 1988, the President has provided relief in every such global safeguard case. It is not possible to identify all the factors that contribute to such divergent results among the different safeguards. However, one consistent factor has been that the China and communist country safeguards are limited in scope to products from one country or a few countries; this allows other foreign sources to gain market share of the product and reduce the potential benefit of the safeguard to the domestic producers. We provided ITC and USTR a draft of this report for their review and comment. Both agencies chose to provide technical comments from their staff. USTR staff cautioned against drawing overall conclusions about the use of the China safeguard given the small number of cases considered thus far. Additionally, both USTR and ITC staff suggested we clarify our characterizations of section 421’s legislative history and of the Motion Systems Corp. v. Bush lawsuit. We modified the report in response to their suggestions. USTR and ITC also provided other suggestions to make the report more accurate and clear, which we incorporated as appropriate. We are sending copies of this report to ITC and USTR, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or any of your staff have any questions about this report, please contact me at (202) 512-4347 or yagerl@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. To address our objectives, we reviewed U.S. laws and procedures as well as relevant World Trade Organization (WTO) agreements and China’s accession agreement. To ensure our understanding of relevant laws, procedures, and agreements, we spoke with officials from the Office of the United States Trade Representative (USTR) and the International Trade Commission (ITC). In addition, we interviewed officials from the WTO and officials from the government of China. Finally, we spoke with law firms that had direct experience in China safeguard cases, as well as law firms with broad experience in trade actions against China. To describe how the safeguard has been applied thus far, we examined each phase of the process. For the ITC phase, we reviewed and analyzed each of the determinations the ITC commissioners issued during the five China safeguard injury investigations completed as of July 31, 2005, to understand the rationale behind them. We further obtained private sector views of the ITC process by speaking with law firms that had represented petitioners and/or respondents in each of the five China safeguard injury investigations. For the USTR phase of the process, we spoke with law firms that had represented petitioners and/or respondents that participated in each of the three China safeguard remedy investigations and reviewed the transcripts of all three USTR hearings. USTR made neither the documents related to its analyses nor its recommendations available to us. Instead, we relied on detailed briefings from USTR officials on the nature and substance of their deliberations culminating in a recommendation to the President. For the presidential phase of the process, we reviewed each of the President’s three determinations made under the China safeguard. 
To compare the use of the China safeguard with the communist country and global safeguards, we reviewed ITC import injury investigation statistics and presidential determinations in the China, communist country, and global safeguard cases. We found the ITC injury statistics to be sufficiently reliable for presenting and contrasting ITC’s final disposition of cases brought under these statutes. To examine the issues related to the application of presidential discretion, we analyzed the reasons the President gave in his decisions not to provide import relief. Additionally, we reviewed the legislative history of the China safeguard and written and oral testimony before Congress and the U.S.-China Economic and Security Review Commission. We reviewed the Court of International Trade’s decision in the Motion Systems case against the government and the submissions of parties to the case. Finally, we analyzed the presidential determinations made under the communist country and global safeguards. In addition to the individual named above, Adam R. Cowles, R. Gifford Howland, Michael McAtee, and Richard Seldin made significant contributions to this report.

In joining the World Trade Organization (WTO) in December 2001, China agreed to a number of mechanisms to allow other WTO members to address disruptive import surges from that country. Among these was a transitional product-specific safeguard. In general, safeguards are temporary import restrictions of limited duration that provide an opportunity for domestic industries to adjust to increasing imports. U.S. law includes a number of other safeguards including a communist country safeguard, known as "section 406," and a global safeguard, known as "section 201," which have both applied to China. In light of increased concern about Chinese trade practices and the U.S. government response to them, the conference report on fiscal year 2004 appropriations requested that GAO review the efforts of U.S. 
government agencies responsible for ensuring free and fair trade with that country. In this report, which is one of a series, GAO (1) describes the China safeguard, (2) describes how it has been used thus far, and (3) examines issues related to the President's discretion to apply the safeguard. Other safeguards provide context to understand this mechanism. We provided ITC and USTR a draft of this report for their review and comment. Both agencies chose to provide technical comments from their staff. We incorporated their suggestions as appropriate. The China safeguard permits WTO members, including the United States, to address disruptive import surges from China. In the United States, the China safeguard is implemented under section 421 of the Trade Act of 1974, which allows U.S. firms to petition for relief and establishes a three-step process. This process involves the International Trade Commission (ITC), Office of the U.S. Trade Representative (USTR), and the President and determines whether Chinese imports are causing market disruption to domestic producers and whether a remedy is in the national economic interest. The entire process takes about 150 days. Under the terms of China's WTO accession agreement, WTO members may use the China safeguard until 2013. To date, the United States has not applied the China safeguard in five cases brought by domestic producers. In a sixth case, ITC has not yet reached a decision. In two cases, ITC found no market disruption. In three cases, ITC found market disruption and USTR evaluated the pros and cons of various options and made a recommendation to the President. In all three cases, the President declined to provide relief to the domestic industry after he found it would not be in the national economic interest because the costs would outweigh the benefits. The success rate for China safeguard petitions is similar to communist country safeguard petitions, but differs from that of global safeguard petitions. 
The President's decisions not to provide import relief after ITC found market disruption generated controversy, including a lawsuit claiming that he exceeded his authority. The relevant House committee intended that the law create a presumption in favor of relief upon an ITC injury finding. Nonetheless, the U.S. Court of International Trade found the President has broad discretion not to apply a China safeguard. Moreover, the President considers the question of whether to provide relief from a broader perspective than ITC. The President weighs the benefits of relief against the costs and considers factors such as the effect on consumers and downstream users, which ITC does not. The President cited third-country imports in all his decisions denying relief under both the China and communist country safeguards. Under the global safeguard, third-country imports generally cannot diminish the potential benefits of import relief to the domestic industry, and the President has often provided relief, especially since 1988 when U.S. trade laws were revised. 
No one commonly accepted definition of sovereign wealth funds (SWFs) exists, although a government-controlled or government-managed pool of assets is part of most definitions. Government officials and private researchers use varying characteristics to categorize SWFs, and depending on the source and primary defining characteristic, different types of funds may be included or excluded. Definitions have been developed by Treasury, IMF, and private researchers. Some definitions include pension funds or investments made from foreign currency reserves maintained in central banks. An explanation of how we chose funds to include in our analysis is in appendix I. Countries that are major exporting nations or natural resource providers may accumulate large amounts of foreign currency reserves through the sale of their manufactured goods or natural resources to other nations. While all countries need some amount of foreign currency reserves to meet their international payment obligations, in some cases countries may accumulate currency reserves in excess of the amounts needed for current or future obligations. Some countries invest their foreign exchange reserves in assets such as the sovereign debt of other countries, including securities issued by Treasury to fund U.S. government operations. However, some countries have formed SWFs to invest a portion of their excess foreign currency reserves in assets likely to earn higher returns, such as the equity shares issued by foreign publicly traded companies. Some countries with current account surpluses (the current account is a broad measure of international flows that includes trade balances) have created SWFs. These include countries that are major exporters of commodities or natural resources, such as oil, as well as those, such as China, that are exporters of manufactured goods. In contrast, as the world’s largest importer of goods and natural resources, the United States has run increasingly large current account deficits since the early 1990s. 
The current account deficit of the United States was $731.2 billion in 2007, whereas Asian countries with SWFs had a combined current account surplus of over $400 billion and oil-producing countries with SWFs had a combined surplus of about $338 billion (see fig. 1). These current account surpluses have led to a buildup of foreign currency reserves in some countries. Since 1995, currency reserves in industrial economies have more than doubled and currency reserves in developing economies have increased sevenfold. Foreign currency accumulation has been especially large among oil-producing countries and Asian countries with large trade surpluses, especially with the United States. China, Korea, Japan, and Russia hold the largest quantities of foreign currency reserves. Asian exporting countries’ combined current account surpluses grew from $53 billion in 2000 to $443 billion in 2007. Currency reserves have accumulated in SWF countries (see fig. 2). The U.S. dollar accounted for slightly less than two-thirds of total central bank foreign reserve holdings of all countries as of the first quarter of 2008. For oil-exporting countries with SWFs, which include some nations in the Middle East, as well as Norway and Russia, oil revenues remained relatively stable from 1992 to 1998. But in 1999, oil prices—as measured by the annual weighted world average price per barrel—began to rise (see fig. 3). Consequently, oil revenues have increased 561 percent for the major exporting nations from 1992 to 2006, the year for which the latest data were available. These revenue increases have occurred as the price of oil per barrel has increased from $23 in January 2000 to well over $100 in the first half of 2008, including over $137 in July 2008. An interagency group is responsible for reviewing some foreign investment transactions in the United States. CFIUS, initially established by executive order in 1975, reviews some foreign investments in U.S. 
businesses, including some investments by SWFs. Section 721 of the Defense Production Act authorizes the President to suspend or prohibit mergers, acquisitions, or takeovers that could result in foreign control of a U.S. business if the transaction threatens to impair national security. The President delegated his section 721 authority to review individual transactions to CFIUS. CFIUS and its structure, role, process, and responsibilities were formally established in statute in July 2007 with the enactment of the Foreign Investment and National Security Act (FINSA). FINSA amends section 721 of the Defense Production Act to expand the illustrative list of factors to be considered in deciding which investments could affect national security and brings greater accountability to the CFIUS review process. Under FINSA, foreign government-controlled transactions, including investments by SWFs, reviewed by CFIUS must be subjected to an additional 45-day investigation beyond the initial 30-day review, unless a determination is made by an official at the deputy secretary level that the investment will not impair national security. CFIUS reviews transactions solely to determine their effect on national security, including factors such as the level of domestic production needed for projected national defense requirements and the capability and capacity of domestic industries to meet national defense requirements. If a transaction proceeds to a 45-day investigation after the initial 30-day review and national security concerns remain after the investigation, the President may suspend or prohibit a transaction. According to Treasury, for the vast majority of transactions, any national security concerns are resolved without needing to proceed to the President for a final decision. The law provides that only those transactions for which the President makes the final decision may be disclosed publicly. 
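The staged review sequence described above can be sketched in a short illustration. The function and stage names below are hypothetical shorthand, not statutory terms, and the sketch omits many of the qualifications in FINSA and the CFIUS regulations:

```python
# Hedged sketch of the staged CFIUS review process under FINSA, as
# summarized above. Stage names and the function itself are illustrative
# assumptions, not an official decision procedure.

def cfius_stages(government_controlled, deputy_clearance=False,
                 concerns_remain_after_investigation=False):
    """Return the review stages a covered transaction may pass through."""
    stages = ["30-day initial review"]
    if government_controlled and not deputy_clearance:
        # Foreign government-controlled deals (including SWF investments)
        # get a 45-day investigation unless an official at the deputy
        # secretary level determines national security is not impaired.
        stages.append("45-day investigation")
        if concerns_remain_after_investigation:
            # Only unresolved cases reach the President, who may
            # suspend or prohibit the transaction.
            stages.append("presidential decision")
    return stages

# A private foreign acquisition that raises no lingering concerns:
print(cfius_stages(government_controlled=False))
# An SWF investment with concerns remaining after investigation:
print(cfius_stages(government_controlled=True,
                   concerns_remain_after_investigation=True))
```

As the report notes, the vast majority of transactions are resolved without a presidential decision; the sketch simply makes the branching explicit.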
Information about SWFs publicly reported by SWFs, the governments that control them, international organizations, and private researchers provides a limited picture of their size, investments, and other descriptive factors. Our analysis found that the amount and level of detail that SWFs and their governments report about their activities vary significantly, and international organizations that collect and publish various statistics about countries’ finances do not consistently report on SWFs. As a result, some of the available information about the size of certain of these funds consists of estimates made by private researchers. Based on a combination of data or estimates from these various sources, SWFs currently hold assets estimated to be valued from $2.7 trillion to $3.2 trillion. Several researchers expect SWFs to grow substantially in the coming years. In our analysis of the publicly available government sources and private researcher lists of SWFs, we identified 48 SWFs across 34 countries that met our criteria. These include funds from most regions of the world. Of the 48 SWFs we identified, 13 were in the Asia and Pacific region. Ten were located in the Middle East, with the remaining 25 spread across Africa, North America, South America, the Caribbean, and Europe. Some countries, such as Singapore, the United Arab Emirates, and the Russian Federation, have more than one entity that can be considered an SWF. Some SWFs have existed for many years, but recently a number of new funds have been created. For example, the Kuwait Investment Authority and the Kiribati Revenue Equalization Reserve Fund have existed since 1953 and 1956, respectively. The Kuwait Investment Authority was founded to invest the proceeds of natural resource wealth and provide for future generations in Kuwait, and the Kiribati Revenue Equalization Reserve Fund was formed to manage revenues from the sale of Kiribati’s phosphate supply. 
However, since 2000, many commodity- and trade-exporting countries have set up new SWFs. These funds have grown as a result of rising exports of commodities such as oil, whose prices have also risen. Of the 48 funds we identified, 28 have been established since 2000, and 20 of these can be classified as commodity funds that receive funds from selling commodities such as oil (see fig. 4). Based on our review of public disclosures from SWFs on government Web sites, we determined that the extent to which, and the level of detail in which, SWFs publicly report information on their sizes, specific holdings, or investment objectives varied. Based on these reviews, we found that 29 of 48 funds publicly disclosed the value of assets since the beginning of 2007. According to documents published by the countries, 17 of these 29 report asset figures that are subject to an annual audit by either an international public accounting firm or by the country's national audit agency. In total, 36 of the 48 funds provided publicly reported size estimates, though some date back to 2003. While most provided a specific value, 2 reported only a minimum value. Among the largest 20 funds, 13 publicly reported total assets. Of the funds in our analysis, 24 of 48 funds disseminated information on fund-specific sites and 21 used other government Web sites, such as those belonging to the finance ministry or central bank. Thirty funds reported at least some information on their investment activities. Of the largest 20 funds, 12 reported this information. Only 4 of the 48 funds fully disclosed the names of all the entities in which they have invested. The level of detail reported by the other funds varied. 
For example, 21 funds reported information about some of their investments, such as the names of their significant investments, while others disclosed only the regional breakdown of their holdings or gave only general statements about the types of assets and sectors in which they invested or planned to invest. These assets usually included equities, bonds, real estate, or other alternative investments. We found that about 77 percent of the 48 SWFs publicly reported the purpose of their funds, with 13 of the largest 20 funds doing so. In many cases, fund purposes included using the country’s financial or natural resources to earn investment returns intended to benefit the country in various ways, including providing income for future generations, balancing the government’s budget, or preserving foreign currency purchasing power. The information publicly reported about SWFs varies in part because of different disclosure requirements across countries. The nature, frequency, and content of any SWF information are reported at each country’s discretion. Some countries may restrict the type of reporting that can be released. For example, according to documents published by the government of Kuwait, Kuwaiti law requires the Kuwait Investment Authority to submit a detailed report on its activities to the Kuwaiti government authorities, but prohibits it from disclosing this information to the public. In contrast, according to Norwegian government documents, Norwegian law requires that the country’s SWF publicly release comprehensive and detailed information regarding its assets, objectives, and current holdings on a quarterly and annual basis. Some funds that are not required to disclose information have begun to do so voluntarily. For example, Temasek Holdings, an SWF located in Singapore, is not required by Singapore law to release financial performance reports, but it began doing so in 2004, according to a Temasek Holdings official. 
This official told us that each SWF operates in a different environment and must decide on the appropriate amount of transparency. The official said that since Temasek Holdings began publishing its annual review in 2004, it has disclosed more information each year. The extent to which other large classes of private investors disclose information about their assets and investments also varies. For example, hedge funds and private equity funds that are not investment companies under U.S. securities laws, whose investors are primarily institutions or high-net-worth individuals, are generally not required to publicly disclose information about their investment portfolios. In contrast, U.S. mutual funds are generally required to disclose certain information about their investment portfolios and portfolio transactions. While some SWFs disclose holdings information, officials of other SWFs expressed concerns that disclosures could be used by other investors in ways that could reduce the funds' investment earnings. International organizations collect and publish various statistics about countries' finances, but report only limited information on SWFs. Until recently, macroeconomic statistical standards lacked guidance on the treatment of SWFs, and there has been no systematic review of whether the assets of these funds are included in the data reported. IMF officials have initiated an effort to increase the amount and specificity of information on SWF activities in IMF documents. Currently, IMF members are expected to disclose a range of fiscal and macroeconomic statistics, including countries' balance of payments and their international investment position, that are made public in various IMF reports and in IMF's World Economic Outlook and Global Financial Stability Report. The data that countries report may include the level of reserves and the amount of external assets they hold. 
However, the coverage of SWFs in these statistics is not uniform, not least because SWFs can be included in different accounts depending on specific statistical criteria. According to IMF staff, the countries themselves determine whether to include the value of their SWF assets in their reserve assets or separately as external assets. In some cases, countries do not report any information about their SWF. Further, some member countries do not submit data on their international investment position to IMF. Analyzing a selection of 21 countries with SWFs, IMF staff found that only 11 included the value of their SWFs’ assets in either their balance of payments or international investment position data. IMF staff noted that members are not required to report the value of the SWF holdings as a separate line item and no member currently does so. In addition to information from required data reporting, IMF staff also collected some information about SWFs through their consultations with individual countries. IMF staff periodically hold policy discussions—called Article IV consultations from the section of the IMF rules that requires them—in member countries to monitor their economic developments and outlook and assess whether their policies promote domestic and external stability. According to IMF staff, Article IV staff reports are expected to focus strictly on those issues. We reviewed publicly available Article IV reports, or summary reports in several cases, for the 34 countries that we identified as having SWFs. Based on this analysis, we found information about the size of a country’s SWF in the Article IV reports or public summaries for 13 of these countries. 
The extent of the information on SWFs publicly reported from the Article IV consultations varied, with some documents only noting that the country had such a fund and others providing the current level of assets in the SWF and country officials' expectations for growth of the SWF through revenues or fiscal transfers. IMF is implementing changes to its reporting that could expand the official data available on SWF activities. IMF officials have stated that collecting additional data on SWFs is important because of the fiscal, monetary, and economic policy impacts that the funds could have for IMF member countries and for the global economy, given their increasing prevalence and growth. IMF expects to implement new reporting guidance in 2009 that would call for countries to separately report their SWF holdings on a voluntary basis. While this is a positive development that could further expand the official information available, its success depends on the degree to which countries participate. In addition, IMF is including guidance on how to properly classify SWF assets in its latest version of the balance of payments manual, which it expects to publish in late 2008. The current version of this manual was last updated in 1993 and does not address SWFs. In recognition of the growing number of SWFs, IMF officials told us that they began to address the methodological issues related to a definition of SWFs and SWF assets in 2005 and subsequently initiated an international consultation on the issue. IMF expects that this additional detail will clarify where SWF assets are held, since, depending on certain criteria, those assets can be reported as reserve assets, as external assets, or in other accounts of a country's financial accounts data. 
IMF notes that this new reporting item will help to facilitate proper identification of SWFs and may contribute to greater transparency and to a better understanding of their impact on the country’s external position and reserve assets. Treasury staff told us that they were involved in the group that considered these changes, and while the United States and some other countries would have preferred that the proposed SWF reporting be mandatory, the group chose the voluntary option. In addition, IMF is facilitating and coordinating the work of the International Working Group of Sovereign Wealth Funds that is deliberating on a set of Generally Accepted Principles and Practices relating to sovereign wealth funds. These are intended to set out a framework of sound operation for SWFs. The specific elements are likely to come from reviews of good SWF practices. These principles are also aimed at improving the understanding of SWFs in both their home countries and recipient countries. Other organizations involved in monitoring international financial developments do not regularly report on SWF activities. For example, the Organisation for Economic Co-operation and Development (OECD), an organization of 30 countries responsible for supporting global economic growth and trade, collects data on foreign direct investment inflows and outflows and national accounts information from its member countries and some selected nonmember countries. These data, however, do not specifically identify SWFs. Further, only 9 countries included in OECD’s surveys are known to have SWFs. Many countries with SWFs are not members of OECD. A recent document from OECD indicates that it is using other publicly available sources of data to estimate the size and asset allocation of selected SWFs. 
Recognizing the growing importance of SWFs, Group of Seven finance ministers and central bank governors, including officials from Treasury, proposed that OECD develop investment policy guidelines for countries receiving SWF investment. According to recent OECD publications, the organization is focusing its SWF work on these guidelines. Because not all SWFs disclose their activities publicly, information about the size of some SWFs comes from estimates published by private researchers, including investment bank researchers and nonprofit research and policy institutions. Though their methodologies and data sources vary, these researchers generally begin by reviewing publicly available data and statements from national authorities of SWF countries, press releases, and other research reports. In some cases, they use confidential data, such as information provided to their firms' trading staffs by SWF officials or private conversations with their firms' investment managers. However, they are usually prohibited from publicly disclosing the sources of information received confidentially. At least one researcher we spoke with also used certain IMF balance of payments and World Economic Outlook statistics as proxies for SWF outflows and purchases. The researchers often make projections of the level of foreign reserves, commodity prices, amounts of transfers from reserves to the SWF, and the assumed rate of return for the fund to develop judgmental estimates of the current size of assets held by SWFs. These estimates have been published as part of the researchers' analyses of trends affecting world financial markets. The researchers we spoke with acknowledged that the accuracy of their estimates is primarily limited by the sparse official data on SWFs. 
Researchers also cited other limitations, including difficulty in verifying underlying assumptions, such as the level of transfers from SWFs or the projected rates of return of the funds, and questionable accuracy and validity of data they use from secondary sources to support their models. By analyzing the information reported by individual SWFs, IMF data, and private researchers' estimates, we found that the total assets held by the 48 SWFs we identified are estimated to be from $2.7 trillion to $3.2 trillion (see app. II for the list of funds). Many of these estimates were published in the last year, prior to the significant rise in oil prices in the first half of 2008. The largest funds held the majority of these assets, with the largest 20 funds representing almost 95 percent and the largest 10 having more than 80 percent of the total SWF assets. The largest 20 funds had assets estimated to range from $2.5 trillion to $3.0 trillion, as shown in figure 5. Although SWFs have sizeable assets, their assets are a small portion of overall global assets and are less than the assets of several other large classes of investors. The estimated total size of the SWFs we identified, $2.7 trillion to $3.2 trillion, constituted about 1.6 percent of the estimated $190 trillion of financial assets outstanding globally as of the end of 2006. The estimated SWF holdings we identified likely exceed those of hedge funds, which most researchers estimated to be about $2 trillion. However, according to an estimate by the consulting firm McKinsey Global Institute, assets in pension funds ($28.1 trillion) and mutual funds ($26.2 trillion) exceed those of SWFs by a large margin. SWF assets are expected to grow significantly, with researchers projecting totals of between $5 trillion and $13.4 trillion by 2017. IMF staff estimated that in the next 5 years assets in SWFs will grow to between $6 trillion and $10 trillion. 
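The magnitudes above can be checked with simple arithmetic. The sketch below uses the dollar figures cited in this section, while the projection inputs (annual inflows and rate of return) are purely illustrative assumptions, not the researchers' actual models:

```python
# Back-of-the-envelope checks on the asset figures cited above.
# The projection inputs (annual inflow, rate of return) are assumptions
# for illustration; published projections rest on proprietary models.

GLOBAL_ASSETS = 190e12               # ~$190 trillion globally, end of 2006
SWF_LOW, SWF_HIGH = 2.7e12, 3.2e12   # estimated range of total SWF assets

# Midpoint share of global financial assets (the report cites about 1.6%).
share = (SWF_LOW + SWF_HIGH) / 2 / GLOBAL_ASSETS
print(f"SWF share of global financial assets: {share:.1%}")

def project(initial, annual_inflow, annual_return, years):
    """Compound assets forward: add inflows each year, then apply return."""
    assets = initial
    for _ in range(years):
        assets = (assets + annual_inflow) * (1 + annual_return)
    return assets

# Varying the assumptions over nine years (roughly 2008-2017) spans a
# wide range, consistent with the spread of published projections.
low = project(3.0e12, 0.20e12, 0.03, 9)    # modest inflows and returns
high = project(3.0e12, 0.70e12, 0.08, 9)   # strong oil revenues/returns
print(f"Illustrative 2017 range: ${low/1e12:.1f} to ${high/1e12:.1f} trillion")
```

The wide spread produced by even these crude assumptions illustrates why published projections vary so much: small differences in assumed inflows and returns compound substantially over a decade.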
The variation in estimates largely reflects researchers’ use of different methods and assumptions about future economic conditions. Though their methodologies varied, each of these researchers generally used several common factors, the most common being changes in oil prices. Several researchers stated that if oil prices rise higher than their projections, revenues going to oil-based SWFs will likely increase and the assets could grow beyond currently estimated levels. Other factors include growth in foreign exchange reserves, amount of transfers from surpluses to SWFs, persistence in trade imbalances, the rate of return or performance of an SWF, and variation in exchange rate regimes across SWF countries. BEA and Treasury are charged with collecting and reporting information on foreign investment in the United States, but the extent to which SWFs have invested in U.S. assets is not readily identifiable from such data. Only a few SWFs reported specific information on their U.S. investments. Some individual SWF investments in U.S. assets can be identified from reports filed by investors and issuers as required by U.S. securities laws, but these filings would not necessarily reflect all such investments during any given time period. Further, some private data collection entities also report information on specific transactions by SWFs, but these also may not capture all activities. Two U.S. agencies, Treasury and Commerce’s BEA, collect and report aggregate information on foreign investment in the United States that includes SWF investments. To provide information to policymakers and to the public, Congress enacted the International Investment Survey Act of 1976 (subsequently broadened and redesignated as the International Investment and Trade in Services Survey Act [International Investment Survey Act]), which authorizes the collection and reporting of information on foreign investment in the United States. 
The act requires that a benchmark survey of foreign direct investments and foreign portfolio investments in the United States be conducted at least once every 5 years. Under this authority, BEA collects data on foreign direct investment in the United States, defined as the ownership of 10 percent or more of a business enterprise. BEA collects the data on direct investment in the United States by both public and private foreign entities, which by definition would generally include SWFs, by surveying U.S. companies regarding foreign ownership. The data are used to calculate U.S. economic accounts, including the U.S. international investment position data. Treasury collects data on foreign portfolio investment in the United States, defined as foreign investments that are not foreign direct investments. Treasury collects the data through surveys of U.S. financial institutions and others. These surveys collect data on ownership of U.S. assets by foreign residents and foreign official institutions. Officials from these agencies use these data in computing the U.S. balance of payments accounts and the U.S. international investment position and in the formulation of international economic and financial policies. The data are also used by agencies to provide aggregate information to the public on foreign portfolio investments, including reporting this information periodically in the monthly Survey of Current Business. SWF investment holdings are included in the foreign investment data collected by Treasury and BEA, but cannot be specifically identified because of data collection limitations and restraints on revealing the identity of reporting persons and investors. BEA’s foreign direct investment data are published in the aggregate and do not identify the owner of the asset. BEA also aggregates the holdings of private and government entities for disclosure purposes. As a result, the extent to which SWFs have made investments of 10 percent or more in a U.S. 
business, while included as part of the foreign direct investment total, cannot be identified from these data. In its portfolio investment data collection and reporting, Treasury separates foreign official portfolio investment holdings, which include most SWFs, from foreign private portfolio investment. However, the information that is reported to Treasury does not include the specific identity of the investing organization; thus the extent of SWF investment within the overall foreign official holdings data cannot be identified. In addition, Treasury officials reported that some SWF investments may be classified as private if the investments are made through private foreign intermediaries, such as investment banks, or if an SWF is operated on a subnational level, such as by a state or a province of a country, as those types of organizations are not included in Treasury's definition of official government institutions. Both BEA and Treasury stated that the data published do not include the identity of specific investors and are aggregated to ensure compliance with the statutory requirement that collected information not be published or disclosed in a manner in which a reporting person can be identified. Figure 6 illustrates how data on SWF investments are included but are not specifically identifiable in the data collected or reported by these agencies. The data BEA and Treasury collect include the total amount of foreign direct investment and the total amount of portfolio investment by foreign official institutions, but the extent of SWF investments in either category cannot be determined. The data collected on both direct and portfolio investments are used by BEA in computing the U.S. international investment position, published annually in its July issue of the Survey of Current Business. The U.S. 
international investment position data show that foreign investors, including individuals, private entities, and government organizations, owned assets in the United States in 2007 valued at approximately $20.1 trillion. As shown in figure 6, foreign direct investment, which includes all direct investments by SWFs, totaled $2.4 trillion in 2007 (shown in the line “Direct investment at current cost”). This is up from $1.4 trillion in 2000. Foreign official portfolio investment holdings, which include SWF investments, totaled $3.3 trillion in 2007 (shown in the line “Foreign official assets in the United States”). This is up from $1 trillion in 2000. BEA officials stated that within this total many of the SWF portfolio investment holdings are classified as “Other foreign official assets,” since this subcategory reflects transactions by foreign official agencies in stocks and bonds of U.S. corporations and in bonds of state and local governments. These reported holdings totaled roughly $404 billion in 2007. This shows an increase from $102 billion in 2000. To the extent that SWFs are invested in U.S. government securities, those holdings are included in the “U.S. Treasury securities” or “Other” subcategories of “U.S. government securities” under “Foreign official assets in the United States.” Bank accounts or money market instruments held by SWFs are included in “U.S. liabilities reported by U.S. banks” under “Foreign official assets in the United States.” While BEA and Treasury data cannot be used to identify the total extent of SWF investment, these data show that the United States has been receiving more investment over time from countries with SWFs and from foreign official institutions. BEA data on foreign direct investment by country show an increase in foreign direct investment in the United States in recent years from countries with SWFs. 
These investments would include those made by private individuals and businesses and by any government entities in those countries—including their SWFs. As figure 7 illustrates, foreign direct investment holdings from countries with SWFs have increased from $173 billion in 2000 to roughly $247 billion in 2006. Although the exact extent cannot be determined, some of this increase is likely from SWF investments. Similarly, Treasury data show that portfolio investment from foreign official institutions, which could include SWFs, has increased in all asset classes, including equities. Treasury’s portfolio investment by asset class data show that in 2007, approximately $9.1 trillion of all U.S. long-term securities were foreign owned, and of that, $2.6 trillion were held by official institutions. The $2.6 trillion in foreign official securities holdings includes almost $1.5 trillion in U.S. Treasury debt and $0.3 trillion in U.S. equities. While official institutions owned only a small share of the total amount of foreign-owned U.S. equities, their investment has grown dramatically, increasing from $87 billion in 2000 to $266 billion in 2007. (See fig. 8.) Treasury officials reported that the recent rise in U.S. equity ownership by foreign official institutions in the United States may reflect investments from SWFs, since SWFs are intended as vehicles to diversify a country’s reserves into alternative assets. Although BEA and Treasury may be able to adjust their data collection activities to obtain more information about SWF investment, such changes may not result in more detailed public disclosure and will entail increased costs, according to agency officials. BEA officials told us that they may be able to use the information they currently collect to differentiate between foreign official and private owners of direct investment holdings in the data collected by utilizing the ultimate beneficial owner codes assigned to transactions in their surveys. 
They stated that they are considering reporting the information in this manner. This breakout would help to narrow the segment of foreign direct investment that contains SWF investments, but it would not identify SWF investment specifically since it would still combine SWF investments with other official government investments. Regarding portfolio investment, Treasury officials reported that while more detailed information would allow them to report SWF investment separately from that of other foreign official institutions, Treasury would be prohibited from releasing such information publicly in cases where it would make the foreign owners easily identifiable. According to a Treasury official, some business groups advised Congress during the initial passage of the International Investment Survey Act that disclosure of their transactions and holdings in foreign countries would adversely affect their companies. If foreign companies in the United States share that view, then removing the disclosure restriction might make foreign investors less likely to invest in the United States, and they might seek investments in countries with less stringent disclosure requirements. In addition, the official said that collecting additional information on foreign investment would increase costs for Treasury as well as for reporting entities. A representative of one financial institution that is a reporting entity told us that the institution would need to make changes to its internal reporting systems in order to provide more identifying information on investors, but the official could not estimate the total costs of doing so. Some information on specific SWF investments in U.S. assets can be determined from disclosures made by SWFs themselves and from private data sources. A limited number of SWFs publicly disclose information about their investment activities in individual countries, including the United States. 
Based on our review of disclosures made on SWF or national government Web sites, we found that 16 of the 48 SWFs we identified provided some information on their investment activity in the United States. The amount of detail varied from a complete listing of all asset holdings of a fund to only information about how investments are allocated by location. For example, Norway’s SWF publishes a complete list of its holdings, which indicated that as of year-end 2006, its fund held positions in over 1,000 U.S. companies, valued at over $110 million. In contrast, the disclosures made by Kuwait’s fund did not identify its investments, but stated that the fund invests in equities in the United States and Canada; the fund did not provide information on total asset size or dollar values or identify specific investments. One of Singapore’s SWFs disclosed the identity of a few of its key holdings, including U.S.-based investments. Seven other funds also reported information on their investment holdings; however, none of these noted any U.S. investments. Disclosure reports required to be filed with SEC by both investors and issuers of securities are a source of information on individual SWF transactions in the United States, but only for those transactions that meet certain thresholds. Any investor, including SWFs, upon acquiring beneficial ownership of greater than 5 percent of a voting class of an issuer’s Section 12 registered equity securities, must file a statement of disclosure with SEC. The information required on this statement includes identifying information, including citizenship; the securities being purchased and the issuer; the source and amount of funds used to purchase the securities; the purpose of the transaction; the number of shares acquired, and the percent of ownership that number reflects; and the identification of any contracts, arrangements, understandings, or relationships with respect to securities of the issuer. 
When there are changes to the level of ownership or amount of securities held, the investor is generally required to file an amendment. Investors taking passive ownership stakes in the same equities, meaning they do not intend to exert any form of control, may qualify to file a less-detailed statement. SEC also requires disclosure for investors, including SWFs, whose beneficial ownership of a class of voting equity securities registered under Section 12(b) or 12(g) of the Securities Exchange Act of 1934 exceeds 10 percent. Under Section 16(a) of the Securities Exchange Act of 1934, within 10 calendar days of becoming a more than 10 percent beneficial owner, an investor must file an initial report disclosing the amount of the investor’s beneficial ownership of any equity securities of the issuer, whether direct or indirect. In addition, for as long as investors remain more than 10 percent beneficial owners, they must file reports of changes in beneficial ownership with the SEC within two business days of the transaction resulting in the change in beneficial ownership. This requirement applies to sales and additional purchases of any equity securities of the same issuer. Certain beneficial ownership filings required under U.S. securities laws offer some information about SWF activities, but cannot be used to determine the full extent of SWF transactions in the United States. For example, any transaction involving a purchase resulting in total beneficial ownership of 5 percent or less of a voting class of an issuer’s Section 12 registered equity securities would not have to be disclosed under the federal beneficial ownership reporting rules and otherwise may not necessarily have to be disclosed. Thus, the SEC data would most likely not include information on SWF investments under this threshold. 
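To illustrate, the threshold logic described above can be sketched as a simple decision function. This is a rough illustration only: the function name and return labels are invented here, and the actual rules include exemptions, passive-investor qualifications, and timing requirements that are not modeled.

```python
def required_filings(pct_owned, passive=False):
    """Rough sketch of the beneficial-ownership disclosure thresholds
    described above. Simplified and illustrative; the actual SEC rules
    contain many qualifications, exemptions, and deadlines."""
    filings = []
    if pct_owned > 5:
        # Greater than 5 percent of a voting class of Section 12
        # registered equities triggers a beneficial ownership statement;
        # qualifying passive investors may file a less-detailed version.
        filings.append("passive beneficial ownership statement" if passive
                       else "detailed beneficial ownership statement")
    if pct_owned > 10:
        # More-than-10-percent owners must also report initial ownership
        # under Section 16(a), then report subsequent changes.
        filings.append("Section 16(a) ownership report")
    return filings

print(required_filings(4.9))               # [] -- below every threshold
print(required_filings(7.0, passive=True))
print(required_filings(12.0))
```

The first call returns an empty list, which mirrors the point made above: acquisitions resulting in total beneficial ownership of 5 percent or less generate no beneficial ownership filing and so are invisible in this data source.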
In addition, although the filing of these reports is mandatory for all investors who meet the requirements, SEC staff told us that without conducting a specific inquiry, their ability to determine whether all qualifying investments have been disclosed, including any by an SWF, may be limited. To identify nonfilers, the SEC staff told us that they sometimes use sources such as public comments and media reports. SEC has not brought a public action against an SWF for violating these beneficial ownership reporting requirements. Finally, given that these filings are primarily used to disclose ownership in the securities of specific issuers, the information is not compiled or reported by SEC in any aggregated format. Thus, identifying SWF transactions requires searching by issuer or SWF name, and SEC staff noted, for example, that identifying such transactions can be difficult because some SWF filers may have numerous subsidiaries under whose names they might file a report. Information about some SWF investments in U.S. issuers can be identified in certain filings made under the federal securities laws. As a result of the recent interest in SWF activities, SEC staff analyzed such filings to identify transactions involving SWFs. To identify specific transactions, they searched for filings by known SWFs and also reviewed filings from countries with SWFs to identify SWF filings. According to their analysis, since 1990 eight different SWFs have reported to SEC ownership of over 5 percent, covering 147 transactions in 58 unique issuers. SEC staff told us that their analysis likely reflects only some of the SWF investments in U.S. issuers. The federal securities laws also require U.S. public companies to file reports on their financial condition, which can also reveal data on some SWF investments in the United States below the greater than 5 percent threshold for investor disclosures. 
Companies with publicly traded securities are required to publicly disclose events they deem material to their operations or financial condition, such as the acquisition of assets or the resignation of officers. In some cases, U.S. companies have made these filings to announce that they have received investments from an SWF, including investments that did not exceed the 5 percent threshold and thus did not require a beneficial ownership filing by the SWF. For example, Citigroup filed a report outlining a transaction involving an SWF investment that was below the 5 percent ownership stake but the SWF investment was still deemed material to the financial condition of the company. Some of the U.S. companies that recently received SWF investments also included information about these transactions in their annual reports. Private data collection entities compile and report information on specific SWF transactions, including those captured by SEC filings, but do not capture all SWF transactions. A number of private firms collect and distribute information relating to financial transactions. For example, database companies, such as Dealogic and Thomson Reuters, collect information globally on financial transactions, including mergers and acquisitions, for users such as investment banks, rating agencies, and private researchers. To compile their databases, they use public filings (including SEC filings), press releases, news announcements, and shareholder lists, as well as information from relationships with large financial intermediaries, such as investment banks and attorneys. Therefore, information in these databases includes transactions that can be identified through SEC filings but also may include additional transactions that are not disclosed through U.S. securities laws but are identified in other ways, such as through company press statements or discussions with parties to the transaction.
However, these data may not be complete, and the database companies cannot determine to what extent they capture all SWF transactions. For example, officials at these companies told us that they will most likely miss smaller transactions, consisting of acquisitions resulting in aggregate beneficial ownership of 5 percent or less, or unannounced relatively small dollar deals. Since many SWFs have historically taken noncontrolling interests in U.S. companies with total ownership often below 5 percent, the number of transactions not captured could be large. In addition, a transaction completed by a subsidiary of an SWF may not be identified as an SWF investment. We reviewed the information collected by Dealogic on investments made by SWFs in foreign countries, otherwise known as cross-border transactions. Based on this, the United States has, since 2000, attracted the largest volume of cross-border SWF investment, with announced deals totaling approximately $48 billion. Roughly $43 billion of this value reflects investment since 2007, largely consisting of deals involving financial sector entities. These deals comprised 8 of the top 10 announced SWF investments in the United States since 2000 (see table 1). These large SWF-led investments into financial sector entities came at a time when firms were facing large losses and asset write-downs due to the subprime mortgage crisis of 2007. The investments were seen as positive events by some market participants because they provided much-needed capital. (See app. III for a summary of some of these transactions.) According to Dealogic, Switzerland, China, and the United Kingdom were also major targets of cross-border SWF investments since 2000 (see fig. 9). Dealogic data also show that announced cross-border investments led by SWFs worldwide have risen dramatically since 2000, both in terms of number of deals and total dollar volume (see fig. 10). Transactions targeting the United States have also risen sharply, due in part to favorable market conditions for foreign investors. In 2005, only one announced U.S. transaction totaling $50 million was reported by Dealogic. In contrast, nine transactions were reported in 2007, totaling $28 billion. As of June 2008, four transactions in the United States have been announced for the year, totaling almost $20 billion. As shown in figure 10, global cross-border SWF investment increased from $429 million in 2000 to almost $53 billion in 2007. However, these transactions, which have totaled about $119 billion since 2000, represent a small portion of the overall reported assets of SWFs, which, as noted previously, were estimated to be from $2.7 trillion to $3.2 trillion. This illustrates how much of the investment held by SWFs is not generally identifiable in existing public sources, unless SWFs themselves disclose comprehensive data on their asset holdings. We requested comments on a draft of this report from Treasury, Commerce, and SEC. In a letter, Treasury’s Deputy Assistant Secretary for International Monetary and Financial Policy indicated that our report was timely and valuable and underscored the importance of developing a better appreciation of the systemic role of SWFs, and also said that they generally agreed with the conclusions in the report. The Deputy Assistant Secretary’s letter stated that Treasury has been a leader in the international community’s efforts, including in the multilateral IMF-facilitated effort to develop generally accepted principles and practices for SWFs. Implementation of such practices will hopefully foster a significant increase in the information provided by SWFs. (Treasury’s letter is reproduced in app. V.) In Commerce’s letter, the Undersecretary for Economic Affairs stated that our report was a useful and timely contribution to the existing literature on this highly debated and complex subject.
(Commerce’s letter is reproduced in app. VI.) In SEC’s letter, the Director of the Office of International Affairs reiterated that the disclosure requirements under U.S. securities laws regarding concentrations and change of control transactions apply equally to SWFs and to other large investors. (SEC’s letter is presented in app. VII.) Treasury, Commerce and SEC also provided technical comments, which we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies to other interested Members of Congress; the Secretaries of Treasury and Commerce; and the Commissioner of the SEC. We will also make copies available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact either Yvonne Jones at (202) 512-8678 or JonesY@gao.gov, or Loren Yager at (202) 512-4128 or YagerL@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. Our objectives in this report were to examine (1) the availability of data on the size of sovereign wealth funds (SWF) and their holdings internationally that have been publicly reported by SWFs, their governments, international organizations, or private organizations, and (2) the availability of reported data by the U.S. government and other sources on SWF investments in the United States. To identify SWFs and to develop criteria for selecting the funds to include in our analysis, we reviewed the definitions of SWF and the lists of such funds that have been compiled by U.S. and international agencies, financial services firms, and private researchers. 
The funds of most interest to policymakers were those that are separate pools of capital without underlying liabilities or obligations to make payments in the near term. SWFs have raised concerns over their potential to use their investments for noneconomic purposes. As a result, we chose to include in our analysis those funds that (1) were government chartered or sponsored investment vehicles; (2) invested, in other than sovereign debt, some or all of their assets outside the country that established them; (3) were funded through transfers from their governments of funds arising primarily from sovereign budget surpluses, trade surpluses, central bank currency reserves, or revenues from the commodity wealth of the countries; and (4) were not currently functioning as pension funds receiving contributions from and making payments to individuals. We included government-chartered or government-sponsored entities that invest internationally because such entities raise concerns over whether the controlling government will use their funds to make investments to further national interests rather than solely to earn financial returns. Entities that are funded primarily through trade surpluses or natural resource wealth would also seem to be more vulnerable to pressure to make noneconomic investments than entities funded through employee contributions or other nonwindfall sources. We excluded internationally active pension funds that are receiving contributions from or making benefit payments to individuals because these funds generally have specific liabilities, unlike SWFs, which are not encumbered by such near-term obligations and thus are not as likely to make noneconomic investments. We also excluded investment funds that invested only in the sovereign debt of other nations. Such an investment strategy is an approach that central banks have traditionally taken and, although widely debated, has not generally raised control issues.
SWF investments in the equity securities of commercial firms in other countries may be viewed as creating the potential for actions of a noncommercial nature that could be detrimental to another country's economy. In order to determine our final list of SWFs, we independently examined each fund on a compiled list of unique funds that others claimed were SWFs. We verified that our above criteria were met using national authorities and International Monetary Fund (IMF) data sources. To begin our analysis, we reviewed lists of SWFs generated from seven different private data sources and IMF. These sources, included in publications from Deutsche Bank, Goldman Sachs, JPMorgan, Morgan Stanley, the Peterson Institute, RGE Monitor, Standard Chartered Bank, and IMF, were then used to prepare a comprehensive list of 258 possible SWFs. Since the names of the funds varied depending on the source, we manually matched the sources based on the fund name, inception year, and fund size to obtain a list of 81 unique funds. After compiling the list of unique funds, we attempted to verify the extent to which the funds met our four criteria. We did this by having at least two analysts reach agreement on whether the criteria were met upon review of several data sources. We used official governmental source data, such as information from the country’s central bank, finance ministry, or other government organization or from the fund’s Web site. If official government source information was unavailable, we used the country’s Article IV consultation report or other documents reported to IMF as a secondary source. Those funds that did not meet any one of our four specified criteria were excluded from our analysis. In cases where we could verify some but not all of our criteria from national authority or IMF sources, we attempted to validate the remaining criteria using private or academic sources. 
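The matching step described above, collapsing 258 entries from multiple source lists into unique funds by name, inception year, and fund size, was performed manually. A programmatic equivalent, using invented field names and entries purely for illustration, might look like the following sketch:

```python
def dedupe_funds(entries):
    """Collapse fund entries drawn from multiple source lists into unique
    funds, keyed on a normalized name plus inception year. Illustrative
    only: the report's matching was done manually and also compared
    reported fund sizes."""
    unique = {}
    for entry in entries:
        # Normalize the name so trivial spelling/spacing differences
        # across source lists collapse to one key.
        key = (entry["name"].strip().lower(), entry["inception_year"])
        unique.setdefault(key, entry)
    return list(unique.values())

# Hypothetical entries as they might appear across different source lists.
entries = [
    {"name": "Government Pension Fund - Global", "inception_year": 1990,
     "source": "Source A"},
    {"name": "government pension fund - global ", "inception_year": 1990,
     "source": "Source B"},
    {"name": "Kuwait Investment Authority", "inception_year": 1953,
     "source": "Source C"},
]
print(len(dedupe_funds(entries)))  # 2 unique funds
```

In practice fund names varied considerably across sources, which is why the report's analysts also compared inception year and fund size rather than relying on name matching alone.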
Analyst judgment to include or exclude a fund was employed in some cases where all criteria could not be validated. Of the 48 funds selected for our analysis, we were able to verify all four criteria in 60 percent of cases. For all of the 48 funds we were able to verify two or more of our four criteria using a mix of various sources. We encountered some limitations to our independent verification of national authorities’ source data. These limitations include Web searches only being conducted in English, Web sites being under construction, and Web sites being incompletely translated from the original source language to English. In these cases, we located speakers of the languages in question and assisted them in conducting searches for SWF information on the national government Web sites in the country’s language. If our team members who were knowledgeable of the foreign language found relevant information, we asked them to translate the information to English and provide us with a written copy. We used this translated information to verify our criteria for 10 funds. Languages needing translation were Arabic, French, Portuguese, and Spanish. If we found sources in English that were relevant for our purposes, we did not review the sources in their original languages. To determine the availability of data on the size and other characteristics of SWFs that were reported by SWFs, their governments, international organizations, or private sources, we reviewed documents produced by SWFs and Web sites sponsored by SWFs. We also reviewed studies of SWFs done by investment banks and private research firms. For some recently established funds, we reported the initial capitalization of the fund if the market value of the fund was not available. We used a private researcher estimate for one fund that only reported a minimum value and for two funds where private researcher data appeared to be more recent than those of national authorities. 
We interviewed officials from two SWFs, investment banks, finance and trade associations, a private equity group, IMF, and others. To determine the availability of data on SWF investments in the United States reported by the U.S. government and others, we reviewed the extent to which federal data collection efforts of the Departments of the Treasury (Treasury) and Commerce (Commerce) and the Securities and Exchange Commission (SEC) were able to report on SWF activities. We interviewed officials from Commerce and Treasury, SEC, the Federal Reserve Bank of New York, two financial data companies, a law firm, several private researchers, and other organizations. We analyzed data on SWF cross-border transactions in the United States and other countries obtained from Dealogic, which designs, develops, and markets a suite of software, communications, and analytical products. We assessed the procedures that Dealogic uses to collect and analyze data and determined that the data were sufficiently reliable for our purposes. To identify the transactions, we worked with Dealogic to develop a query that would extract all transactions with an SWF as the acquirer of the asset and where the asset resided in a country other than that of the SWF. However, Dealogic does not capture all SWF transactions. Because of its reliance on public filings, news releases, and relationships with investment banks, Dealogic may not capture low-value transactions that are not reported publicly. We also reviewed public filings, obtained through SEC’s Electronic Data Gathering, Analysis, and Retrieval (EDGAR) database, of selected U.S. companies that received major SWF investments in 2007 and 2008. Because acquisitions resulting in total beneficial ownership of 5 percent or less of a voting class of a Section 12 registered equity security will not be reported to SEC, these data sources capture only a proportion of the total U.S. SWF investments. 
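The query logic described above, extracting all transactions with an SWF as the acquirer where the target asset resides outside the SWF's home country, amounts to a simple filter. The sketch below uses invented record layouts and fund names; Dealogic's actual schema and query interface differ.

```python
def cross_border_swf_deals(deals, swf_home):
    """Select deals where the acquirer is a known SWF and the target's
    country differs from the SWF's home country. Record layout is
    illustrative; Dealogic's actual schema is proprietary."""
    return [
        d for d in deals
        if d["acquirer"] in swf_home                       # acquirer is an SWF
        and d["target_country"] != swf_home[d["acquirer"]] # asset is abroad
    ]

# Hypothetical fund-to-country mapping and deal records.
swf_home = {"Fund A": "Country X", "Fund B": "Country Y"}
deals = [
    {"acquirer": "Fund A", "target_country": "United States"},       # cross-border
    {"acquirer": "Fund B", "target_country": "Country Y"},           # domestic: excluded
    {"acquirer": "Private Firm", "target_country": "United States"}, # not an SWF
]
print(len(cross_border_swf_deals(deals, swf_home)))  # 1
```

As the report notes, a filter of this kind inherits the database's gaps: a deal completed by an SWF subsidiary under another name, or never publicly announced, would not match.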
We also spoke with officials from the Board of Governors of the Federal Reserve System, Commerce, the Department of Defense, the Department of State, the Federal Reserve Bank of New York, Treasury, IMF, SEC, and the U.S. Trade Representative. We attended hearings on SWFs before the Senate Committee on Banking, Housing, and Urban Affairs; the Senate Committee on Foreign Relations; the House Committee on Foreign Affairs; the House Committee on Financial Services; the Joint Economic Committee; and the U.S.-China Economic and Security Review Commission. We met with officials representing investment funds or SWFs from Dubai, Norway, and Singapore. To better understand the context behind SWFs, we interviewed industry and trade associations, a legal expert, investment banks, private researchers, and others who have experience in international finance, trade, and foreign investment issues. We conducted this performance audit from December 2007 through August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. [A table listing SWFs and their fund sizes in billions of dollars, including the Government Pension Fund-Global, the Kuwait Investment Authority (KIA), the Severance Tax Permanent Fund (New Mexico), the Permanent Mineral Trust Fund (Wyoming), and the Fonds des générations (Québec), is not reproduced here.] The IMF data contain information from Article IV consultation reports, IMF staff reports, and a memorandum of understanding. The date range for the IMF data is from September 2006 through 2007. Private researcher data contain information from reports from Deutsche Bank, Goldman Sachs, JPMorgan, Morgan Stanley, the Peterson Institute, Standard Chartered Bank, and RGE Monitor.
The publication dates for these reports range from September 10, 2007 through May 22, 2008. For Gabon, Singapore, and Vietnam, private researcher estimates were used instead of the national authority sources because the private researchers appeared to provide more up-to-date estimates of the size of the funds. São Tomé and Príncipe had a balance of $100 million as of September 1, 2006. We report a zero balance due to rounding. In 2007 and early 2008, SWFs, in conjunction with other investors, supplied almost $43 billion of capital to major financial firms in the United States. Citigroup was the major recipient of capital, receiving $20 billion in late 2007 and early 2008. The other recipients were Merrill Lynch, Morgan Stanley, the Blackstone Group, and the Carlyle Group. Below is a timeline of these transactions and their history. Some government organizations, an international financial institution, investment banks, and private research organizations have published reports on SWFs that offer explicit definitions of SWFs or lists of SWFs. Those that propose definitions of SWFs have not come up with one commonly accepted definition. Varying characteristics (ownership, governance, funding sources, and investment strategies, among others) are used to characterize SWFs and include or exclude funds from SWF lists. Treasury defines SWFs as government investment vehicles funded by foreign exchange assets that are managed separately from official reserves. They seek higher rates of return and may be invested in a wider range of asset classes than traditional reserves. Treasury says that SWFs generally fall into two categories based on the source of their foreign exchange assets: commodity and noncommodity funds. Treasury has not released a list of SWFs. IMF defines SWFs as government-owned investment funds set up for a variety of macroeconomic purposes. They are commonly funded by the transfer of foreign exchange assets that are invested long term and overseas.
SWFs are a heterogeneous group and may serve multiple, overlapping, and changing objectives over time: as stabilization funds to insulate the budget and economy against commodity price swings; as savings funds for future generations; as reserve investment corporations established to increase the return on reserves; as development funds to help fund socioeconomic projects or promote industrial policies; or as contingent pension reserve funds to provide for unspecified pension liabilities on the government’s balance sheet. IMF researchers have published a list of SWFs. This relatively broad definition allows for inclusion of a Saudi Arabian investment fund managed from the central bank. Some investment bank reports have offered definitions of SWFs. One states that SWFs are broadly defined as special government asset management vehicles that invest public funds in a wide range of financial instruments. Unlike central banks, which focus more on liquidity and safekeeping of foreign reserves, most SWFs have the mandate to enhance returns and are allowed to invest in riskier asset classes, including equity and alternative assets, such as private equity, property, hedge funds, and commodities. This bank does publish a list of SWFs. It says that it is not always easy to differentiate between pure SWFs and other forms of public funds, such as conventional public sector pension funds or state-owned enterprises. Another investment bank defines SWFs as having five characteristics: (1) sovereign, (2) high foreign currency exposure, (3) no explicit liabilities, (4) high-risk tolerance, and (5) long investment horizon. Similar to SWFs are official reserves and sovereign pension funds. Another investment bank says that SWFs are vehicles owned by states that hold, manage, or administer public funds and invest them in a wider range of assets of various kinds. 
SWFs are mainly derived from excess liquidity in the public sector stemming from government fiscal surpluses or from official reserves at central banks. They are of two types—either stabilization funds to even out budgetary and fiscal policies of a country or intergenerational funds that are stores of wealth for future generations. SWFs are different from pension funds, hedge funds, and private equity funds. SWFs are not privately owned. This investment bank researcher does offer a list of SWFs. Some non-investment-bank private researcher reports have offered definitions of SWFs. One researcher says that SWFs are a separate pool of government owned or government-controlled assets that include some international assets. The broadest definition of an SWF is a collection of government-owned or government-controlled assets. Narrower definitions may exclude government financial or nonfinancial corporations, purely domestic assets, foreign exchange reserves, assets owned or controlled by subnational governmental units, or some or all government pension funds. This researcher includes all government pension and nonpension funds to the extent that they manage marketable assets. This researcher does publish a list of SWFs. Another private research group defines an SWF as meeting three criteria: (1) it is owned by a sovereign government; (2) it is managed separately from funds administered by the sovereign government’s central bank, ministry of finance, or treasury; and (3) it invests in a portfolio of financial assets of different classes and risk profiles, including bonds, stocks, property, and alternative instruments, with a significant portion of its assets under management invested in higher-risk asset classes in foreign countries. 
This researcher thinks of SWFs as part of a continuum of sovereign government investment vehicles that runs along a spectrum of financial risk, from central banks as the most conservative and risk averse, to traditional pension funds, to special government funds, to SWFs, and finally to state-owned enterprises, which are the least liquid and are the highest-risk investments. This research group publishes a list of SWFs. In addition to the contacts named above, Cody Goebel, Assistant Director; Celia Thomas, Assistant Director; Patrick Dynes; Nina Horowitz; Richard Krashevski; Jessica Mailey; Michael Maslowski; Marc Molino; Omyra Ramsingh; and Jeremy Schwartz made major contributions to this report.
Future GAO reports will address laws affecting SWF investments, SWF governance practices, and the potential impact of SWFs and U.S. options for addressing them. Limited information is publicly available from official government sources for some SWFs. While some have existed for decades, 28 of the 48 SWFs that GAO identified have been created since 2000, primarily in countries whose foreign exchange reserves are growing through oil revenues or trade export surpluses. GAO analysis showed that about 60 percent of these 48 SWFs publicly disclosed information about the size of their assets since the beginning of 2007, but only about 4 funds published detailed information about all their investments, and some countries specifically prohibit any disclosure of their SWF activities. Although the International Monetary Fund (IMF) currently collects data on countries' international financial flows, GAO found that only 13 countries separately reported their SWF holdings in public IMF documents. IMF plans to issue new reporting guidance in 2009 that asks countries to voluntarily report the size of their SWF holdings in their international statistics. While this could increase the transparency of SWFs, its success depends on the extent to which countries participate. In the absence of official national or international public reporting, much of the available information about the value of holdings for many SWFs is from estimates by private researchers who project fund sizes by adjusting any reported amounts to reflect likely reserve growth and asset market returns. For the funds GAO identified, officially reported data and researcher estimates indicated that the size of these 48 funds' total assets was from $2.7 trillion to $3.2 trillion. Some researchers expect these assets to continue to grow significantly. U.S.
government agencies and others collect and publicly report information on foreign investments in the United States, but these sources have limitations and the overall level of U.S. investments by SWFs cannot be specifically identified. From surveys of U.S. financial institutions and others, Treasury and Commerce reported that foreign investors, including governments, private entities, and individuals, owned over $20 trillion of U.S. assets in 2007, but the amounts held by SWFs cannot be specifically identified from the reported data because either the agencies do not obtain specific investor identities or the agencies are precluded from disclosing individual investor information. GAO found that as many as 16 of the 48 SWFs reported some information on their U.S. investments. One reported all U.S. holdings, but others only identified a few specific investments or indicated that some of their total assets were invested in the United States. Some SWF investments can be identified in U.S. securities filings, under a requirement for disclosure of investments that result in aggregate beneficial ownership of greater than 5 percent of a voting class of certain equity securities. At least 8 SWFs have disclosed such investments since 1990. GAO analysis of a private financial research database identified SWF investments in U.S. companies totaling over $43 billion from January 2007 through June 2008, including SWF investments in U.S. financial institutions needing capital as a result of the 2007 subprime mortgage crisis. Additional U.S. reporting requirements would yield additional information for monitoring the U.S. activities of SWFs, although some U.S. officials have expressed concerns that they could also increase compliance costs for U.S. financial institutions and agencies and could potentially discourage SWFs from making investments in U.S. assets.
Amtrak was established by the Rail Passenger Service Act of 1970, after the nation’s railroads found passenger service unprofitable. It operates a 22,000-mile passenger rail system over 43 routes. (See fig. 1.) Amtrak owns 650 miles of track, primarily in the Northeast Corridor, which runs between Boston, Massachusetts, and Washington, D.C. It operates the remainder of its routes over tracks owned by freight railroads and pays these railroads for this access. From fiscal year 1971 through 2002, the federal government provided Amtrak with about $25 billion in federal operating and capital assistance, or an average of $807 million annually (in nominal dollars). Amtrak’s financial condition has been deteriorating and, in 2001, it experienced its largest-ever net loss (revenues less expenses), about $1 billion. In fiscal year 2001, only one of Amtrak’s 43 routes made enough revenue to cover its operating costs—the Metroliner/Acela Express service on the Northeast Corridor ($51 million). The other 42 routes had operating losses ranging from about $600,000 on the Vermonter (service between Washington, D.C., and Vermont) to $71.5 million on the Northeast Direct (generally service between Virginia and Massachusetts). (See app. I for the financial performance of all Amtrak routes.) Amtrak has changed its general approach to route and service actions over time, from attempting to improve its financial performance by cutting service to attempting to achieve the same result by increasing service. For example, in 1995, Amtrak eliminated 9 routes, truncated 3 routes, and changed the frequency of service on 17 routes. These actions were intended to cut costs by about $200 million while retaining 90 percent of revenues and 85 percent of passengers. Amtrak said the presumption was that passengers would use other trains to meet their travel needs, allowing it to retain most of its ridership and revenue. 
Although initially the route cutting actions had some financial success, subsequent financial results were below expectations because (1) management did not cut costs as planned, (2) less-than-daily service caused less efficient equipment usage and other unforeseen problems, and (3) passengers were no longer adjusting their travel plans to fit Amtrak’s new less-than-daily schedules. In 1998, Amtrak switched its strategy to increase revenues by expanding service. It used a market-based approach to identify the market for intercity passenger rail service. To do so, it used market research and computer-based models to determine the potential ridership, revenue, and costs of proposed route and service actions. According to Amtrak, this approach constituted a significant improvement in its route evaluation process because it represented the first comprehensive analysis of Amtrak’s route system, and the first attempt to apply rigorous financial analyses and modeling techniques to the design of Amtrak’s national network. The Network Growth Strategy was the first product of the market-based network analysis project that Amtrak initiated in October 1998 to address route evaluation deficiencies. The intent of the market-based network analysis was to (1) develop the financial tools that Amtrak needed to perform reliable and objective analyses of route and service changes; (2) help Amtrak achieve operational self-sufficiency by December 2002 by identifying route and service changes that, if implemented expeditiously, would produce positive financial impacts before the statutory deadline; and (3) express Amtrak’s vision of how its national network could be enhanced and improved. In December 1999, Amtrak’s board of directors adopted the Network Growth Strategy as part of Amtrak’s strategic business plan. The strategy consisted of 15 planned route and service actions, the majority involving the expansion of service. (See app. III.) 
Amtrak predicated the growth strategy on the acquisition of significant new revenue from hauling mail and express cargo and estimated that it would result in $65.6 million in net revenue through fiscal year 2002. In February 2000, Amtrak announced to Congress that it was going to implement the 15 routes in the Network Growth Strategy. Amtrak has been unsuccessful in implementing its Network Growth Strategy. About 2 years after announcing the Network Growth Strategy, Amtrak has cancelled 9 of the 15 planned route actions without implementing them. Amtrak implemented three route actions, although it cancelled one of these in September 2001. Finally, Amtrak plans to proceed with 3 other route actions, although their implementation will be at least 1 or 2 years later than originally planned. (See table 1.) According to Amtrak, the capital funds for one of the projects in planning (Silver Service restructuring in Florida) were frozen on February 1, 2002, in a company-wide effort to reduce use of cash. (In all, Amtrak cancelled nine routes without implementing them. Some routes were cancelled for more than one reason.) Amtrak told us that it cancelled six of the Network Growth Strategy routes before they were implemented, in part, because it overestimated expected increases to mail and express revenue under the Network Growth Strategy. Amtrak estimated that this expected increase would improve Amtrak’s bottom line by $65.6 million through fiscal year 2002. Specifically, it estimated that mail and express revenues would exceed costs by $68.2 million, offsetting a loss of $2.6 million from expanded passenger operations. Most of the revenue increase was expected to come from new express business. This expanded mail and express traffic did not materialize and Amtrak’s revised plans have reduced expected Network Growth Strategy-associated mail and express revenue by about half—from $271 million to $139 million (a $132 million reduction). 
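The revenue figures above are internally consistent; as a quick arithmetic check (amounts in millions of dollars, as reported):

```python
# Check of the Network Growth Strategy revenue arithmetic cited in the report
# (all amounts in millions of dollars).
mail_express_net = 68.2    # mail and express revenues expected to exceed costs
passenger_net = -2.6       # expected loss from expanded passenger operations
net_improvement = mail_express_net + passenger_net
print(round(net_improvement, 1))   # the estimated $65.6 million bottom-line gain

original_mail_express = 271.0      # originally expected mail and express revenue
revised_mail_express = 139.0       # revised expectation after plans were scaled back
print(original_mail_express - revised_mail_express)  # the $132 million reduction
```

The revised figure of $139 million is roughly half the original $271 million estimate, consistent with the report's "reduced ... by about half" characterization.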
Amtrak said that there were several reasons why this overestimation occurred. The current president of Amtrak’s mail and express unit told us that Amtrak expected to substantially expand its route system to generate this revenue and to begin running longer trains mostly filled with express traffic. However, he said that at the time Amtrak made its mail and express revenue estimates, it gave little thought to whether such an expansion was feasible—that is, whether Amtrak could likely capture this business or whether freight railroads that own the tracks would agree to Amtrak’s expansion plans. According to Amtrak, it did not have a rigorous approach to estimating expected express business. Amtrak officials told us that, until recently, Amtrak estimated express revenue largely on the basis of an analysis of a database of commodities being shipped nationally. Amtrak estimated the portion of this business that it thought it could obtain. An Amtrak official said that it now focuses more on determining existing customers’ shipping needs, assessing these needs in light of current economic trends, and evaluating Amtrak’s ability to meet these needs given existing train capacity. Finally, Amtrak officials told us that express shippers were reluctant to enter into contracts for service that did not yet exist. Amtrak officials also told us that the company did not know route-by-route costs for its mail and express program when it announced its Network Growth Strategy. This is because Amtrak has never separately identified these costs. Rather, it has integrated these costs into the overall financial results of its intercity strategic business unit. Knowing these costs was important because Amtrak expected that the expansion of mail and express service would produce the revenue needed to make its route expansion profitable. Not until 2000 did Amtrak begin efforts to separately identify mail and express costs and develop separate mail and express financial information. 
According to Amtrak, in October 2001, it began producing separate profit and loss statements for its mail and express business. However, an Amtrak official said the corporation still has a long way to go in producing reliable mail and express financial information and in understanding the true cost of this business. Amtrak could not implement its Network Growth Strategy unless it reached agreement with freight railroads over funding for capital improvements (such as upgrading tracks and signals to improve safety and support higher-speed passenger operation) and access to freight railroads’ track. Quick agreement was necessary because Amtrak wanted to implement the new routes and services to help it reach operational self-sufficiency by December 2002. Amtrak encountered substantial difficulties in gaining freight railroad agreement to allow Amtrak to expand service over freight railroad tracks. This difficulty in reaching agreement contributed, in part, to Amtrak canceling six of its planned routes. Amtrak planned to operate the 15 Network Growth Strategy routes over freight railroad tracks, including the transportation of mail and express as authorized by law. However, Amtrak was largely unable to gain freight railroads’ agreement. Such agreement was critical to the implementation of Amtrak’s strategy. Freight railroads are required by law to allow Amtrak to operate over their tracks and to give Amtrak’s trains priority over their own. In addition, freight railroads are required to charge Amtrak the incremental cost—rather than the full cost—associated with the use of their tracks. These amounts are negotiated by Amtrak and the freight railroads. Federal law also gives Amtrak the authority to carry mail and offer express service. These mandates result in an ongoing tension between Amtrak and freight railroads for several reasons. One reason is that accommodating passenger trains affects freight railroads’ ability to serve their customers and earn a profit. 
Second, accidents involving passenger trains may create liability issues for freight railroads. Third, freight railroads believe that they are not fully compensated for providing this service. Finally, Amtrak’s express business may compete with freight railroads’ business and Amtrak may be able to offer lower rates than freight railroads, everything else being equal, because Amtrak only has to pay freight railroads the incremental, rather than the full cost, of operating on freight railroad tracks. According to Amtrak, for some proposed actions, such as increasing service to daily frequency, reaching agreement with freight railroads is not difficult because the freight railroads’ infrastructure can support additional trains and the host freight railroad may already be used to having Amtrak operate along certain routes. In other cases—such as where substantial capital improvements are needed or where service is to be initiated over tracks that are operating at or near capacity—reaching an agreement might be more difficult, especially where Amtrak expects freight railroads to pay for some or all of the improvements. Amtrak officials told us that they met with senior freight railroad officials in November and December 1999—before its board of directors approved the Network Growth Strategy—to tell them of Amtrak’s plan for expanded service. Amtrak officials stated that freight railroads did not then express opposition to proposed expanded routes and services. According to Amtrak, these were high-level discussions at the president/chief executive officer level, during which the railroad executives agreed to entertain more specific proposals. According to Amtrak, it met again with officials from each railroad, generally in January or February 2000, to outline specific route proposals. According to an Amtrak official, Amtrak discussed the proposed route and/or service actions and sought freight railroads’ overall reaction to the proposals. 
He said that, in some cases, freight railroads identified issues such as the need to upgrade track. However, generally freight railroads said that they needed to further analyze the proposals to determine their likely effect, with more detailed discussions to be held at later dates. Freight railroad officials told us that the initial and subsequent meetings focused primarily on the concept of providing new services rather than identifying whether there might be aspects of the proposals that would be easy or difficult to resolve. While Amtrak recognized that capital improvements would be needed on freight railroads’ tracks to implement eight Network Growth Strategy routes, it did not include capital investment requirements or the source of these funds in its route evaluations until after it had decided to implement the action. An Amtrak official said that considering capital investment requirements any earlier would not be useful since, if capital costs were factored in, route proposals would appear to be unprofitable and not be considered further. As a result, Amtrak limited its analysis to whether revenues are expected to exceed operating costs. Amtrak followed this approach despite the fact that some route actions cannot be implemented—and its operating losses reduced—unless capital is available. It was not until after Amtrak decided to implement the Network Growth Strategy in December 1999 and announced it to Congress that it began to develop an understanding of the capital investments needed to implement the route and service actions and other implementation issues critical to gaining freight railroad agreement. For example, it was not until spring 2000 that Amtrak learned from the Union Pacific Railroad that it might cost about $40 million to implement the Crescent Star (service between Meridian, Mississippi, and Dallas/Fort Worth, Texas). 
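The report's point, that screening routes on operating results alone hides capital risk, can be illustrated with a hypothetical sketch. All figures below are invented, not Amtrak's; the capital amount is chosen on the order of the $40 million Crescent Star estimate cited above:

```python
# Hypothetical route screen, illustrating why excluding capital requirements
# can make a route look viable even when implementation depends on funding
# that may never materialize. All figures are invented (millions of dollars).
annual_revenue = 12.0
annual_operating_cost = 10.0
capital_required = 40.0   # e.g., track and signal upgrades on a host railroad

# Screening only on operating results: revenues exceed operating costs,
# so the route would pass Amtrak's screen.
operating_margin = annual_revenue - annual_operating_cost
print(f"Operating margin: ${operating_margin:.1f}M per year")

# Years of operating margin needed just to recover the up-front capital,
# ignoring financing costs entirely:
print(f"Simple payback on capital: {capital_required / operating_margin:.0f} years")
```

Under these invented numbers the route clears the operating-margin screen, yet it cannot be implemented at all unless someone supplies the up-front capital, which is exactly the risk the evaluation process did not surface.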
A Union Pacific official told us that his railroad was not willing to share the costs of this investment with Amtrak, nor was it willing to help Amtrak finance it over time. He said that capital investment had not been discussed with his railroad prior to this time. Freight railroads were also concerned about having a competitor on their tracks. All four of the freight railroads we contacted that would have been affected by the Network Growth Strategy generally acknowledged Amtrak’s statutory authority to operate mail and express business. However, all expressed concern about Amtrak’s becoming a competitor for their freight business. This concern was heightened by Amtrak’s plans to begin running large numbers of express cars on their trains as it expanded its mail and express business. This concern contributed to Amtrak’s decision to cancel the Skyline service. A Norfolk Southern official said his company did not want Amtrak to solicit business on this route that was similar to its own freight business. Other freight railroads we contacted were similarly wary of Amtrak’s plans to use its route and service expansion to increase express business that could potentially compete with their own. In addition, Amtrak did not identify potential operational problems that could be encountered, such as whether capacity constraints would be important. A good illustration is Amtrak’s planned Crescent Star service. This service, planned for implementation in summer 2000 over Union Pacific Railroad and Norfolk Southern lines, has not yet come to fruition. According to a Union Pacific official, the company could not reach agreement with Amtrak, in part, because the planned routing would have worsened congestion on the line. In addition, a Norfolk Southern official told us that the rail infrastructure in Meridian would not support passenger train switching operations without serious interference with freight trains. 
As a result of these operational problems and because of funding problems, the routing of this still-to-be-implemented service has since shifted to another railroad. The proposed Twilight Limited faced similar problems. According to CSX Transportation officials, this service could have encountered significant line capacity and scheduling problems west of Albany, New York. Finally, a Union Pacific official told us that the Aztec Eagle (service from San Antonio to Laredo, Texas) could have created capacity problems because it would have utilized Union Pacific’s primary route to Mexico. Amtrak officials agreed that routing of the Crescent Star was shifted to another railroad because of disagreements with Union Pacific. An Amtrak official said Union Pacific was initially receptive to proposed route and service actions but turned negative when plans became more specific. Amtrak officials also agreed infrastructure improvements were necessary in Meridian, Mississippi, but believed these were not insurmountable problems. Amtrak officials also did not believe there would be significant problems with the Twilight Limited because the proposed service was to replace existing trains in both New York and Michigan. In other instances, Amtrak was not able to reach agreement with freight railroads on compensation for track access, especially for trains with additional express traffic. Freight railroads often receive additional compensation for handling Amtrak trains over a certain length and/or for cars carrying express business. Issues of compensation contributed to the cancellation of at least one route action—the Skyline. This route—establishing service between Chicago and New York City via Pennsylvania—involved Norfolk Southern. Norfolk Southern officials said they were willing to work with Amtrak on establishing this service and had even reached agreement with Amtrak about the operating arrangements for this train. 
(The train was to be handled similarly to a regular freight train, including operating at 60 miles per hour—a speed closer to freight train speed.) However, Norfolk Southern largely attributed the demise of this route action to the inability to reach agreement with Amtrak over the compensation to be paid for track access and additional express business. Amtrak’s Network Growth Strategy has been unsuccessful because it overestimated (1) revenues expected from new mail and express service and (2) its ability to reach agreement with freight railroads over capital funding and other implementation issues. Amtrak said that it has improved its revenue estimation process. However, reaching agreement with freight railroads will always be a major challenge when Amtrak attempts to expand its business in areas that are operating at or near capacity, when the expansion appears to pose competition for freight railroads, or when freight railroads are expected to make capital investments to help implement the routes. We believe that, in any future major route and service expansions predicated on improving Amtrak’s financial condition, Amtrak’s decisionmaking process needs to more explicitly reveal the risks associated with successful implementation. We recommend that, for any future major route and service proposals, the president of Amtrak disclose to Amtrak’s board of directors any significant risks that could impair the successful implementation of the planned actions and its plans to ameliorate those risks. These potential risks include the expected ability to obtain capital funding and reach agreement with freight railroads to operate over their tracks. We provided a draft of this report to Amtrak and to the Department of Transportation for their review and comment. 
Amtrak disagreed with the conclusions we reached about the benefits that might have been achieved through discussing its strategy with its key partners more substantively before—rather than after—deciding to expand its operations over freight railroad tracks. Amtrak provided its comments during a meeting with the acting vice president for government affairs and others and in a subsequent letter. (See app. VI.) The department generally agreed with the report’s conclusions via an e-mail message. In commenting on a draft of this report, Amtrak agreed with our presentation of the reasons that it overestimated mail and express revenue. Amtrak also stated that a major theme of our report was that Amtrak should have delayed communicating to Congress the route and service changes it proposed in February 2000 to allow time for additional analysis and negotiations with freight railroads, and that by doing so Amtrak would have proposed considerably fewer new services and would have been more successful in implementing its proposals. We are not suggesting that Amtrak should have delayed announcing the Network Growth Strategy. Rather, our work clearly illustrates the need for Amtrak to perform due diligence to understand the likely positions of key stakeholders—whose cooperation is essential to successful route and service expansion—before, rather than after, committing itself to implementing them. However, we believe that Amtrak’s not examining more closely the capital improvements needed to implement its route proposals and whether freight railroads would likely agree to them were significant flaws in Amtrak’s strategy. We agree with Amtrak’s characterization of our opinion that, if it had a better understanding of the concerns of key stakeholders, it might not have proposed all of the resulting route actions. 
We would have viewed a decision to implement fewer or different route actions, each with a greater likelihood of being successfully implemented, rather than a larger number of speculative proposals, as sound business judgment because it would have increased the likelihood that Amtrak could have realized operating profits and moved closer to the goal of reaching operational self-sufficiency. During our work, we received conflicting information about Amtrak’s early interaction with freight railroads. As a result of our meeting with Amtrak, we discussed this topic again with freight railroads and Amtrak and revised this report to better show this early interaction. Amtrak also stated that (1) it needed to act quickly to reach operational self-sufficiency within 3 years, (2) the purpose of the Network Growth Strategy was to implement route and service changes that would more than cover their operating costs and therefore contribute to achieving operational self-sufficiency, and (3) not every route and service change requires lengthy negotiations. Regarding Amtrak’s first point, we agree that there was immense pressure on Amtrak to become operationally self-sufficient. However, we believe that this pressure made it even more important for Amtrak to conduct the due diligence needed before it decided to move ahead. Without an understanding of the likelihood that freight railroads would be receptive to Amtrak’s plans and that Amtrak could find the capital funds needed to implement these changes, Amtrak had little basis to expect that the route and service proposals it made could actually be implemented expeditiously so as to help reduce Amtrak’s need for federal operating subsidies. 
Amtrak appeared to tacitly acknowledge the necessity of doing so, at least where capital funding is an issue, when it stated in its comments: “[t]he growing capacity constraints on many key lines mean that freight railroads can, not infrequently, demand large infusions of capital from passenger train operators to accommodate additional trains.” Regarding Amtrak’s second point, we agree that Amtrak’s goal was to implement routes in which revenues exceeded operating costs. It was not our intention to suggest that Amtrak should have only decided to implement routes that covered their capital costs too. We have revised our recommendation to remove such an impression. Regarding Amtrak’s third point, we agree that some proposed route and service changes may be implemented easily and have revised our recommendation to more explicitly recognize this condition. In our meeting with Amtrak officials, Amtrak disagreed with the statement in our draft report that it had poor information on interconnectivity (revenues from passengers taking more than one train to reach their final destinations). Although this comment conflicts with statements made by Amtrak during our work, we acknowledge that Amtrak did have data on interconnectivity at the time it was performing its market-based network analysis. Accordingly, we have deleted references to interconnectivity in this report. Finally, Amtrak believes that we did not sufficiently recognize that the market-based analysis framework was a significant step forward in Amtrak’s ability to analyze the market potential for its services. We agree that the market-based approach was a significant step forward for Amtrak. However, the approach’s usefulness was ultimately undermined by Amtrak’s reliance on speculative data on expected express business and unrealized assumptions that the route and service changes could be implemented quickly and easily. 
We have added information to this report to better portray the differences between the market-based analysis framework and Amtrak’s previous approach. We also made a number of other revisions throughout this report to better portray the extent of Amtrak’s interactions with freight railroads and where limits to the interaction led to implementation problems. We also made changes, where appropriate, to this report based on our meeting with Amtrak. The associate administrator for railroad development at the Federal Railroad Administration within the Department of Transportation stated that the department agreed that Amtrak needs better information on which to base its route and service actions. In particular, the department agreed with our fundamental conclusions that (1) Amtrak needs to undertake earlier negotiation over access-related issues for new services and (2) until recently, Amtrak senior management incorrectly assumed that it had credible information on mail and express revenues and, in particular, costs. Our work focused on route and service actions that Amtrak considered under its market-based approach and Network Growth Strategy. To understand the market-based approach and the Network Growth Strategy, including its approach to estimating mail and express revenues and collaborating with freight railroads, we reviewed documents describing the market-based approach, how it works, and the models used for financial evaluation. We also reviewed studies done by others to identify potential limitations to the market-based approach and discussed these limitations with Amtrak and Department of Transportation officials. We did not independently evaluate the market-based approach or its models. As part of our work, we identified route and service actions Amtrak has taken since 1995 and the current status of the Network Growth Strategy. 
Finally, we discussed Network Growth Strategy route and service actions with officials from Amtrak, four major railroads that would have been affected had the Network Growth Strategy been fully implemented (the Burlington Northern and Santa Fe Railway Company; CSX Transportation, Inc.; the Norfolk Southern Corporation; and the Union Pacific Railroad Company), and the state of Florida. We conducted our work from July 2001 to April 2002 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to congressional committees with responsibilities for intercity passenger rail issues; the acting president of Amtrak; the secretary of transportation; the administrator, Federal Railroad Administration; and the director, Office of Management and Budget. We will also make copies available to others upon request. This report will also be available on our home page at http://www.gao.gov. If you or your staff have any questions about this report, please contact either James Ratzenberger at ratzenbergerj@gao.gov or me at heckerj@gao.gov. Alternatively, we may be reached at (202) 512-2834. Key contributors to this report were Helen Desaulniers, Richard Jorgenson, Sara Moessbauer, James Ratzenberger, and Edward Warner. The financial performance of Amtrak routes ranged from an operating profit of $51.3 million on the Metroliner/Acela Express to an operating loss of $71.5 million on the Northeast Direct. (Table notes: now part of Acela Regional service; now part of Cascades service; this service is shown separately so as not to distort route and service actions affecting the Boston-Washington, D.C., spine of the Northeast Direct route.) The following are the planned route and service actions included in the Network Growth Strategy announced by Amtrak in February 2000. 
Amtrak officials told us that route and service decisions primarily focus on whether the increased operating revenues from an action are expected to exceed the direct costs. Increased revenues can arise from adding passengers on the route, adding passengers on the route who can then transfer to other routes and vice versa (called interconnectivity), and from hauling mail and express. While Amtrak recognizes early in its planning process that it will incur costs for capital expenditures (e.g., to acquire equipment and facilities and to finance needed track and signal improvements) and, to a lesser extent, state financial support, it typically does not address these needs until after it has decided to implement a route or service action. Ideas for route and service actions are either generated internally or from those outside Amtrak seeking additional service. To provide a quick assessment of an idea’s reasonableness, Amtrak informally examines issues such as the number and type of equipment (e.g., cars and locomotives) that might be needed, where the train might stop, and possible train schedules. If a proposal appears promising, Amtrak begins a more formal evaluation process. First, it estimates potential ridership, from which it derives passenger-related revenue estimates. To do so, Amtrak uses one of two models, depending on whether the action involves a long-distance route or a shorter-distance route serving a transportation corridor (such as that between Washington, D.C., and New York City). Of the two models, only the model for transportation corridors can assess the potential market share that a proposed route action will attract from among a corridor’s various transportation modes. For example, this model can estimate the impact of a shift from rail ridership to automobile usage prompted by a decline in gas prices. The ridership projections are, in turn, used to estimate a route’s passenger-related unit costs and operating costs. 
An Amtrak official said that cost estimates often increase later in the evaluation process as more amenities are added to a proposed service as a means to attract more riders. After it completes its initial assessments, Amtrak uses its market-based approach to model the expected financial impact of the route or service action. It models the proposed route and service action individually and as part of the whole route network. In some cases, Amtrak will model several variations of a proposed route action to see if one is more financially viable than others. Amtrak also estimates potential mail and express revenues associated with proposed route actions. Mail revenue estimates are largely based on contracts Amtrak has with the U.S. Postal Service, discussions with U.S. Postal Service officials, and U.S. Postal Service projections. Estimating express revenue is somewhat more difficult. According to Amtrak, until recently, it estimated express revenue largely on the basis of an analysis of a database of commodities being shipped nationally. Amtrak estimated the portion of this business that it thought it could obtain. An Amtrak official said that it now focuses more on determining existing customers’ shipping needs, assessing these needs in light of current economic trends, and evaluating Amtrak’s ability to meet these needs given existing train capacity. Amtrak relies on its mail and express unit to estimate mail and express costs. However, Amtrak officials told us identifying these costs has been difficult since Amtrak did not formerly identify these costs separately but rather incorporated them into other business units. Amtrak currently has a project under way to identify the specific costs of its mail and express business. When Amtrak management wants to proceed with a route action, it either seeks the approval of its board of directors or directs the affected business unit to implement the action. 
Amtrak policy requires board approval to initiate service on new routes or to discontinue service on routes. Amtrak said its strategic business units have the authority to make minor changes in the schedules and frequencies of their train service. Amtrak officials told us that the company also considers the cost of capital improvements that may be associated with route actions, the fees that freight railroads will charge for access to their tracks, and the likelihood that states might be interested in financially supporting the routes. While Amtrak told us that it recognizes early in its planning process that capital costs may be incurred for routes other than its Northeast Corridor, it does not formally consider these costs under the market-based approach until after it decides to implement a route or service change. Amtrak officials said that they do not consider these costs earlier because Amtrak’s lack of investment capital would preclude further consideration of such proposals. Amtrak’s capital funds most often come from federal assistance and from freight railroads and states that might agree to contribute funds. Amtrak estimates track access payments based on operating agreements with freight railroads; in the case of railroads it has not dealt with before, it assumes that access fees will not vary significantly from national averages. The likelihood of obtaining state financial support generally varies with the length of the route. An Amtrak official said the corporation aggressively pursues state support on short-distance, commuter-like routes. In fact, he said states often approach Amtrak on their own about initiating or expanding this type of service. However, on longer-distance routes that go through many states, most states are not interested in providing financial support and Amtrak must assume the financial responsibility itself. In 2001, states provided financial support for 18 of Amtrak’s 43 routes. 
This support ranged from about $200,000 on the Ethan Allen Express route (service between New York and Vermont) to about $21 million on the Pacific Surfliner (service within California). The market-based framework includes a set of models used to predict changes in ridership, revenues, and costs likely to result from a planned restructuring of Amtrak’s route system or a variation in service levels on existing routes. A series of demand models estimates ridership and passenger-related revenues in response to variation in the stations served by each route, train departure frequencies or schedules, travel times, and fare structures. Then a series of financial models translates operating statistics, which reflect the type of equipment and level of operations required (e.g., level of onboard staffing), into operating expenses. For high-speed rail scenarios, a capital cost model estimates the capital that is likely to be required (e.g., upgrades to track and additional rolling stock) to make changes in service. Amtrak uses separate models to estimate ridership and revenue for short-distance intercity routes (regional networks where frequent, higher-speed services are planned, called “corridors”) and long-distance intercity routes (typically at least 500 miles long). For short-distance intercity routes, the market-based approach includes a model specifically for the Northeast Corridor (Boston-Washington), as well as a generic model to predict ridership and revenue in corridors located in other regions. Each of the corridor models is designed to be a mode-split model. That is, these models predict the share of travel likely to take place by automobile, air, and rail, based on the projected level of total traffic for each market. Amtrak typically estimates total traffic between two areas on the basis of expected demographic and economic growth in the areas making up the corridor. 
Then it estimates the market shares for each mode on the basis of the costs of traveling by each mode and the level of service it provides in the corridor (departure frequency and total travel time). Finally, it uses the estimated ridership with the assumed fare structures to estimate the passenger-related revenue that would result from a proposed route. The long-distance demand model consists of two components. The first component predicts the total number of rail passengers traveling between each station-pair, and the second component predicts the class of service (mainly sleeper versus coach). The ridership estimate is based on such factors as population and employment in the surrounding areas. Unlike the corridor demand models, this is not a mode-split model, but rather a direct rail model. As such, the model does not directly assess the amount of total traffic in the market or the amount of traffic that may be captured by alternative modes. Instead, it employs historical data from a sample of markets to assess the relationships between rail ridership levels and such factors as population, employment, travel time, and level of rail service. Amtrak then uses these estimated associations to estimate the level of rail traffic in a new market. After projecting the rail ridership, Amtrak uses the second component to estimate what fraction of passengers will choose each of four classes of service, based on such factors as frequency and timing of trains and the fares charged for each service level. Finally, Amtrak uses the projected levels of ridership and class of service to estimate the passenger-related revenue that would result from a proposed route. The market-based approach includes operating and, in the case of high-speed rail scenarios, capital cost models to estimate the likely impact on Amtrak’s expenses from planned changes to its network. These models translate operating statistics into operating or “direct” expenses. 
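The report does not disclose the functional form of Amtrak's corridor mode-split models, but a common way to implement a mode split is a multinomial logit, in which each mode's share is proportional to the exponential of a utility reflecting its cost and service levels. The sketch below is a generic illustration under that assumption; the utility weights, costs, travel times, total traffic, and fare are all hypothetical, not Amtrak's.

```python
import math

def utility(cost_dollars, hours, cost_w=-0.02, time_w=-0.5):
    """Hypothetical linear utility: cheaper and faster modes score higher.
    The weights are illustrative, not estimated from any Amtrak data."""
    return cost_w * cost_dollars + time_w * hours

def mode_shares(utilities):
    """Multinomial-logit mode split: each mode's share is exp(utility)
    normalized over all competing modes, so the shares sum to 1."""
    exps = {mode: math.exp(u) for mode, u in utilities.items()}
    total = sum(exps.values())
    return {mode: e / total for mode, e in exps.items()}

# Hypothetical corridor: compare auto, air, and rail on cost and trip time.
shares = mode_shares({
    "auto": utility(cost_dollars=60, hours=4.5),
    "air": utility(cost_dollars=180, hours=2.0),
    "rail": utility(cost_dollars=90, hours=3.5),
})

# Apply the rail share to projected total corridor traffic, then an
# assumed average fare, to get passenger-related revenue.
total_trips = 1_000_000
rail_revenue = total_trips * shares["rail"] * 55  # $55 assumed average fare
```

This mirrors the sequence the report describes for corridors: project total traffic, split it among modes by cost and service level, and then apply assumed fares to the rail share to derive passenger-related revenue.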
The operating statistics are developed by translating ridership patterns, schedule, train makeup, and staffing data for an individual route into estimates of the equipment fleet and train crew requirements, as well as the number of monthly train-miles. The operating model applies detailed unit costs to predict the changes in operating expenses for the route. The capital cost model estimates the capital investments necessary to upgrade existing track or construct new rights-of-way for routes in which Amtrak is considering improving travel times, increasing frequency, or introducing new services. The model also estimates costs for acquiring new rolling stock and other equipment as necessary. The requirements for alternative levels of service were developed from engineering studies of facility and equipment requirements necessary to upgrade a sample of route segments. The model also makes use of data on unit cost factors required for upgrading facilities and equipment. These data are based on past experience with upgrades in several markets. Using the estimated facility and/or equipment needs and the data on unit costs, the model calculates an estimate of required capital. According to Amtrak, this model is currently applied only to high-speed rail corridors. It is not used to determine potential capital costs on non-high-speed rail corridors. The following is our evaluation of aspects of Amtrak’s comments on our draft report. 1. We agree with Amtrak’s statement that a Senate committee report directed it to report on its conclusions regarding route and service changes before Amtrak issued its fiscal year 2000 strategic business plan. However, nothing in the Senate report required Amtrak to issue its strategic business plan by a certain date or earlier than it would have done otherwise. As such, the Senate report language did not create the sense of urgency that Amtrak implies. 2. 
Amtrak questioned our including a list of route profits and losses in appendix I of the draft report because the route profitability system used to generate these results does not produce accurate information for making route decisions. We are not suggesting that the route profitability statistics should have been used in making Network Growth Strategy decisions. We are also not suggesting that they should be used, by themselves, in making future route decisions. Other metrics should also be employed. However, the fact that Amtrak loses money on nearly every single route that it operates (for example, 20 of Amtrak’s 43 routes lost more than $20 million in 2001, even after including state support) was the basis for Amtrak deciding to contract routes in 1995 and expand them in 1998. As such, this route profitability information is contextually important in Amtrak’s quest to improve its financial condition. 3. Amtrak interpreted our conclusion on the need for early consultation with freight railroads before it announced its Network Growth Strategy to mean that we were advocating that it should have “engaged in lengthy ‘consultations’ with each of the affected 16 freight railroads” and conducted “expensive and time consuming studies of the physical characteristics of each line, and of the number, type, and schedules of the trains that operate over it.” We did not intend such an interpretation. We agree with Amtrak that some proposed route and service changes might be expected to be easier to implement than others—such as ones that could be expected to have little effect on freight railroads. We also agree with Amtrak that there is no model for how and on what timetable such issues should be resolved. We do not prescribe a level of specificity for these discussions, specific expected outcomes, or uniformity. We believe that discussions with freight railroads should be tailored to the complexity, expected difficulty, and risk associated with each proposed action. 
As discussed previously, we revised our recommendation to make it more useful to Amtrak. 4. Amtrak stated that it “in many cases was able to implement significant route and service changes fairly quickly” and cited two examples. In reality, Amtrak has implemented only 3 of the 15 planned route and service actions. 5. Amtrak states that our draft report implied that it had no reason to expect Norfolk Southern would agree to the operation of the Skyline service. Amtrak disagreed with our draft report because it had implemented a similar service (an extension of the Pennsylvanian) over the same route about a year before the Network Growth Strategy was issued. Any implication about the potential success or failure in implementing the Skyline service was inadvertent. We have revised our report to state that Norfolk Southern was willing to work with Amtrak to establish this service and had reached agreement on how the train was to be handled. The report states that Amtrak largely cancelled this proposal because it and Norfolk Southern could not agree on compensation for Amtrak’s use of Norfolk Southern’s tracks. 6. Amtrak disagreed with the example we used to illustrate what occurs when early discussions with freight railroads do not occur. Amtrak stated that its Network Growth Strategy contemplated using either Union Pacific or Kansas City Southern tracks, that it approached both railroads, and that its decision to re-route the Crescent Star from Union Pacific Railroad to Kansas City Southern tracks represents a prudent business decision rather than a flaw in its decisionmaking. We agree that the Network Growth Strategy provided flexibility in routing and that deciding to re-route the Crescent Star might have been a prudent business decision. 
However, Amtrak did not learn until spring 2000 that (1) significant capital improvements were required to implement the service, (2) Union Pacific was not willing to share the capital investment costs needed to use this line, and (3) an alternative routing would be required. Since Amtrak planned to implement the Crescent Star in summer 2000, just a few months after it announced the Network Growth Strategy, having early knowledge of significant potential roadblocks would have been useful to Amtrak—for example, either in attempting to ameliorate the roadblocks or deciding earlier to concentrate on the alternative Kansas City Southern route. As Amtrak stated in its comments, it needed to implement the Network Growth Strategy quickly to help reduce its need for operating subsidies. 7. Amtrak commented that (1) the infrastructure investment required to add one or two train frequencies to a rail line is not easily quantifiable and (2) there are ways to reach agreements to undertake capital projects other than by allocating costs between parties (e.g., the freight railroad might agree to bear the cost of the project if Amtrak agrees to something else). We agree with Amtrak’s statement. However, a recurring theme for Amtrak has been its dearth of capital to improve its service. We believe that it would have been prudent for Amtrak to factor into its decisionmaking the fact that capital issues, for some proposed routes, were crucial to Amtrak’s being able to implement the Network Growth Strategy, particularly as it recognized in its comments that “…freight railroads can, not infrequently, demand large infusions of capital from passenger train operators to accommodate additional trains.” 8. Amtrak states that our discussion of capital funding is out of context because the purpose of the Network Growth Strategy was to identify routes for which revenues would exceed operating costs. 
Amtrak stated that if a potential route or service change met this test then it made sense to pursue it, even if it was likely to require capital support. We agree with Amtrak that it made sense to pursue routes that were contemplated to make an operating profit even if capital investment would be needed to implement them. We did not intend to suggest that Amtrak should have pursued only route and service expansions that were likely to cover both operating and capital costs. Rather, we believe that, for some routes, capital investment was an important prerequisite to Amtrak being able to implement the routes quickly so that it could obtain the operating profits. 9. Amtrak commented that some Network Growth Strategy negotiations with freight railroads were stalemated not on the issue of implementation, but on price. We have revised our report to recognize this. 10. Amtrak criticized our suggestion that it should have had preliminary discussions with freight railroads over capital funding issues, saying that it is a poor negotiating technique to approach a freight railroad by telling it how much capital Amtrak is willing to contribute, because this figure sets a floor for Amtrak’s contribution. We agree that there are delicate business issues arising from Amtrak’s attempts to get freight railroads to allow it to expand operations over freight railroad-owned tracks and that different route and service proposals can raise different and sometimes complex issues. We are not suggesting that Amtrak “give away the store” in these discussions by disclosing in advance how much capital support it might be willing to contribute to the freight railroads. However, as discussed in the report, we believe that it would have been prudent to determine freight railroads’ expectations before deciding to implement the plan because freight railroads’ cooperation was imperative to the success of the Network Growth Strategy. 
Without an understanding of whether freight railroads’ expectations were similar to Amtrak’s—and the expected ease or difficulty in meshing these expectations—Amtrak had little basis to expect that the route and service proposals it made could actually be implemented expeditiously so that they could help reduce Amtrak’s operating losses. 11. Amtrak disagreed on several grounds with our discussion of gaining an early understanding of whether states, such as Florida, might or might not be willing to provide the capital funds that Amtrak expected them to contribute. Because the focus of our work was Amtrak’s interaction with freight railroads, we have deleted references to the capital support that Amtrak expected from states such as Florida. 12. Amtrak stated that the Network Growth Strategy was not just “…an action plan based on rigorous financial analysis. It was a vision of how Amtrak’s national network could be reshaped so as to extend its reach and reduce operating losses….” Amtrak suggested that we faulted it for pursuing an innovative approach and because it did not achieve “all its vision.” We are not criticizing Amtrak for pursuing a route expansion strategy. Rather, our report focuses on the aspects that might have made the vision more successful than it was, although perhaps at a more modest level than Amtrak originally envisioned. Amtrak’s Network Growth Strategy ultimately failed because the route system expanded marginally and Amtrak was not able to reduce its operating losses to the extent planned. In our opinion, an important contributor to this failure was Amtrak’s inattention to potential implementation problems before it announced the strategy. Attention to potential implementation problems was crucial because, as Amtrak stated, it needed to have the routes implemented quickly so as to reap the financial benefits that would result in a reduction of operating losses. 
We believe that the recommendation we offer, if adequately implemented, could help Amtrak be more successful in any future route expansion efforts.

In light of its continuing financial deterioration and its stated goal of eliminating federal operating assistance by December 2002, Amtrak undertook several steps to improve its financial condition, including changing its routes and services. Amtrak has been unsuccessful in implementing its Network Growth Strategy, its plan for new routes and expanded services on the freight tracks over which it operates. Two years after announcing the new strategy, Amtrak had implemented only three routes, one of which was later canceled. Amtrak still plans to implement the remaining three routes, although later than planned. Increased mail and express revenues were the cornerstone of the new strategy. However, Amtrak overestimated the mail and express revenue expected. According to Amtrak, this overestimation occurred because (1) it had no empirical basis for its revenue estimates and (2) express shippers were reluctant to enter into contracts for service that did not yet exist. Six of the planned route actions were canceled because Amtrak overestimated the revenues associated with them. Amtrak was unable to reach agreement with freight railroads because they were concerned about (1) Amtrak's plans to operate additional trains in already congested areas, (2) Amtrak's plans to carry express merchandise that might compete with their own business, and (3) compensation that Amtrak would pay for use of their tracks. 
The FSM, the Marshall Islands, and Palau are among the smallest countries in the world. In 2008, the three FAS had a combined resident population of approximately 179,000—104,000 in the FSM, 54,000 in the Marshall Islands, and 21,000 in Palau. Under the compacts of free association, citizens of the FAS are exempt from meeting the visa and labor certification requirements of the Immigration and Nationality Act as amended. The migration provisions of the compacts allow compact migrants to enter the United States (including all U.S. states, territories, and possessions) and to lawfully work and establish residence indefinitely. In the 1986 compacts’ enabling legislation, Congress stated that it was not its intent to cause any adverse consequences for U.S. territories and commonwealths and the state of Hawaii. Congress further declared that it would act sympathetically and expeditiously to redress any adverse consequences and authorized compensation for these areas that might experience increased demands on their educational and social services by compact migrants from the Marshall Islands and the FSM. The December 2003 amended compacts’ enabling legislation restated Congress’s intent not to cause any adverse consequences for the areas defined as affected jurisdictions—Guam, Hawaii, the CNMI, and American Samoa. The act also authorized and appropriated $30 million for each fiscal year from 2004 to 2023 for grants to the affected jurisdictions, to aid in defraying costs incurred by these jurisdictions as a result of increased demand for health, educational, social, or public safety services, or for infrastructure related to such services specifically affected by compact migrants resident in the affected jurisdictions. Figure 1 shows the locations of the FAS and the affected jurisdictions. The amended compacts’ enabling legislation provides for Interior to allocate the $30 million in grants to affected jurisdictions on the basis of their compact migrant population. 
Each affected jurisdiction is to receive its portion of the $30 million per year in proportion to the number of compact migrants living there, as determined by an enumeration to be undertaken by Interior and supervised by the U.S. Census Bureau (Census) or another organization at least every 5 years. The legislation defines the population to be enumerated as persons, or those persons’ children under the age of 18, who pursuant to the compacts are admitted to, or resident in, an affected jurisdiction. The amended compacts’ enabling legislation permits, but does not require, affected jurisdictions to report on compact migrant impact. If Interior receives such reports from the affected jurisdictions, it must submit reports to Congress that include, among other things, the governor’s comments and administration’s analysis of any such impacts. The combined data from Census’s 2005-2009 American Community Survey and the 2008 required enumerations in Guam and the CNMI estimated that approximately 56,000 compact migrants—nearly a quarter of all FAS citizens—lived in U.S. areas, with the largest populations in Guam and Hawaii. An estimated 57.6 percent of all compact migrants lived in affected jurisdictions: 32.5 percent in Guam, 21.4 percent in Hawaii, and 3.7 percent in the CNMI, while nine mainland states each had an estimated compact migrant population of more than 1,000. (See fig. 2.) Census’s 2005-2009 American Community Survey and 2008 enumerations estimated the total number of compact migrants in U.S. states and territories as ranging from 49,642 to 63,048, with a 90 percent confidence interval; that is, Census is 90 percent confident that the true number of compact migrants falls within that range. For additional detail on these Census estimates, see pages 12 through 18 of GAO-12-64. Census suppressed the estimated values of remaining states to protect the confidentiality of individual respondents; we do not know how many of the remaining states contain migrants. 
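The statutory allocation rule described above, under which each affected jurisdiction receives its portion of the $30 million in proportion to its enumerated compact migrant population, reduces to a simple pro-rata calculation. In the sketch below, the Guam and Hawaii counts reflect the 2008 enumeration figures discussed in this statement, while the CNMI and American Samoa counts are placeholders, so the resulting dollar shares are illustrative only.

```python
def allocate_impact_grants(total_dollars, enumerated_counts):
    """Pro-rata split: each jurisdiction's grant is its share of the
    enumerated compact migrant population times the total appropriation."""
    base = sum(enumerated_counts.values())
    return {j: total_dollars * n / base for j, n in enumerated_counts.items()}

# Guam and Hawaii figures are the 2008 enumeration results cited in this
# statement; the last two counts are placeholders for illustration.
shares = allocate_impact_grants(30_000_000, {
    "Guam": 18_305,
    "Hawaii": 12_215,
    "CNMI": 2_100,          # placeholder
    "American Samoa": 50,   # placeholder
})
```

Because the split is purely proportional, the grant amounts shift automatically as each new enumeration updates the population counts.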
On the basis of these combined data, we estimate that approximately 68 percent of compact migrants were from the FSM, 23 percent were from the Marshall Islands, and 9 percent were from Palau. Surveys conducted in affected jurisdictions from 1993 through 2008 show growth in the compact migrant populations in Guam and Hawaii. In the CNMI, from 2003 to 2008, the compact migrant population declined. Over the same period, the total compact migrant population in Guam and Hawaii grew as a percentage of their total populations. The estimated number of compact migrants in Guam increased from 9,831 in 2003 to 18,305 in 2008. In 2003, compact migrants represented approximately 6 percent of Guam’s total population, but by 2008 they had increased to approximately 12 percent. Compact migrants in Hawaii increased during the same period from an estimated 7,297 to 12,215 and represented approximately 1 percent of Hawaii’s total population in 2008. An analysis of 2010 decennial census race data also shows growth in the population of FAS-related persons throughout the United States, with the U.S. population of FAS-related persons more than tripling from 17,380 in 2000 to 55,286 in 2010. For 2004 through 2010, the affected jurisdictions’ reports to Interior show more than $1 billion in costs for services related to compact migrants. During this period, Guam’s annual reported costs increased by nearly 111 percent, and Hawaii’s by approximately 108 percent. The CNMI’s reported annual costs decreased by approximately 53 percent, reflecting the decline in the CNMI compact migrant population. During the same period, the amended compacts’ enabling legislation provided $210 million in impact grants—approximately $102 million to Guam, $75 million to Hawaii, and $33 million to the CNMI. Figure 3 shows compact impact costs reported by the affected jurisdictions for 1996 through 2010. The affected jurisdictions reported impact costs for education, health, public safety, and social services. 
Education accounted for the largest share of reported expenses in all three jurisdictions, and health care costs accounted for the second-largest share overall (see table 1). Several officials in Guam and Hawaii cited compact migrants’ limited eligibility for a number of federal programs, particularly Medicaid, as a key contributor to the cost of compact migration borne by the affected jurisdictions. While their parents may not be eligible for some programs, U.S.-born children of compact migrants are, as U.S. citizens, eligible for the benefits available to citizens. We identified a number of weaknesses related to accuracy, adequacy of documentation, and comprehensiveness in affected jurisdictions’ reporting of compact impacts to Interior from 2004 through 2010. Examples of such weaknesses include the following. Definition of compact migrants. For several impact reports that we examined, the reporting local government agencies, when calculating service costs, did not define compact migrants according to the criteria in the amended compacts’ enabling legislation. For instance, some agencies defined and counted compact migrants using the proxy measures of ethnicity, language, or citizenship rather than the definition in the amended compacts’ enabling legislation. Using ethnicity or language as a proxy measure could lead to overstating costs, since neither measure would exclude individuals who came to the jurisdiction prior to the compact, while using citizenship as a proxy measure could lead to understating costs, since it would exclude U.S.-born children of compact migrants. Federal funding. Guam, Hawaii, and the CNMI, among other U.S. states and territories, receive federal funding for programs that compact migrants use; however, not all compact impact reports accounted for this stream of funding, and some included costs in compact impact estimates for programs that federal funding had partially addressed. 
To the extent that federal revenue for programs in affected jurisdictions is based on population counts or data on usage, the presence of, and use of services by, compact migrants lead to federal offsets. For example, from 2004 to 2008, Hawaii developed its education impact costs by calculating a per-pupil expenditure multiplied by the number of compact migrant students enrolled each school year. However, federal funds received through several programs are included in these annual expenditures. If the federal funds component of per-pupil expenditures were subtracted from Hawaii’s education impact reporting, and a correction were made to eliminate a data error that double-counted Marshallese students, the total reported cost of services to compact migrants for 2004 through 2008 would fall by approximately $61 million, from $229 million to $168 million. Revenue. Multiple local government agencies that receive fees as a result of providing services to compact migrants did not consider fees in their compact impact reports. Any exclusion of revenue may cause an overstatement of the total impact reported. Compact migrants also participate in local economies through their participation in the labor force, payment of taxes, consumption of local goods and services, and receipt of remittances. Previous compact migrant surveys estimated compact migrants’ participation in the labor force, but existing data on other compact migrant contributions such as tax revenues, local consumption, or remittances are not available or sufficiently reliable to quantify their effects. Capital costs. Many local government agencies did not include capital costs in their impact reporting. Capital costs entail, for example, providing additional classrooms to accommodate an increase in students or constructing additional health care facilities. In cases where compact migration has resulted in the expansion of facilities, agencies understated compact migrant impact by omitting these costs. Per person costs. 
A number of local government agencies used an average per-person service cost for the jurisdiction rather than specific costs associated with providing services to compact migrants. For example, one jurisdiction based the cost of providing health care services to compact migrants on the number of migrants served out of the total patient load instead of totaling each patient’s specific costs. Using the average cost may either overstate or understate the true cost of service provision. A number of local government agencies did not disclose their methodology for developing impact costs, including any assumptions, definitions, and other key elements, which makes it difficult to evaluate reported costs. Furthermore, some agency methodologies vary among affected jurisdictions. For the years when the affected jurisdictions submitted impact reports to Interior, not all local government agencies in those jurisdictions included compact impact costs. For example, Hawaii did not provide estimated costs to Interior in 2005 and 2006, although it included partial costs incurred in those years in its 2007 and 2008 reports. Without comprehensive data in each year, the compact impact reports could understate total costs. In addition, compact impact reporting has not been consistent across affected jurisdictions. For example, Guam and the CNMI included the cost of providing police services, while Hawaii did not. Guidelines that Interior developed in 1994 for compact impact reporting do not adequately address certain concepts key to reliable estimates of impact costs. Developed in response to a 1993 recommendation by the Interior Inspector General, the guidelines suggest that impact costs in Guam and the CNMI should, among other concepts, (1) exclude FAS citizens who were present prior to the compacts, (2) specify omitted federal program costs, and (3) be developed using appropriate methodologies. 
However, the 1994 guidelines do not address certain concepts, such as calculating revenue received from providing services to compact migrants, including capital costs, and ensuring that data are reliable and reporting is consistent. Several Hawaii and CNMI officials from the reporting local government agencies we met with, as well as Interior officials, were not aware of the 1994 guidelines and had not used them. Officials at the Guam Bureau of Statistics and Plans, which possessed the guidelines, said that the bureau attempts to adhere to them when preparing compact impact cost estimates. However, we found some cases where the bureau and other Guam agencies did not follow the guidelines. In order to strengthen Interior’s ability to collect, evaluate, and submit reliable information to Congress on compact impact, we recommended in our November 2011 report that Interior disseminate guidelines to the affected jurisdictions on producing reliable impact estimates, and call for the affected jurisdictions to apply these guidelines when developing compact impact reports. Interior agreed with our recommendation. In March 2012, Interior convened a meeting of the Presidents of the FAS and governors and senior officials from affected jurisdictions to collaboratively develop strategies to address policy issues concerning the compacts. At the meeting, Interior stated that it would work directly with the affected jurisdictions regarding the feasibility of developing uniform reporting guidelines, with Guam and Hawaii having leadership roles in the effort. As of June 2013, Interior had not prepared any new guidance. We continue to believe that providing more rigorous guidelines to the affected jurisdictions and promoting their use for compact impact reports would increase the likelihood that Interior can provide reliable information on compact impacts to Congress. This concludes my statement for the record. 
If you or your staff have any questions about this statement, please contact me at 202-512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this statement include Emil Friberg, Jr. (Assistant Director); Ashley Alley; David Dayton; Martin De Alteriis; Keesha Egebrecht; Fang He; Reid Lowe; Mary Moutsos; Michael Simon; Sonya Vartivarian; and Monique Williams. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

U.S. compacts with the FAS permit those three countries' citizens to migrate to the United States and its territories (U.S. areas) without regard to visa and labor certification requirements. Thousands of FAS citizens have migrated to U.S. areas (compact migrants)—particularly to Hawaii, Guam, and the CNMI. In fiscal year 2004, Congress appropriated $30 million annually for 20 years to help defray affected jurisdictions' costs for migrant services. Interior allocates the $30 million as compact impact grants in proportion to the number of compact migrants living in each affected jurisdiction. Although not required, affected jurisdictions may report impact costs to Interior, which submits any reports it receives to Congress. This statement draws from GAO's November 2011 report on compact migrants and discusses challenges in identifying the impact of compact migrants on U.S. areas. For this statement, GAO assessed progress made by Interior to address the recommendation that it disseminate cost guidelines. Data from the U.S.
Census Bureau (Census) show that migrants from the freely associated states (FAS)—the Federated States of Micronesia (FSM), the Marshall Islands, and Palau—reside throughout U.S. areas. GAO's 2011 report found that Census estimates that roughly 56,000 compact migrants—nearly a quarter of all FAS citizens—were living in U.S. areas in 2005 to 2009. About 58 percent of compact migrants lived in areas that Congress defined in the amended compacts' enabling legislation as affected jurisdictions: American Samoa, Hawaii, Guam, and the Commonwealth of the Northern Mariana Islands (CNMI). For fiscal years 2004 through 2010, Hawaii, Guam, and the CNMI reported more than $1 billion in costs associated with providing education, health, and social services to compact migrants—far in excess of the $210 million in compact impact grants over that time period. The affected jurisdictions reported impact costs for education, health, public safety, and social services to the Department of the Interior (Interior). Education accounted for the largest share of reported expenses in all three jurisdictions, and health care costs accounted for the second-largest share overall. However, assessed against best practices for cost estimation, these cost estimates contain a number of limitations with regard to accuracy, adequate documentation, and comprehensiveness, affecting the reported costs' credibility and preventing a precise calculation of total compact impact on the affected jurisdictions. For example, some jurisdictions did not accurately define compact migrants, account for federal funding that supplemented local expenditures, or include revenue received from compact migrants. Interior developed guidelines in 1994 for reporting compact impact. However, several officials from the reporting local government agencies, as well as Interior officials, were not aware of the guidelines and had not used them. 
Moreover, the 1994 guidelines do not address certain concepts that are essential for reliable estimates of impact costs, such as calculating revenue received from providing services. Providing more rigorous guidelines to the affected jurisdictions that address concepts essential to producing reliable impact cost estimates and promoting their use for compact impact reports would increase the likelihood that Interior can provide reliable information on compact impacts to Congress. Although Interior took initial steps to implement GAO's recommendation in 2012, it has not yet provided updated guidelines for estimating compact cost impacts. GAO is not making new recommendations in this statement. In its 2011 report, GAO recommended that Interior disseminate adequate guidance for estimating compact cost impacts and call for the affected jurisdictions to apply these guidelines, among other steps needed to assess and address the impact of the growing compact migration. Interior concurred with the recommendation on providing adequate guidance for estimating compact cost impacts.
The Forest Service’s mission includes sustaining the nation’s forests and grasslands; managing the productivity of those lands for the benefit of citizens; conserving open space; enhancing outdoor recreation opportunities; and conducting research and development in the biological, physical, and social sciences. The agency carries out its responsibilities in three main program areas: (1) managing public lands, known collectively as the National Forest System, through nine regional offices, 155 national forests, 20 national grasslands, and over 600 ranger districts; (2) conducting research through its network of seven research stations, multiple associated research laboratories, and 81 experimental forests and ranges; and (3) working with state and local governments, forest industries, and other private landowners and forest users in the management, protection, and development of forest land in nonfederal ownership, largely through its regional offices. The nine regional offices, each led by a regional forester, oversee the national forests and grasslands located in their respective regions, and each national forest or grassland is headed by a supervisor; the seven research stations are each led by a station director. These offices, which we collectively refer to as field units, are overseen by a Washington, D.C., headquarters office, led by the Chief of the Forest Service. The Forest Service has a workforce of approximately 30,000 employees, although this number grows by thousands in the summer months, when the agency brings on seasonal employees to conduct fieldwork, respond to fires, and meet the visiting public’s additional needs for services. Forest Service employees work in geographically dispersed and often remote locations throughout the continental United States, Alaska, Hawaii, and Puerto Rico. 
Agency employees carry out a variety of field-based activities—including fire prevention and management, monitoring and maintenance of recreational trails, biological research, and habitat restoration—and have diverse skills, backgrounds, and experiences. Forest Service employees include a wide range of specialists, such as foresters, biologists, firefighters, administrative staff, research scientists, recreation specialists, and many others, all of whom play an important role in carrying out the agency’s mission. In the early 2000s, the Forest Service began efforts to centralize many of the business services needed to support its mission activities, including (1) budget and finance, (2) human resources management, and (3) information technology. Before centralization, according to the agency, more than 3,500 employees located in field units throughout the nation carried out business service tasks in these three areas for their respective field units. These business service employees were part of the field-unit organizational structure and typically reported directly to the managers of those field units. Each region or forest often had unique processes or systems for completing business-related tasks, such as varied processes for financial accounting and budgeting, personnel actions, and computer support. Faced with a number of internal and external pressures to change the way these business services were delivered, and to address growing costs of service delivery as well as operational problems, the agency began efforts to centralize its business services. 
For budget and finance and human resources management, the agency began reengineering efforts for its business processes, which included preparing business cases outlining the agency’s intended approach to centralization, such as how the centralized structure would be organized and how it would provide services to its field-unit customers, as well as estimating the one-time investment costs and future costs of providing services each year once centralization was complete. Centralization of information technology, on the other hand, consisted of several efforts to consolidate servers and data centers, among other things, and was driven largely by competitive sourcing, whereby the agency and its employees competed with private-sector organizations to deliver certain information technology services. The Forest Service won this competition, and, beginning in 2004, the agency transferred some of its information technology employees to an “Information Solutions Organization” (ISO)—a separate information technology component established within the agency to provide technology support services, including computers, radios, and telecommunications to all employees. During 2008, however, the Forest Service terminated its competitive-sourcing arrangement with ISO, folding these services back into a single information technology organization. Centralization activities were carried out separately for each of the three business services over several years and—given the magnitude of its efforts and potential for significant cost savings—the agency undertook efforts to monitor and report on its results during this time. For example, centralization of budget and finance was implemented in 2005 and 2006 and involved the physical relocation of most finance-related positions to the Albuquerque Service Center, with these positions now reporting to the new centralized budget and finance organization.
Some budget-related positions and tasks, however, such as budget formulation and execution, generally remained in the field units, and those positions continued to report to field-unit management. Similarly, centralization of human resources management began in 2006 and proceeded through a staged implementation over a period of several years, in which most human resources management positions were relocated to the Albuquerque Service Center (although some human resources liaison positions were developed to provide advice and counsel to managers across multiple field units). Under the new centralized organization, all human resources employees reported to human resources management rather than field-unit management. In contrast, although aspects of information technology centralization began as early as 2001, those related to transferring services to the agency’s new ISO occurred in 2004 and 2005. Some information technology positions were relocated to the Albuquerque Service Center, but many employees remained at field-unit locations and became “virtually centralized” employees, reporting to centralized management in Albuquerque. For each of the three business services, the Forest Service predicted that the transition from its largely decentralized field-based structure to the new centralized organization would take about 3 years, although full integration in some cases could take longer, given the significance of the changes. During this transition period, the agency took steps to assess and report on the status of, and results being achieved through, centralization and provided executive briefings to congressional stakeholders and agency leaders. These briefings provided an overview of implementation timelines, key milestones, and achievements, as well as agency estimates of projected and achieved cost savings resulting from centralization. For information technology, these estimates specifically focused on savings related to the agency’s ISO.
The three centralized business services encompass a wide variety of activities to support field units’ mission work, ranging from making payments to partners for trail maintenance, to repairing radios used for communication in the field, to processing the paperwork to bring new employees on board (see table 1). Collectively, the budgets for the three business services were approximately $440 million in fiscal year 2011, which represents about 7 percent of the agency’s annual operating budget of more than $6.1 billion. There were 2,150 budgeted full-time equivalents (FTE) for the three services, or about 6 percent of the agency total of more than 35,000 FTEs. Table 2 shows the 2011 staffing and budget levels for each of the three business services. Centralization of Forest Service business services contributed to several agencywide improvements, such as improved financial accountability, standardization of information technology and human resources processes, and consistent development and implementation of related policies. Nevertheless, we found that the shift in how business services were delivered resulted in significant negative repercussions for field-unit employees, including increased responsibility for business service tasks. Although the effects of centralization on employees varied, cumulatively they have negatively affected the ability of these employees to carry out their mission work. By consolidating and standardizing the Forest Service’s financial systems and procedures, centralization helped alleviate some of the agency’s long-standing problems with financial accountability. For example, before centralization, the agency had difficulty reconciling data produced by the numerous financial systems used in field units across the agency. 
Throughout the 1990s, the Forest Service was unable to achieve clean financial statement audit opinions, and in 1999, we added financial management at the agency to our list of federal programs and operations at “high risk” for waste, fraud, abuse, and mismanagement. While the agency was able to achieve clean opinions during the early 2000s, doing so required substantial year-end financial adjustments involving significant time and resources. By consolidating and standardizing its finance, accounting, and budget processes through the centralization of budget and finance, the agency was able to improve its financial management and sustain clean financial statement audit opinions more easily and at a lower cost than before centralization, according to agency officials. Accordingly, in 2005, we removed the Forest Service from our high-risk list, citing the agency’s centralization efforts. Similarly, centralization made it easier to standardize and automate other business processes, which improved the agency’s ability to collect and review more-reliable agencywide data and make more-informed management decisions. For example, according to information technology officials, centralization has allowed them to more easily track major technology equipment and infrastructure issues and address them holistically, as well as to provide a more even distribution of technology services, among other benefits. According to agency officials, centralizing the three business services has also made it easier to monitor and assess the performance of business service delivery to field-unit customers, such as the timeliness of processing requests for service. Officials told us that this type of information is closely tracked, analyzed, and used to hold managers accountable for ensuring successful program delivery. 
Further, data collected through automated systems are now generally more reliable, according to agency officials, in part because they collect more-standardized information, have more built-in controls, and require fewer people to enter data. In addition, centralization of the three business services has allowed for more-consistent policy development and implementation, according to agency officials. Before centralization, for example, business services staff were located at hundreds of sites across the country and reported to individual field units, making it difficult to ensure consistent policy implementation. Now, with business service employees under a single management structure, agency officials told us, it is easier to develop and communicate policy procedures to help ensure their consistent implementation, as well as to provide field-unit employees with consistent access to services across the agency. Similarly, information technology officials told us that centralization has also benefited the agency in the face of increasing complexity and sophistication regarding information management needs, allowing for more coordinated and timely responses to continually changing needs. For example, officials said that centralization facilitated the implementation of security requirements across the multiple field units and improved the agency’s ability to ensure that all employees use compatible hardware and software. Further, under centralization, business service staff have been able to more easily specialize in certain areas, which has improved consistency and overall service quality, according to agency officials. For example, agency officials told us that before centralization, field-unit staff might process requests for specific services, such as retirements or transfers, only occasionally, and therefore might be unfamiliar with the correct procedures to follow or guidance to give to employees. 
Now there are dedicated groups of employees at the centralized business service centers who have specialized knowledge of each process, which has led to consistent implementation of policies and overall improvements, according to agency officials we spoke with. Even with these improvements, we found that centralization—particularly of human resources management and information technology—has had significant and widespread negative repercussions for field-unit employees. Centralization changed many processes for completing administrative tasks, placing greater responsibility on field-unit employees. From our interviews, site visits, and focus groups with a broad cross-section of Forest Service employees—as well as our reviews of multiple internal agency assessments—we found that centralization of budget and finance generally affected fewer employees and is viewed by employees as now working reasonably well, whereas changes in human resources management and information technology affected more employees and created more problems for them in carrying out daily tasks. This section describes the effects that centralization had on employees; the agency’s actions to address employee concerns are discussed in detail later in this report. Centralization changed the processes for completing most administrative tasks associated with the three business services, shifting a larger portion of the responsibility for these tasks to field-unit employees. This shift occurred because employees previously responsible for the task were eliminated, relocated, or reassigned, leaving the task itself behind, and because certain tasks became “self-service”—that is, field-unit staff were generally expected to initiate or carry out certain tasks that were previously handled by local business service specialists. 
Under the centralized self-service model, to complete many business service tasks, field-unit employees are generally responsible for accessing automated systems, locating and filling out automated forms, submitting information through these systems, and calling one of the three business services’ centralized help desks for assistance when they are not able to complete an action on their own. For example, before centralization, to complete retirement, health benefits, pay-related, or other personnel paperwork, field-unit employees would receive assistance from field-unit-based human resources specialists, who would also be responsible for processing the actions. Now, under the centralized self-service model, field-unit employees are to initiate or implement these actions directly through automated systems, with a centralized help desk available to offer advice on how to complete the action when questions arise. Similarly, for information technology-related tasks, before centralization, a field-unit employee would rely on a local field-unit-based technician to troubleshoot a computer problem, whereas under the self-service model, the employee is expected to seek self-help tools, such as guidance on the agency’s Web site, or to call or e-mail a help-desk representative to troubleshoot the problem. Among the three services that were centralized, we found generally fewer negative effects from centralizing budget and finance. Because many field-unit employees do not regularly perform tasks related to budget and finance, we found that difficulties associated with this centralization effort were not as widely felt as those associated with centralization of the other two business services. 
We consistently found that changes to budget and finance resulting from centralization were generally perceived positively after some early problems—such as the lack of clearly written policies and procedures, unclear or untimely communications to field units, and delayed payment processing—were corrected. Further, once it became clear to field-unit staff what tasks were not centralized, many of those duties were reassigned to budget or administrative staff in the field units. These tasks—such as overseeing the collection and tracking of campground fees—often required local presence or knowledge. A few field units also hired additional administrative staff: for example, one regional office established five new positions to carry out remaining budget and finance-related work, such as assisting individual field units within the region with tracking, managing, and overseeing various financial accounts. One of the crucial factors often cited for the success of the budget and finance centralization effort was the fact that the budget staff in the field units were not centralized and therefore continued to carry out budget and some finance-related responsibilities for the field units. They also often became liaisons with the budget and finance center in Albuquerque, providing critical information to the center and communicating information back to the employees who worked in their local field unit. Nevertheless, we found continuing concern about several aspects of budget and finance centralization. For example, a few field-unit officials told us they have lost the flexibility to efficiently deal with unique circumstances, such as the need for telephone service in certain field units that are active during only part of the year or paying for shared utilities in a building jointly occupied with another agency. Before centralization, officials said they had the authority to easily make needed arrangements. 
Under centralization, in contrast, because these circumstances are atypical and therefore standard processes or procedures may not be applicable, working with centralized budget and finance staff to make appropriate arrangements has proven cumbersome and time-consuming, according to the officials. In addition, according to many field-unit employees, natural resource project managers who manage agreements with external partners, such as other federal agencies and nonprofit organizations, have also had to take on significant additional administrative tasks. These managers have always been responsible for managing and overseeing agreements, but project managers are now also directly responsible for the steps associated with tracking and confirming agreement payments in an automated system. Many project managers we spoke with said they find these tasks confusing and very time-consuming to carry out, in part because the managers use the system infrequently and in part because the system is not intuitive or easy to use. In contrast to centralization of budget and finance, changes resulting from centralizing human resources management and information technology touched nearly all Forest Service employees and were often perceived as overwhelmingly negative, although the extent of the negative perception varied according to the task being performed and the employee performing it. Many employees we spoke with said that when these services were first centralized, significant and extreme breakdowns occurred, affecting a large number of employees, and while they have seen some improvements over time, significant concerns remain. 
Through our interviews and focus groups, as well as our reviews of recent internal agency assessments, including agency-led surveys and focus groups, we found that field-unit employees across all agency levels have continuing concerns with the increased administrative workload resulting from centralization of these two business services and with the tools available to carry out those tasks, including limitations with the automated systems and help-desk customer support or guidance available on service center Web sites. Field-unit employees consistently expressed frustration through agency feedback mechanisms and through our interviews and focus groups about the increased number of largely self-service tasks they are now responsible for as a result of centralization of human resources management and information technology—tasks often requiring a significant amount of time or expertise to complete. Several field-unit staff told us that this self-service approach has in fact resulted in a form of decentralization, as now all employees are expected to have the knowledge or expertise to carry out those specific self-service tasks themselves. Even carrying out simple tasks can prove to be difficult and time-consuming, according to many field-unit employees whom we spoke with. Because staff might do such tasks infrequently, and because the processes or procedures for carrying them out may change often, field-unit employees told us they must spend time relearning how to perform certain tasks every time they carry them out. For example, field-unit staff told us that before centralization, to put a seasonal employee on nonpay status they would simply inform their local human resources specialist, and the specialist would then make the necessary change. After centralization, field-unit supervisors became responsible for directly entering information into an automated system to initiate the change or calling the help desk for assistance.
Because a supervisor may carry out such an action only once a year—and the procedure for doing it might have changed in the meantime—completing this action or other apparently simple actions can be difficult and time-consuming, according to officials. Further, many field-unit employees told us that many other tasks are not simple and in fact require detailed technical knowledge. As a result, they believe they have had to become specialists to complete work they were not hired or trained to do, putting them beyond their level of expertise and making it difficult for them to efficiently or effectively complete some tasks. For example, many staff expressed frustration that they do not have the knowledge or skills to quickly complete specialized tasks, such as updating or repairing computers or other office equipment like telephones or printers. Yet under the self-service model, all agency staff are expected to do so—requiring them to read, understand, and implement technical instructions or contact the help desk, which can take hours or days, depending on the nature of the issue, whereas a specialized technician might be able to carry out the task in minutes. Moreover, many field-unit staff told us that their lack of familiarity with completing such tasks makes them prone to making errors, requiring rework, and adding to the time-consuming and frustrating nature of the process. Centralization of human resources management and information technology entailed greater reliance on numerous automated systems, yet through our interviews, focus groups, and reviews of recent internal agency assessments, we found widespread agreement among field-unit staff that many of the agency’s systems are not user-friendly and have not helped employees carry out their work. 
In the case of human resources management, for example, centralization was predicated on successful deployment of an automated system that was to process multiple human resources-related actions, such as pay, benefits, and personnel actions. When it became clear that this system—known as EmpowHR—did not work as intended, the agency implemented several separate systems to perform its functions, including one for tracking personnel actions, called 52 Tracker. However, we heard from staff across the field units who have to process these kinds of personnel actions that the 52 Tracker system has been slow, cumbersome to use, and counterintuitive, often leading to mistakes and delays in processing important personnel actions like pay raises. We also found that the automated system used to carry out various steps in hiring—known as AVUE—has been difficult to use and navigate by both field-unit managers and external candidates trying to apply for a position within the agency. Although AVUE was in use by the agency before centralization, field-unit managers previously relied on human resources specialists who used the system frequently and were therefore familiar with it, according to managers we spoke with. In contrast, under centralization, field-unit managers are expected to undertake more hiring-related tasks in addition to their other duties, and managers repeatedly told us that creating appropriately targeted job postings within AVUE was an arduous process, frequently resulting in situations where highly qualified candidates were wrongly eliminated from consideration or unqualified candidates were listed along with qualified candidates. We found consistent widespread dissatisfaction, through the interviews and focus groups we conducted, as well as documentation of reviews conducted by the agency, with the responsiveness and support provided by the help desks and Web sites operated by human resources management and information technology.
Specifically, field-unit staff identified the lack of timely and quality assistance from the help desks, which has hindered their ability to complete tasks correctly or on time, although many field-unit employees said they recognized that help desk agents were courteous and were trying to be as helpful as possible. We repeatedly heard that interactions with the help desks were often time-consuming because staff were passed from one customer support agent to another, needed to make several calls before a knowledgeable agent could be reached, or had to wait hours or days for a return call. Many employees told us they often found themselves talking to two or three agents about a given problem, and with each new agent, they had to explain the problem and its context from the beginning. Even with significant explanation, many staff noted that a lack of understanding and context on the part of the help desk customer service agents has been a problem. For example, one employee told us that when he called the help desk for assistance with a failed radio component, the help desk agent had a difficult time understanding that the radio system did not have an actual address where the agent could send a replacement part but was instead located on a remote mountain, where a technician would be needed to install the new component. In contrast, when information technology-related computer problems were simple or routine, many employees we spoke with said the information technology help desk was responsive and generally able to resolve their problems. In fact, we spoke with several employees who said that it was very helpful when a help desk agent could remotely access and control their computers to fix certain software problems.
Conversely, field-unit staff seeking help may be unfamiliar with the concepts, language, or forms related to human resources management or information technology—such as knowing what form to submit to acquire hand-held radios or the various technical aspects of computers or radios—that help desk staff expect them to be familiar with. Thus, field-unit staff may not know what questions to ask or may be unable to frame their questions in a way that elicits the correct or most helpful response from the help desks. Many employees we spoke with indicated that because they have little confidence in the information help desk agents provide, they instead often seek help first from local co-workers or sometimes simply ignore problems such as nonfunctioning computer software or hardware components. Many told us they call the help desks only as a last resort. Many field-unit staff were also unhappy with the business services’ Web sites because it was often difficult and time-consuming to find needed information, and in some cases the information on the Web site was outdated, conflicted with guidance acquired elsewhere, or was inaccessible because the Web links did not work.

Changes resulting from centralization of human resources management and information technology were consistently perceived negatively by field-unit staff across the Forest Service, according to our interviews, focus groups, and documented agency assessments, but we also found that employees’ experience, skill levels, and responsibilities within the agency—such as whether their work was primarily field based or office based or the extent to which they supervised others—often factored into the severity of the problems they described. In general, we found that employees of different experience and skill sets frequently had different abilities or willingness to carry out self-service tasks, navigate automated systems, or communicate with help desks.
For example, some field-unit employees told us they could easily and independently carry out some computer-related tasks, such as updating computer software with remote guidance, while others said they did not feel comfortable carrying out such tasks independently. We also found that field-unit staff whose work requires them to spend significant portions of their time outdoors rather than in the office (field-going staff) appeared to be more severely affected by centralization than primarily office-based staff. For example, office-based employees may not have lost productivity waiting for a help-desk agent to call back, but a field-going employee may have had to choose between going into the field—potentially missing a help-desk return call—and forgoing fieldwork to wait, sometimes several days, for such a call. Also, because under centralization many tasks rely on the use of automated systems accessed through computers and some field-going staff are not issued computers by the agency, finding an available computer to carry out the task can present an added challenge. We also found that staff in supervisory positions were particularly affected by centralization. Under centralization, for example, supervisors are now responsible for completing multiple administrative actions for the staff they supervise, such as processing personnel actions; calling the help desk to resolve issues on behalf of their field-going staff (enabling field staff to go into the field); or ensuring that new staff have working computers, telephones, and access to agency systems and that they take key training upon their arrival. Before centralization, on the other hand, local human resources staff or other support staff would have provided direct assistance with these tasks, according to officials. Taken individually, changes associated with centralization may seem no more than minor inconveniences or inefficiencies.
Cumulatively, however, they have had widespread negative effects on employees and on the agency as a whole, including a reduced amount of time employees can devote to their mission work, increased reliance on workarounds to complete work, increased frustration and lowered morale, and increased safety concerns, as follows:

Less time for mission work. The substantial time and effort needed to complete administrative tasks has in many cases limited the ability of field units to conduct mission work, in many instances fieldwork, according to many field-unit employees. For example, because some field-based activities, such as trail maintenance or river restoration, can be done only during relatively short seasons dictated by biology and weather, delays may make it difficult to accomplish mission goals in any one year. Delays of a few weeks in hiring, for example, could result in much longer delays in getting the work done, and we heard numerous examples of work being delayed or scaled back because of hiring complications attributed to centralization. In one instance, a manager told us that after spending significant time and effort to hire a fuels specialist to carry out fuels management work (such as thinning potentially flammable vegetation that could feed a wildland fire), he was unable to hire anyone who qualified because of problems encountered working with human resources management staff—and, as a result, essentially a year’s worth of work was lost. Many senior field-unit managers, including regional foresters and forest supervisors, reported that because the help desks generally follow a first-come, first-served priority scheme, they often feel powerless to set a high priority for certain actions that may be critical to staff at the forest level.
For example, before centralization, managers could influence which positions might be advertised or filled most quickly, but now hiring actions go through the centralized organization, generally without regard for how quickly a manager believes he or she needs to fill a position.

Increased reliance on workarounds to complete work. For example, we commonly heard that employees rely on local, knowledgeable co-workers to help them with their computer problems or provide advice on completing human resources-related actions. Although this practice may greatly benefit the employees in need of assistance, it could take time away from the other employees’ regular work duties, and if accurate and up-to-date information is not given, it could also result in unintentional errors. We also often heard from field-unit employees that given repeated problems with accessing network drives or other databases, they may store agency data on their hard drives, rather than on central servers, or may share their computers or passwords with others who lack ready access, such as seasonal field staff or visiting research fellows. Such workarounds, however, may result in the loss of information if a hard drive fails, and they are in violation of the agency’s computer security policies.

Increased frustration and lowered morale. Field-unit employees’ frustrations over their perceived loss in productivity, as well as problems that have directly affected employees’ careers with the agency, have often lowered employees’ morale. We commonly heard that spending more time on administrative tasks that are often confusing and complex, and spending less time on mission work, has resulted in significant employee frustration and has often directly lowered employee morale.
We also heard numerous examples where employees’ benefits, pay, position, or other personnel-related actions were negatively affected as a result of a mistake made by, or a miscommunication with, a help-desk agent or other business service staff, which has often greatly affected employee morale, according to those we spoke with. For example, problems cited ranged from confusion over leave balances and appropriate pay levels to promotions that were initially approved by human resources management officials but then later rescinded. Several employees told us that such errors have become so frequent that an “expectation of failure” has generally taken root with many employees, which also contributes to their low morale.

Increased safety concerns. In some cases, field-unit employees told us that problems or delays in getting business service tasks accomplished have increased safety risks for Forest Service employees or the public, for example, by distracting employees from important safety work or by delaying needed equipment repair or replacement. We commonly heard concerns that centralization has caused employees to, as one senior official put it, “take their eye off the ball”—that is, reduce their focus on efficiently and safely carrying out their assigned tasks—because of the increased workload and distractions associated with centralization. We also frequently heard about delays or problems with technical assistance for radios—a key communication tool for firefighting and fieldwork. For example, before centralization, field units would have relied on local technicians to conduct needed repairs, but under centralization, the field-unit staff now first contact the help desk to make such a request. In one case, a field-unit official told us that he needed a simple repair on a radio but had to wait for a technician to drive from a neighboring state to make the 10-minute repair.
In a few other cases, field-unit staff told us they were without full use of their radio system for a significant amount of time while waiting for requests for repair to be addressed by the help desk. For example, one forest-unit official told us that, in place of their radios, the unit had to use cell phones with limited service to communicate for multiple days during the summer, when fire danger was particularly high, putting the staff at increased risk.

The Forest Service has undertaken a number of actions to assess its delivery of centralized business services, in part because of the significant change centralization brought to employees across the Forest Service. These actions, however, have focused largely on assessing the quality of service provided through the service delivery framework established by the agency and have not included a more fundamental assessment of the extent to which, and for which tasks, the self-service approach taken by the agency may be most effective and efficient. Recognizing the concerns raised by many employees about the negative consequences resulting from centralization, the agency has also made significant efforts to address identified shortcomings in the business services provided to field-unit employees. In particular, human resources management and information technology managers are undertaking initiatives to change their overall approach to delivering business services. As a part of these efforts, agency officials told us they are reviewing the experiences of other agencies that have undertaken similar organizational changes for lessons learned and best practices that might apply to the Forest Service. The impact of human resources management’s and information technology’s initiatives, however—including the extent to which these business services will modify their largely self-service-based delivery approach—is not yet clear because many of the changes are still in progress.
Recognizing the significant change centralization brought to employees across the Forest Service, the agency has undertaken multiple actions to assess business service delivery. These actions include ongoing efforts such as the monitoring of service delivery by a customer service board, service-level agreements outlining services to be delivered and specific performance measures to be tracked, and various mechanisms to capture feedback from customers and assess business service delivery. The agency has also conducted targeted reviews and established several short-term review teams to assess particular aspects of its centralized business services. These actions have mainly aimed to assess the quality of service provided by each of the centralized business services and have generally not included a more fundamental assessment of those aspects of business service delivery typically carried out in a self-service manner—including an assessment of how effectively and efficiently self-service tasks are completed by field-unit staff—and therefore the extent to which a self-service approach may be most appropriate.

In 2006, the Forest Service established a 15-member Operations Customer Service Board—chaired by a regional forester and composed of employees representing varied levels and geographic locations within the agency—to monitor the efficiency and effectiveness of the three centralized business services. The board carries out a number of activities to assess business service delivery. For instance, it meets on a monthly basis to, among other things, discuss current issues and projects, hear from board members on detailed oversight activities they are doing, and interact with representatives of the business services to learn about the status of efforts aimed at improving service delivery. The board has also established specific teams to evaluate particular aspects of business service delivery.
For example, a budget team annually reviews detailed budget information from the three business services to identify any concerns, questions, or issues, which the board may then discuss with the business service managers or agency leadership. Similarly, another team annually reviews service-level agreements—contracts established by each business service to define the services they are to deliver and performance measures associated with doing so—to ensure that the performance measures are meaningful and achievable within established budgets. In addition, in 2010 the board established a radio review team to, among other things, assess current and future customer needs regarding radios because of its concerns that the lack of an updated radio plan was seriously affecting employee safety and productivity. The customer service board also holds annual meetings with managers from the three business services to learn about improvements and challenges in business service delivery and to make recommendations for further improvements. During these meetings, the board assesses detailed information developed by the budget team and reviews the service-level agreements proposed by each business service for the coming year. On the basis of its reviews, including the information presented and discussed throughout the year and during annual meetings with the three business services, the board develops recommendations for the managers of the business services and the Chief of the Forest Service, generally aimed at improving service delivery to field-unit employees. Specific recommendations from the board have often centered on improving or clarifying business service budget information and service-level agreements.
The board’s chair told us the board has not directly examined or recommended that the business services systematically examine or modify the extent to which they rely on a self-service delivery approach for completing tasks, but she did say the board recognizes that the approach has resulted in a significant shifting of responsibility for completing business service tasks to field-unit employees. The business services are not required to implement recommendations made by the board, but several board members we spoke with, including the current chair, told us the business services have generally been responsive to the board’s recommendations; they also acknowledged that the business services have been slow to respond in some instances. For example, in 2007 the board recommended that each business service develop or update business plans to contain accurate budget information, including its linkage to program goals and priorities and performance measures, for board assessment. By 2009, budget and finance had prepared budget information that allowed the board to track costs and budget proposals from year to year. In contrast, according to the board, the business plan submitted by information technology in 2009 needed better linkages between budget requests and stated priorities and discussions of trade-offs under various budget alternatives; information technology submitted an updated business plan in June 2011. Human resources management submitted its first business plan to the board in March 2011. Each business service has developed service-level agreements, which are reviewed by, and often developed in collaboration with, customer service board members. These agreements outline services to be delivered and specific performance measures to be tracked, including defining acceptable levels of performance.
In general, the business services’ performance measures capture operational aspects of their service delivery, such as the length of time to process specific actions, and customer satisfaction with service delivery. Few of the measures capture the performance of actions completed by field-unit employees when those employees are responsible for completing a portion of certain tasks, such as initiating a payment to a partner. Monthly or quarterly scorecards indicate the extent to which each business service is achieving acceptable levels of performance across its performance measures. However, the three business services have varied considerably in their development of performance measures that fully and accurately capture their performance, as well as their ability to achieve acceptable levels of performance consistently, with budget and finance generally outperforming the other two services. Specifically:

Budget and finance. Budget and finance has 17 performance measures to capture critical elements of its service delivery. Although small adjustments to the measures have been made over the past several years, the measures have largely remained the same since they were developed in 2006. Metrics have focused on the performance of business service operations, the budget and finance help desk, and actions taken in conjunction with field units. For example, one performance measure tracks the number of days to approve certain travel authorizations, one tracks how quickly customer service agents respond to and resolve customer calls, and another monitors customer satisfaction with the support provided by the help desk. Several performance measures track the timeliness of actions completed by field-unit staff, because some budget and finance processes depend upon actions that must be initiated in a field unit.
For example, one performance measure tracks the percentage of certain invoices received from field units on a timely basis (so that these invoices can then be processed by budget and finance staff). Over the last few years, budget and finance has consistently achieved mostly acceptable levels of performance (as defined in the service-level agreements), with the exception of customer satisfaction with its internal Web site and the actions that must first be completed by field-unit staff. Budget and finance officials told us that several changes have been implemented recently to improve performance in these areas, such as increasing the training provided to field-unit managers and monitoring invoices to better identify trends and problems. Budget and finance officials further told us they will assess the effects of these changes in the future, as well as continue their collaborative efforts with the board to regularly assess the strength of their performance measures in capturing how well services are delivered.

Human resources management. Human resources management officials, and board members we spoke with about human resources management, agreed that performance measures in place over the past several years have not fully or accurately captured all important aspects of service delivery performance. For fiscal year 2010, human resources management had 20 performance measures intended to capture various aspects of internal operational performance, including its responsiveness to requests for customer service, how quickly specific actions such as retirement applications were processed, and customer satisfaction when a service was completed.
Monthly scorecards produced for fiscal year 2010 indicated that human resources management was not achieving acceptable levels of performance for most of its measures, but human resources management officials told us the measures did not accurately reflect the service being provided and that in some cases performance data could not be easily measured or validated. Because of such problems, during fiscal years 2010 and 2011, human resources management staff gradually stopped reporting results for almost half their performance measures. In fiscal year 2011, the staff began working with board members to re-examine and revise the human resources performance measures. In March 2011, human resources management submitted to the board eight draft performance measures, recognizing that several more may need to be developed in the future.

Information technology. Information technology officials, and board members we spoke with about information technology, likewise told us they recognize the need to continue to revise and develop measures to better capture the quality of service delivery to customers. For fiscal year 2011, information technology had more than 30 performance measures, with almost half tracking internal processes, such as the percentage of internal plans or invoices completed and submitted in a timely manner, and the remainder tracking aspects of service delivery to customers or customer satisfaction. Service delivery measures include the time frames for resolving customer requests for assistance, such as computer software or hardware problems submitted to the help desk, and the number of days to create computer accounts for new hires. Customer satisfaction measures include some incorporating the results of an annual customer satisfaction survey sent to all agency employees and one capturing customer satisfaction upon completion of a service requested from the help desk.
Across the performance measures, quarterly scorecards for fiscal year 2010 indicated mixed results: information technology consistently met its target for customer satisfaction upon completion of a service but was consistently unable to achieve acceptable levels of performance in several other areas, including resolving customer incidents within targeted time frames. Information technology officials said they plan to continue developing additional measures to better capture the value and quality of service they are providing to customers.

Officials from all three business services also told us they use customer feedback obtained through various mechanisms to assess their business service delivery. For example, each of the three business service help desks offers customers the opportunity to give direct feedback about their experience with each request for service. Each business service also provides opportunities for staff to send electronic comments through links on its Web site. In some instances, according to agency officials, focus groups have been put together to solicit feedback from employees. For example, in 2010, an internal team conducted 20 focus groups with small groups of field-unit employees to obtain their perspectives on ways the three business services could improve the support they provide to customers. Officials from each service said they closely monitor the feedback that comes in through these various mechanisms to identify issues and trends they may need to address. For instance, human resources management officials told us that feedback they received from field-unit employees has led them, among other actions, to hold specific, online training sessions before the general hiring period for seasonal staff, to improve the information they make available to field-unit employees.
The Forest Service has also conducted targeted reviews to help identify the causes of continuing problems with human resources management and delivery of information technology services and to help develop recommendations or potential approaches for improvement. In 2008, for example, Forest Service leadership commissioned a review by a private consultant to assess problems in delivering human resources management services, underlying causes of those problems, and potential solutions. The consultant identified a number of factors contributing to problems, including flawed assumptions about the types of human resources-related transactions that could easily be automated or made self-service; inadequate information systems that either did not work as designed or were not intuitive or user-friendly; and the significant loss of human resources expertise, resulting in skill gaps at the centralized business service center. The consultant concluded that efforts undertaken to date would not resolve all underlying problems and that, instead, a fundamental redesign of the service delivery model was needed to fully address deficiencies. The consultant recommended that the agency set up two project teams, one to identify ways to improve existing human resources management processes and one to examine longer-term service delivery options. On the basis of this recommendation, agency leadership developed two such teams to identify priority issues and options for action. The results of the teams’ work were presented to Forest Service leadership in December 2009, and actions the Forest Service has taken in response are discussed in greater detail later in this section. Similarly, in 2009, on the basis of a recommendation by the customer service board, an internal agency review team was developed to assess the effectiveness of information technology in managing the agency’s information resources. 
The review team, led by a regional forester and composed mostly of senior managers, concluded that there were several fundamental problems with the service delivery model in place and that aggressive action to address these problems was warranted. The review team found widespread confusion about the information technology organization’s relationship to the Forest Service’s mission. For instance, the review team found that agency executives were not fully engaged in defining and managing the information technology function as a vital part of the agency’s mission and that the connections among the organization, agency leadership, and the field units were limited. In response, the review team recommended that the agency develop a strategic framework to clearly identify and explain how the information technology organization is linked to the agency’s mission. The review team also found confusion surrounding information technology’s system for setting priorities and allocating funding, and it recommended improvements to clarify and provide more transparency to these areas. In addition, the review team recommended changes to the organizational structure of information technology to improve customer support, concluding that increased service in some areas may be needed. The recommendations of the review team are being considered by the Forest Service as part of the ongoing reorganization efforts discussed below.

In part following recommendations made in various assessments of its business services, the Forest Service has taken, and continues to take, steps to improve performance in each of these services. Budget and finance has efforts under way aimed at continuous improvement, but human resources management and information technology are making more-significant changes to their overall service delivery approach.
It is unclear, however, to what extent additional changes will correct remaining shortcomings—or to what extent changes will alter the agency’s reliance on a self-service delivery approach for many tasks—in part because these changes are still in progress.

Although its centralization efforts have largely been considered successful by agency leadership, budget and finance continues to make efforts to improve its business service delivery. For instance, budget and finance recently implemented an automated tool to allow employees to electronically submit requests for miscellaneous obligations, which will eliminate manual data entry into the financial system—thereby reducing the potential for error, improving processing times, and allowing employees to check the status of their requests in real time. Officials reported they are also working to streamline processes and information sharing for tracking unspent monies and closing out some partner agreements. To improve communication and collaboration with field-unit staff, budget and finance officials reported they have begun placing their monthly conference notes—which contain information about such things as new systems, processes, or procedures being put in place—on their Web site for relevant staff to review. In addition, to be more responsive to customers, officials said they are currently working toward electronic tracking of help-desk requests, so that customers can easily see the status of these requests in real time as well.

Over time, human resources management has undertaken various efforts to improve specific aspects of its services in response to identified shortcomings—for example, by improving the operations of its help desk and payroll system. More broadly, recognizing that centralization has continued to pose serious and persistent problems, the Forest Service began a substantial effort to more comprehensively address performance shortcomings.
This effort includes (1) an initiative to redesign human resources management’s structure, (2) replacement of several key automated systems, and (3) improvements to the customer service provided by the help desk. Regarding structural redesign, Forest Service leadership in December 2009 decided, after examining several options, on an approach aimed at, among other things, restoring relationships between field-unit management and the human resources management program by establishing regional service teams to assist field-unit managers with certain functions. Under this approach, the Forest Service’s regions would be assigned teams of 9 to 64 human resources management staff, depending on the size of the region. To this end, Forest Service leadership gave human resources management the authority to hire up to 208 additional full-time staff to make up the regional service teams; these staff members may be physically located in the regions or at the Albuquerque Service Center. During 2010 and early 2011, the agency established these teams, which are to assist managers in field units with four specific services: position classification, hiring, employee relations, and labor relations. The service teams remain within the human resources management organization, and, according to the agency, the goal is that the service teams will develop a relationship of shared accountability with regional leadership, so that regional leadership will have more influence on certain aspects of human resources management work. Human resources management officials explained that the redesign was being implemented using an “adaptive management approach,” under which field-unit leadership will have the flexibility to influence the work carried out by the service team assigned to their region. 
Many Forest Service field-unit staff we spoke with expressed optimism about changes being made under the human resources management redesign initiative, but it remains uncertain to what extent such changes will result in significant improvements. Because regional service teams were established only recently, and because some aspects of the service teams’ roles and responsibilities have yet to be clearly defined, staff said it was too early to comment on resulting improvements. For example, while certain aspects of position classification will be the responsibility of regional service teams, it is not clear to what extent service teams will directly assist supervisors with completing technical and procedural tasks associated with position classification. According to human resources management officials, classification specialists have been assigned to the regional service teams to work more closely with regional managers on several tasks related to position classification, but initiating and completing a classification action request generally remain with field-unit supervisors. Several field-unit staff we spoke with expressed concern that if supervisors continue to be responsible for carrying out classification work requiring detailed technical and procedural knowledge, then redesign will do little to reduce the burden placed on supervisors for completing these tasks. Further, many field-unit staff we spoke with remained concerned that, even after the redesign initiative is fully implemented, they may not see a reduction in the time needed to complete human resources-related tasks, especially self-service tasks, because processes and responsibilities for those tasks have stayed unchanged under redesign. Human resources management officials told us that many of the field-unit staff’s frustrations stem from increased responsibilities placed on supervisors. 
They explained that before centralization, local administrative staff sometimes assisted with certain supervisory-related tasks, such as helping track employee performance, but that under centralization, that support may no longer be available. Human resources management officials said that tasks that are supervisory in nature should be the responsibility of supervisors, although they also acknowledged that no clear agreement prevails across agency leadership on what types of administrative tasks supervisors should be responsible for, and they recognized the need to more clearly identify and define supervisory tasks. One agency official added that a 2010 presidential memorandum directs supervisors with responsibility for hiring to be more fully involved in the hiring process, including engaging actively in identifying the skills required for the job and participating in the interviewing process when applicable. Human resources management officials told us they also recognize the need to re-examine which business service tasks best lend themselves to self-service and which tasks may need greater expertise or direct support by human resources specialists; they told us they plan to revisit this issue after the regional service teams are fully established. They could not, however, provide us with any concrete plans or target time frames for this effort. Without a systematic re-examination, the agency risks continuing to burden its field-unit staff with tasks they cannot perform efficiently. In addition to the organizational redesign initiative, human resources management officials told us they have efforts under way to replace several key automated systems that both human resources management staff and field units rely on to carry out human resources-related tasks, and to make these systems more integrated, flexible, and user-friendly. 
In particular, human resources management is embarking on a long-term effort to develop an integrated workforce system that ultimately is to consolidate and streamline human resources processing for all Department of Agriculture agencies, including the Forest Service. The effort to develop this system, called OneUSDA, is currently being co-led by the Forest Service. Human resources management officials said initial efforts are focused on the development of a system for benefits and pay processing; eventually they expect the system to be expanded to other actions, such as hiring. By aligning efforts across the department, human resources management officials said, they will be better positioned to standardize and share information across agencies. This initiative is still in early stages of development, and agency officials said that, although they recently determined all necessary requirements across the department’s agencies, it could take at least 5 years to establish basic system functionalities. In the meantime, human resources management has had efforts in progress to improve several of its current systems—many of which were put in place after the EmpowHR system, deployed when the agency first centralized, proved inadequate—but these efforts have themselves been problematic. For instance, human resources management has been working to replace 52 Tracker, one of the personnel tracking systems it put in place of EmpowHR, which has been widely cited as slow and difficult to use. According to agency officials, the Forest Service hired a contractor to develop a replacement system for 52 Tracker, which was expected to provide improvements such as automatically populating certain fields. In January 2011, however, after 2 years of work, the agency discontinued the effort, concluding that what the contractor developed would not meet the agency’s needs. 
Instead, human resources management officials said they are now building an in-house system, which they expect to be deployed in 2012. In addition, human resources management officials said they have taken steps to mitigate known weaknesses with their AVUE hiring system, such as manually going through some candidate lists to make sure candidates are not inadvertently put on an incorrect list; the officials told us they will be revisiting the use of AVUE altogether over the next year. Human resources management has also undertaken several actions to improve customer service provided to employees through its help desk. For example, human resources management staff conduct monthly focus groups with 40 field-unit employees, representing a diverse range of positions, to seek input on help-desk initiatives and other performance issues or concerns raised by customers in field units. Also, during 2010, human resources management made enhancements to its help-desk ticketing system, which allowed employees to track the status of their requests in real time and identified help-desk staff assigned to employees’ cases, so employees could call the help-desk person directly if needed. It is also developing a comprehensive training program to enhance the technical knowledge and skills of its service providers, has added specialists to handle certain issues and developed troubleshooting guides to assist help-desk staff in diagnosing issues brought to their attention, and has reported reducing telephone wait times significantly for employees calling the help desk. In addition, human resources management recently developed or updated its standard operating procedures for a number of human resources-related areas, including benefits, pay and leave, performance and awards, labor relations, hiring, and temporary employment. 
These operating procedures have been made available on human resources management’s Web site, and managers are hopeful the procedures will improve the consistency of information provided to and used by field-unit employees. Because some of these initiatives are relatively new, their impact on field-unit employees has not yet been assessed. Information technology managers have recently undertaken several actions to improve service delivery to field-unit employees and, for some tasks, provide more direct assistance to those field-unit employees who might need it. For example, in 2010 information technology developed “strike teams” consisting of information technology specialists who traveled to sites across the agency giving employees hands-on help with transferring their electronic files to new servers. Information technology also recently provided customer service training to the majority of its staff and has been working to raise awareness among field-unit staff—through efforts such as posting additional information on its Web site—of the existence of customer relations specialists who serve as liaisons and are available as local resources for field-unit employees. Nevertheless, it is unclear to what extent these efforts have been effective, because they were not mentioned by the employees we interviewed or those who participated in our focus groups. In addition, after the Forest Service folded its technology support services back into a single organization when its competitive sourcing arrangement was terminated in 2008, the information technology service began a reorganization initiative to significantly modify its service delivery approach. Forest Service leadership, however, put the reorganization initiative on hold in 2009 until the agency could develop a strategic framework establishing high-level goals and objectives for managing its information resources and clarifying information technology’s role in decision making. 
Agency officials told us that, given the problems surrounding decision making and priority setting under the centralized model, the agency also needed to clarify its processes for making information technology resource decisions, including creating a system for setting priorities and allocating funding for new technology investments. With these efforts completed in 2010, a team led by senior Forest Service managers has been formed to assess the current organization and recommend changes by December 2011, according to agency officials. As part of these efforts, the agency has stated that improving customer service, and specifically addressing the level of self-service that will be expected of employees, will be a key focal area for the reorganization team. Information technology managers told us they recognize that under centralization they relied too extensively on a self-service approach and saw the need to seek alternatives to improve service delivery to employees, but they also recognize the need to be mindful of the higher costs that come with increased service. Given that the reorganization initiative is still in early stages, and specific plans and targets have yet to be documented, the extent to which the agency will alter its self-service approach—and whether the revisions will address identified shortcomings—remains unclear. Achieving significant cost savings was one of the key goals of the Forest Service’s centralization effort, with the agency estimating it would save about $100 million annually across the three business services—budget and finance, human resources management, and the ISO component within information technology. 
But because of limitations with the agency’s documentation supporting the data, assumptions, and methods used in developing its cost information both before and after centralization, we were unable to fully ascertain the reliability of its cost estimates for (1) baseline costs of providing each of the business services before centralization, (2) projected costs for providing those same business services after centralization was complete, or (3) actual costs of providing the business services after centralization. Despite these limitations, the Forest Service estimated that projected annual savings through fiscal year 2010 may have been achieved in budget and finance but not for the other two business services. With its centralization efforts, the agency projected it would achieve significant cost savings—about $100 million annually across the three business services—generally after a transition period, lasting around 3 years, in which it would incur one-time investment costs (see table 3). Investment costs generally comprised the costs of acquiring and establishing business service offices at the Albuquerque Service Center, transferring business service employees located in various field units to the new center, training these employees, and paying management and project consulting fees. Overall, projected annual cost savings were largely based on anticipated staff reductions for all three business services. For example, for budget and finance, the agency projected it would be able to eliminate 830 of the 1,975 FTEs it estimated went toward budget and finance-related activities before centralization, accounting for a significant portion of the projected annual cost savings. In addition, for information technology, the agency’s cost-savings estimates were tied specifically to savings it estimated it would achieve by shifting the support services portion of its business service to ISO. 
Information technology officials told us they expected to achieve additional savings related to other centralization efforts outside ISO, but these savings were not included in the agency’s projections. We found several limitations with the Forest Service’s estimates of its baseline costs, which call into question whether the agency had an accurate starting point from which to measure any savings achieved from centralization. For example, the agency’s baseline costs for budget and finance and human resources management relied largely on estimates developed with the help of contractors during the centralization-planning process, because the agency otherwise did not have a means to readily distinguish and capture actual costs associated with the business service activities being done by staff located at hundreds of field units across the country. The Forest Service, however, did not maintain sufficient supporting documentation to indicate what data, assumptions, or methods were used to develop its baseline cost estimates, and therefore we were unable to determine what types of costs may have been included or excluded or to assess the reasonableness of the assumptions and methods behind the estimates. Without clear information on what baseline cost estimates consisted of, or on the reliability of such information, we are unable to assess whether the estimates serve as an accurate basis for comparing postcentralization costs to determine achieved savings. Similarly, although the agency took steps to measure savings achieved from centralization for fiscal years 2005 through 2007, agency officials could not provide supporting documentation, which limited our ability to assess the agency’s methods or determine the reliability of the underlying data. 
For example, according to its September 2007 estimate, the agency estimated that it achieved a savings of $85 million for fiscal year 2007 across the three business services, attributing the savings largely to staffing reductions. Agency officials, however, were unable to provide documentation on the information or methods used to determine reported staff reductions or the associated impact on operational costs. In addition, although the agency’s September 2007 estimate indicated that one-time investment costs for fiscal year 2006 totaled $68.6 million for budget and finance and human resources management, we found that an earlier estimate developed for that same period showed one-time costs of $34.3 million. After further review of the documentation, agency officials acknowledged that the September 2007 estimates appeared to reflect a double counting of costs contained in the earlier estimate. Potential errors such as this one raise questions about the accuracy of the data, but without supporting documentation detailing the agency’s specific methods and estimates, we were unable to assess the data’s reasonableness or reliability. The Forest Service terminated its efforts to measure the cost savings associated with centralization at the end of fiscal year 2007, although at our request it developed updated estimates through fiscal year 2010—but with those estimates, too, we were limited in our ability to assess the reasonableness or reliability of much of the information. Specifically, since limited information was available to understand the assumptions and methods the agency used to develop both its baseline cost estimates and its estimates of savings achieved through 2007, agency officials acknowledged they were unsure whether the methods used to produce the updated estimates were consistent with those used previously. 
For example, Forest Service officials were unable to confirm whether or to what extent certain technology and associated implementation costs were accounted for consistently across the agency’s various estimates of baseline costs, projected costs, or achieved savings. Similarly, it was unclear to what extent changes in the scope of work to be done by the centralized business services or unanticipated significant new requirements—such as new mandated information technology security requirements or an agencywide travel system—were incorporated into the agency’s estimates of cost savings. In addition, several field-unit officials we spoke with said that some of the projected cost savings relying on a reduction in field-unit facility costs may not have materialized because the facility costs did not decrease (e.g., because of long-term lease agreements or because space could not easily be configured to accommodate reducing just a few positions). Given the lack of detailed information supporting the Forest Service’s estimates, however, it is not possible to determine the extent to which the agency may have factored updated information into its calculations of cost savings. Further, the estimates of savings for the business services likely do not account for the time now spent by field-unit employees on the whole range of business service-related tasks that these employees did not perform before centralization. Given the substantial shifting of responsibility to field-unit employees for many business service tasks after centralization, even a small amount of time that the agency’s more than 30,000 employees spend on such tasks could add up to significant associated costs that the agency’s estimates likely do not account for. 
If the agency estimated cost savings by, in part, calculating the number of business service-related staff it reduced but did not factor in the time spent by employees who picked up portions of the business service-related work, then the agency’s cost-savings estimates for the business services may be overstated. Complete and accurate information for pre- and postcentralization costs is essential to accurately determine the extent of achieved cost savings and the reasonableness of key assumptions used to develop cost estimates. Standards for Internal Control in the Federal Government highlights the importance of comparing actual performance data with expected results to determine whether goals are met for accountability for effective and efficient use of resources. It also calls for agencies to clearly document significant events, such as those involving major organizational changes, and to maintain documentation so it is readily available for examination. In addition, in March 2009, we issued a cost-estimating guide, which compiles cost-estimating best practices drawn from across industry and government. This guide notes the importance of sound cost-estimating practices, including developing in-depth cost-estimating models that actively address risks by estimating the costs associated with potential delays, workarounds, or other key risks, and properly documenting cost estimates so they can be independently validated, updated, or re-created. Specifically, the guide explains that documentation describing the methods and data behind estimates not only allows others to understand how an estimate was developed and to replicate it, but also facilitates updating the estimate as key assumptions change or more information becomes available. In addition, the guide indicates that well-supported and well-documented cost estimates can serve as a reference to support future estimates. 
As the Forest Service moves forward with its initiatives to redesign and reorganize its human resources management and information technology services, neither it nor others will be able to fully assess the cost-effectiveness of these initiatives or track updates as assumptions or other information changes without complete and accurate cost-estimating information. Despite limitations in the information it provided, the Forest Service estimated that, through fiscal year 2010, it achieved intended annual savings in budget and finance but was not able to achieve intended savings for human resources management or the ISO component within information technology. Selected aspects of the agency’s estimates of achieved savings for the three business services are described below, along with limitations that raise further questions about their reliability. The Forest Service estimated that from fiscal year 2006 through fiscal year 2010, it reduced its annual budget and finance costs by about $47 million per year, on average—exceeding its cost-savings goal by more than $8 million annually. According to agency documents, it incurred one-time investment costs totaling $54 million, about $9 million more than the initially projected amount of $45 million. According to agency estimates, a large portion of the cost savings was attributable to staff reductions. For example, agency data suggest that in 2010, 377 fewer FTEs than before centralization were assigned to positions most closely associated with budget and finance work. We found, however, that the agency’s estimate of postcentralization costs was based in large part on estimates of the costs of field-based budget and finance activity that agency officials told us had not been validated—raising questions about the reliability of these cost estimates and therefore about the agency’s reported cost savings. 
Specifically, estimates of postcentralization costs included costs for both the centralized budget and finance organization and the budget and finance activities that largely remained in the field units. Over half these estimated annual costs, however, were for field-based activities, and they were derived from estimates stemming back to the agency’s centralization planning documents. According to agency officials, cost estimates developed for the field-based activities were based on the number of field-based FTEs that the agency projected would continue to do budget and finance-related work after centralization. The officials said they have not taken steps to assess the accuracy of this portion of their cost estimates because they lack readily available data on these specific costs from the agency’s accounting system and because the additional steps to validate actual FTEs and associated costs would take significant time and resources. Many field-unit staff we spoke with said they continue to devote significant resources to performing budget and finance activities, and in some cases field units have hired additional staff to carry out the work. Regardless, without sufficient data to compare the agency’s initial projections of field-based budget and finance costs before centralization with actual postcentralization costs, the ability to assess the extent of achieved cost savings is limited. The Forest Service estimated that from fiscal years 2006 through 2010, it reduced its annual human resources management costs by about $11 million per year, on average—falling far short of its projection of $31 million in annual savings. In fact, by fiscal year 2010, the Forest Service estimated that annual human resources management costs were almost $1 million more than the agency estimated they would have been without centralization. The agency estimated that one-time investment costs totaled $76 million, $15 million more than projected. 
According to agency officials, higher-than-expected annual costs were largely due to increases in staffing and technology costs for new automated systems. By 2010, for example, the agency reported that staffing exceeded 650 FTEs, compared with the fewer than 400 FTEs estimated in its initial projections. In addition, agency officials stated that in fiscal year 2008, the Forest Service retained a contractor to assist in processing the extensive seasonal hiring the agency undertakes each year. They explained that the contractor was necessary to process personnel actions for the approximately 15,000 to 18,000 staff temporarily hired each year because human resources management does not have the staff to process these transactions in a timely fashion. The agency’s current redesign initiatives and other efforts are likely to further significantly affect the costs of providing human resources management services, but the nature and extent of those effects are unclear because the Forest Service has not evaluated the long-term financial impacts of its planned changes. In the short term, costs are likely to rise substantially, given the agency’s planned increases in staffing in connection with redesign of human resources management. Specifically, during fiscal year 2011 human resources management planned to increase staff by up to 208 additional positions over fiscal year 2010, according to agency documents, which would bring the new total to 970 positions—more than twice the number of FTEs estimated in initial agency projections. Agency officials attributed some of the increases to additional unanticipated work requirements, such as activities related to time-and-attendance reporting and unemployment compensation, which human resources management continued to perform after centralization. 
In addition, although the agency is actively pursuing OneUSDA to serve as its comprehensive human resources management system, it has not yet projected the costs to develop and implement this system. The agency developed a business plan for fiscal years 2011 through 2013, which estimated some costs for its human resources management service for those years, but this plan did not specify costs, if any, related to its OneUSDA effort. The plan also did not clearly explain how future staffing would change to achieve a forecasted 10 percent reduction in salary costs by fiscal year 2013, especially in light of current redesign efforts and their associated increase in staffing levels. Furthermore, the plan did not contain any discussion of the potential long-term financial impact of these efforts on future human resources management costs. The Forest Service’s estimates of cost savings for centralization of information technology generally focused on its ISO, which, according to the agency, resulted in annual savings of about $22 million from fiscal year 2005 through fiscal year 2008—falling short of the agency’s goal of $30 million in annual savings. The agency estimated that it also incurred about $12 million in investment costs as part of these centralization efforts. As part of its savings estimate, the agency reported that it had reduced information technology-related staffing by 554 positions. Agency officials also stated that, anticipating significant savings resulting from centralization, the Forest Service in fiscal year 2005 dissolved the portion of its working capital fund related to computer hardware and software, allowing it to spend the approximately $60 million balance elsewhere in the agency. The agency, however, did not provide sufficient documentation for us to determine how this action specifically related to, or may have affected, the agency’s estimates of the savings that resulted from ISO centralization. 
In addition, because the Forest Service’s efforts to measure cost savings focused on ISO, any savings associated with centralizing information technology services outside of ISO (such as those related to replacing computing and telecommunications hardware, software, and radio systems) were not included in agency estimates. During fiscal year 2008, the Forest Service terminated its competitive sourcing arrangement with ISO, folding these service activities back into one information technology organization, which limited the agency’s ability to consistently measure cost savings because ISO-specific costs were no longer tracked separately. Regardless, the cost of providing information technology services overall has grown steadily over the last several years: the agency estimated that total costs have increased about 8 percent per year, on average, from fiscal year 2006 through fiscal year 2010. The agency’s lack of supporting documentation for several of its information technology cost estimates raises questions about the reliability of this information. Specifically, a business case was not prepared for the information technology centralization effort, and, although agency officials indicated that projected annual cost savings were derived from competitive sourcing documentation (i.e., from the agency’s bid under the competition for providing services using agency employees), they were unable to demonstrate how such documentation supported the estimate of baseline costs or projected yearly costs after centralization. Also, agency officials were unable to specify how their estimates of achieved savings, including those attributed to reported staffing reductions, were derived, noting, among other things, that they were unable to locate documentation supporting their estimates because many information technology employees who may have been familiar with these efforts had left the agency. 
These limitations echo concerns we raised in 2008 about the reliability of Forest Service efforts to measure information technology-related cost savings. Specifically, in January 2008 we reported that the agency was unable to provide sufficient information to substantiate the approximately $35.2 million in savings it reported to Congress as part of its ISO competitive sourcing arrangement for fiscal years 2005 through 2006. We noted that, in addition to the lack of complete and reliable cost data, the agency had failed to include in its report $40 million in transition costs. As with human resources management, the reorganization effort within information technology is likely to significantly affect the future costs of providing information technology services, but the nature and extent of those effects are unclear because the long-term financial impacts and other aspects of this initiative have yet to be fully evaluated. Although the agency has taken steps to assess information technology costs, a March 2009 internal assessment of ISO performance and cost results highlighted the need for an in-depth, realistic cost model among its recommendations for additional analysis in connection with future information technology reorganization. For both human resources management and information technology, information on the future costs and intended benefits associated with efforts to reorganize and improve service delivery will be important in assessing the overall impact of these key initiatives, as well as trade-offs that may be necessary if resources are not available to fully implement the initiatives. 
Further, evaluating the initiatives’ success will depend, in part, on the agency’s ability to develop appropriate measures of cost-effectiveness and a methodologically sound approach for measuring and documenting results, which includes a realistic, in-depth cost-estimating model and appropriate, reliable cost data that take into account the initiatives’ potential long-term impact. Without such an approach, the Forest Service risks being unable to demonstrate, or even to determine, the cost-effectiveness of future efforts to deliver business services. The need for effective and efficient government operations has grown more acute in light of the federal deficit and the long-term fiscal challenges facing the nation, prompting government agencies, including the Forest Service, to consider new models for accomplishing their missions. For the Forest Service, consolidating business services formerly located across the nation, and increasing the reliance on sophisticated automated technologies, offered the promise of providing key business services in a more coordinated and streamlined fashion and at a lower overall cost to the agency. Although centralization of budget and finance services had to overcome short-term obstacles typical of institutional changes of this magnitude, centralizing these services generally worked well to bring greater coordination and consistency to many financial activities. But poor implementation hampered human resources management and information technology services over a longer period. For these services in particular, overreliance on a self-service model for tasks requiring specialized knowledge, automated systems that did not work as intended or were not user-friendly, and inconsistent support from customer-service help desks had unintended consequences, particularly on field-unit employees—with resulting impacts on the efficiency and effectiveness with which they could perform their mission-related activities. 
As the agency moves forward with its initiatives to redesign and reorganize its approach to delivering human resources management and information technology services, it will be critical for the agency to re-examine the extent to which a self-service approach is most efficient and effective for providing needed services. In doing so, the agency will need to better understand both the benefits and the investment required under alternative approaches for delivering business services. For those tasks and services where a self-service approach is discontinued in favor of direct provision by specialists, higher levels of service are likely to mean higher costs; for those tasks and services where a self-service approach is continued, potential cost savings may be partially offset by investment in more-effective and more user-friendly automated systems, help-desk support, and other tools essential to carrying out self-service tasks. In addition, although the Forest Service reported cost savings from centralization (albeit less than expected in the case of human resources management and ISO), the agency was unable to clearly demonstrate how its reported savings were determined and whether they were in fact fully realized. The agency is now devoting significant resources to its redesign and reorganization initiatives. The extent of additional resources needed to fully implement these initiatives remains unclear, however, in part because selected aspects of the initiatives—including their costs—have not been fully developed. Moreover, without complete and accurately prepared and maintained cost information to allow the agency to assess the cost-effectiveness of its efforts, including measures to be used to monitor actual results achieved, neither the Forest Service nor Congress can be assured that the initiatives’ costs can be objectively monitored or that decisions about how to provide business services in the future will produce cost-effective solutions. 
To maintain and strengthen the Forest Service’s delivery of business services and help ensure customer satisfaction and cost-effectiveness, and in conjunction with its current initiatives to redesign and reorganize the agency’s approach to delivering human resources management and information technology services, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to take the following three actions:

- Complete a systematic examination of the tasks associated with these two business services to determine (1) which tasks can be efficiently and effectively carried out under a self-service approach and (2) which tasks may require more direct support by specialists. In doing so, officials should assess the costs and benefits associated with each approach and consider the views of field-unit employees.
- On the basis of the results of this systematic examination, (1) document actions and implementation time frames for providing these business services in the most appropriate manner, and (2) ensure that the tools essential to carrying out any self-service tasks—including automated systems and help-desk support—are effective and user-friendly.
- Prepare and maintain complete and accurate cost-estimating information to (1) thoroughly assess the potential short- and long-term agencywide costs of implementing the current redesign and reorganization initiatives, and (2) develop and document methodologically sound measures to monitor the initiatives’ cost-effectiveness, so that results can be conclusively determined and objectively evaluated.

We provided the Secretary of Agriculture with a draft of this report for review and comment. In response, the Forest Service generally agreed with the report’s findings and recommendations and stated that the agency is committed to the continual improvement of its business services delivery and recognizes that changes may be needed to improve performance.
The Forest Service did not, however, specify the steps it will take to address our recommendations or the time frames for doing so. The Forest Service also provided technical comments, which we incorporated as appropriate. The agency’s written comments are reproduced in appendix II. We are sending copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Chief of the Forest Service, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This report examines the (1) types of effects centralization has had on the Forest Service and its employees, particularly in field units; (2) actions the Forest Service has taken to assess its delivery of centralized business services and to address identified shortcomings; and (3) extent to which the Forest Service can demonstrate that it achieved centralization’s intended cost savings. To examine the effects of centralization on the Forest Service and its employees, we reviewed guidance and policy documents, including early planning documents prepared before centralization for each of the three centralized business services: (1) budget and finance, (2) human resources management, and (3) information technology. We also examined numerous formal and informal reviews and assessments of centralization prepared by Forest Service staff and contractors, as well as past GAO reports on Forest Service operations, including reports on Forest Service budget and finance operations. 
In addition, we reviewed the results of various surveys and focus groups of Forest Service employees, conducted by Forest Service teams during 2010, as well as all customer comments provided through each of the business service help desks during 2010. We interviewed officials from Forest Service headquarters and the three business services at the Albuquerque Service Center to determine how centralization changed business service delivery, as well as to obtain their perspectives on positive and negative outcomes resulting from centralization. To gain field-unit perspectives, we interviewed—through site visits and by telephone—more than 200 agency officials from all nine regional offices, 12 national forests, 11 ranger districts, four research stations, four science laboratories, and the State and Private Forestry program. Our interviews included employees in a wide range of positions within the Forest Service, including forest supervisors, district rangers, fire management officers, budget officers, staff scientists, administrative officers, biologists, and recreation specialists, among many others. During these interviews, we obtained both general views and perspectives on the effects of centralization and specific examples, for which, in some instances, we also obtained supporting documentation. In addition, to systematically obtain information on the experiences of a geographically diverse and broad cross-section of Forest Service field-unit employees, we conducted 10 focus groups with a total of 68 randomly selected employees. These focus groups were structured small-group discussions, which were designed to gain in-depth information on the effects of centralization more systematically than is possible during traditional single interviews. The focus groups ranged from 4 to 11 participants in size, and all were conducted by telephone. To select participants, we drew a random sample of individuals from a database of all full-time Forest Service employees.
We excluded employees with fewer than 5 years of Forest Service experience to ensure that the focus groups were composed of employees with pre- and postcentralization experience. We then stratified this population into six groups according to supervisory status (nonsupervisory and supervisory) and general schedule (GS) levels (GS-2 through GS-15), so that each focus group consisted of employees with broadly similar levels of experience; we drew a total of 10 random samples from these six groups. For representation in approximate proportion to the total number of full-time employees in the agency, our 10 focus groups consisted of the following categories: one focus group of supervisory GS-2 through GS-8 employees, two focus groups of supervisory GS-9 through GS-11 employees, two focus groups of supervisory GS-12 through GS-15 employees, two focus groups of nonsupervisory GS-2 through GS-8 employees, two focus groups of nonsupervisory GS-9 through GS-11 employees, and one focus group of nonsupervisory GS-12 through GS-15 employees. Focus group discussions lasted 90 minutes to 2 hours and were guided by a trained moderator, who used a structured set of questions, generally asking participants to share their experiences regarding how centralization of each business service affected their work. In addition to the moderator, two GAO analysts recorded the information provided during the discussions. Ground rules were established so that participants limited their comments to experiences they had had personally, and we asked them to limit their discussion to experiences with business service delivery over the previous 12 months (the focus groups took place during February and March 2011). The moderator used a set of consistent, probing questions designed to ensure that all participants had an opportunity to share their views and to react to the views of the others.
These questions also helped ensure that topics were covered comprehensively; for instance, separate questions were asked about both positive and negative aspects of centralization for each business service. We also asked for specific examples and details to increase our confidence that the participants’ broader assessments of the effects were well founded. Our focus groups generated in-depth information that was consistent with the information we obtained through our reviews of formal and informal assessments of centralization and our interviews with field-unit employees. Although participants were randomly selected and represented a broad cross-section of employees, the results are not statistically generalizable. To systematically assess the information we obtained during the focus groups, we analyzed its content using content-analysis software, which allowed us to organize the information into various categories and themes. From this content analysis, we developed a model of employee experiences with centralized business services based on categories of participant responses. All information was initially coded by one GAO analyst and then reviewed separately by a second GAO analyst. We coded participants’ responses by splitting them into a series of categories, including categories corresponding to current conditions, perceived causes, and effects on day-to-day work. We established these categories by identifying natural clusters of employee responses. Our model of the employees’ experiences with centralization thus highlights the most common elements identified by employees in our focus groups, with each element in the model distinct from the other elements. The specific elements resulting from our content analysis of participants’ responses included the following:

- Characteristics of systems and processes included comments regarding the ease or difficulty of using automated systems, the clarity of forms, and the complexity of processes under centralization.
- Quality of customer support included comments regarding help-desk support, online guidance, or other support.
- Characteristics of individuals included comments regarding the nature of individual employees, including their prior experience, training, and job responsibilities.
- Characteristics of tasks included comments regarding the nature of the tasks being carried out, including the complexity and technical nature of the tasks.
- Quality of solutions included comments regarding the accuracy or completeness of the service provided by customer service help desks.
- Timeliness of solutions included comments regarding the speed with which tasks are completed.
- Effect on mission work included comments regarding what the changes have meant for on-the-ground work, such as firefighting, stream restoration, and research activities.
- Morale included comments regarding what the changes have meant for employees’ job satisfaction.
- Policies and procedures included comments regarding what the changes have meant for how well policies and procedures are being followed for carrying out business service tasks under centralization.

To determine what actions the Forest Service has taken to assess its delivery of centralized services and address identified shortcomings, we interviewed senior agency officials responsible for managing and overseeing the business services, including the Deputy Chief and Associate Deputy Chief of Business Operations, and senior officials from each of the three business services. We reviewed documentation prepared by Forest Service staff and contractors assessing various aspects of business service delivery, including one-time program reviews, surveys of field-unit employees, and results of employee focus groups.
We also reviewed a variety of ongoing assessment mechanisms developed by the business services, including service-level agreements and performance measures established for each business service and methods to solicit feedback from field-unit employees, such as customer help desks and business service Web sites. In addition, we interviewed several members of the agency’s Operations Customer Service Board, which monitors the performance of the Albuquerque Service Center, including the board’s chair and several members serving on specific board review teams, such as those tasked with overseeing service-level agreements and business service budgets. We reviewed documentation developed by the board, including its monthly meeting notes for 2010, annual meeting notes and related documentation for 2010 and 2011, and recommendation letters provided to the Chief of the Forest Service and the business service directors from 2006 through May 2011. To further assess steps the Forest Service is taking to address identified shortcomings, we reviewed documentation prepared by each business service, such as annual accomplishment reports and information developed and submitted to the Operations Customer Service Board. We also interviewed officials on the human resources management redesign and information technology reorganization teams and reviewed documentation related to those efforts, such as implementation plans. In addition, during our interviews with field-unit staff, we learned about agency efforts to address identified shortcomings and the results of steps taken to date. 
To examine the extent to which the Forest Service could demonstrate that it achieved centralization’s intended cost savings, we reviewed available documentation on the baseline costs of providing each of the business services before centralization, the projected costs for providing those same business services after centralization was complete, the actual costs of providing the business services after centralization, and estimates of cost savings contained in financial analyses comparing these data; we also reviewed internal and external assessments of the financial impact of centralization. Specifically, we reviewed the following:

- Available Forest Service documentation on the underlying data, assumptions, and methodologies for developing estimates of baseline costs and projected annual cost savings. For budget and finance and human resources management, these estimates generally came from business cases prepared as a part of early centralization-planning efforts; for information technology, from documentation developed through its competitive sourcing effort.
- Agency estimates of cost savings contained in congressional and agency leadership briefings on the status and results of centralization efforts from fiscal year 2005 through fiscal year 2007.
- Updated estimates of cost savings from fiscal year 2006 through fiscal year 2010, prepared by the agency at our request.
- Available documentation on actual costs, staffing changes, and other factors used by the agency to support its estimates of cost savings.
- Budget reviews by the agency’s Operations Customer Service Board.
- Status reports, business plans, strategy documents, and other related information prepared by each of the three business services.
- Assessments performed by Forest Service staff and external organizations, such as the National Academy of Public Administration, assessing human resources management and information technology centralization efforts.
- Prior GAO reports.
In addition, to gain further information on the Forest Service’s efforts to measure cost savings associated with business service centralization and to assess their reliability, we interviewed senior officials responsible for managing and overseeing the business services, including the Deputy Chief and Associate Deputy Chief of Business Operations, the Chief Financial Officer, and the directors of each of the three business services, as well as others from Forest Service headquarters, the three business services, and select field-unit offices. Agency officials, however, could not always provide sufficient documentation supporting the estimates contained in the information they made available to us, re-create or substantiate the methods used to calculate cost savings, or resolve inconsistencies in reported results. Because of these limitations, we were unable to verify the reliability of all cost estimates the agency provided to us. Moreover, given these limitations, we were unable to determine what steps, if any, the agency took to adjust its estimates for inflation. As a result, we were unable to consistently adjust all dollar values to constant dollars, and we therefore report all dollar amounts as provided to us by the agency. We conducted this performance audit from June 2010 to August 2011, in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Anu K. Mittal, (202) 512-3841 or mittala@gao.gov. In addition to the individual named above, Steve Gaty (Assistant Director), Mark A. Braza, Ellen W. Chu, Elizabeth Curda, Kay Daly, Sandra Davis, Alyssa M.
Hundrup, James Kernen, Michael Krafve, Michael LaForge, Mehrzad Nadji, Jackie Nowicki, David Powner, Jeanette Soares, and William Woods made key contributions to this report. | In the early 2000s, the Forest Service, within the Department of Agriculture, centralized the operations of three major business services: (1) budget and finance, (2) human resources management, and (3) information technology. The agency's goals in centralizing these services, which were previously delivered by staff in field units throughout the country, were to streamline and improve operations and reduce costs. Congressional committees directed GAO to independently analyze whether centralization had achieved intended efficiencies and cost savings. Accordingly, this report examines the (1) types of effects centralization has had on the Forest Service and its employees, particularly in field units; (2) actions the agency has taken to assess its delivery of its centralized business services and to address identified shortcomings; and (3) extent to which the agency can demonstrate that it achieved intended cost savings. GAO examined agency reports, performance studies, cost estimates, and other documentation and interviewed and conducted focus groups with employees across the agency. The Forest Service's centralization of business services contributed to several agencywide improvements, but it has also had widespread, largely negative effects on field-unit employees. For example, centralization consolidated and standardized agency financial systems and procedures, which helped alleviate some of the agency's long-standing problems with financial accountability, and helped it sustain clean financial statement audit opinions more easily, according to agency officials. Nevertheless, GAO found that centralization of human resources management and information technology services had many negative repercussions for field-unit employees. 
Under centralization, the agency relies on a self-service approach whereby employees are generally responsible for independently initiating or carrying out many related business service tasks. According to field-unit employees, these increased administrative responsibilities, coupled with problems with automated systems and customer support, have negatively affected their ability to carry out their mission work and have led to widespread employee frustration. The Forest Service has undertaken a number of actions to assess its delivery of centralized business services, but it is unclear whether proposed remedies will fully address identified shortcomings. For example, the agency established a customer service board to continually monitor service delivery and recommend improvements. The agency has also undertaken initiatives to redesign and reorganize its human resources management and information technology services to improve service delivery in these areas. For example, human resources management hired additional staff and established regional service teams, and information technology developed a strategic framework and is in the early stages of a significant reorganization. Nevertheless, the agency has not yet systematically assessed which types of services are best suited to a self-service approach, and because many of the agency's other initiatives are in their early stages, it is unclear to what extent they will address identified shortcomings. The Forest Service could not reliably demonstrate cost savings resulting from centralization, but the agency estimated that anticipated savings may have been achieved in budget and finance. Achieving significant cost savings was one of the key goals of the agency's centralization effort, and the agency estimated it would save about $100 million annually across the three business services. 
(This estimate applied to budget and finance, human resources management, and a component within information technology known as the Information Solutions Organization, which was established to provide technology support services.) But because of limitations with the agency's documentation supporting the data, assumptions, and methods used in developing its cost information both before and after centralization, GAO was unable to fully ascertain the reliability of the cost estimates for (1) baseline costs of providing each of the business services before centralization, (2) projected costs for delivering those same business services after centralization was complete, or (3) actual costs of providing the business services after centralization. Nevertheless, the Forest Service estimated that anticipated annual savings through fiscal year 2010 may have been achieved in budget and finance but not in human resources management or the Information Solutions Organization, where the agency estimated that savings fell far short of its cost-savings goals. GAO recommends that the Forest Service systematically examine business service tasks to determine which ones can best be carried out under a self-service approach, take related steps to improve service delivery, and adequately document and assess the costs of current initiatives and business service delivery. The Forest Service generally agreed with GAO's findings and recommendations. |
Under the Stafford Act, FEMA may provide temporary housing units (such as travel trailers and mobile homes) directly to disaster victims who are unable to make use of financial assistance to rent alternate housing accommodations because of a lack of available housing resources. The act limits this direct assistance to an 18-month period, after which FEMA may charge fair market rent for the housing unless it extends the 18-month free-of-charge period due to extraordinary circumstances. To manage this post-disaster housing, FEMA typically has in place a contingency technical assistance contract. However, when Katrina made landfall in August 2005, FEMA was in the process of competing this contract—bids had been solicited and evaluated, but no contract was in place. Therefore, FEMA awarded “no-bid” contracts to four major engineering firms (Bechtel Corporation, Fluor Corporation, the Shaw Group Incorporated, and CH2M Hill Incorporated) for, among other things, the support of staging areas for housing units, installation of housing units, maintenance and upkeep, site inspections and preparations, site restoration, group site design, group site construction, site assessments, property and facility management, as well as housing unit deactivation and rehabilitation. In total, FEMA made almost $3 billion in payments to Bechtel, Fluor, Shaw, and CH2M Hill from September 2005 to January 2007. After much public criticism and investigations of the costs claimed by the four contractors, FEMA solicited proposals for new contracts for the maintenance and deactivation (MD) of mobile homes and trailers and for group site maintenance (GSM). Mississippi Maintenance and Deactivation Contracts: In November 2005, FEMA posted two solicitations indicating its intent to award multiple contracts for the maintenance and deactivation of manufactured homes and travel trailers. 
One solicitation was set aside for small businesses and the other was designated for 8(a) business development concerns (small businesses owned by socially and economically disadvantaged individuals). The solicitations for the small business and 8(a) awards were essentially the same, with each requiring prospective bidders to submit a technical and a business proposal listing their price for each of 37 contract line items. Additionally, in order to provide preference to local businesses, FEMA notified bidders that the proposed total price for any nonlocal business would be increased by 30 percent for price evaluation purposes. In May 2006, FEMA awarded five contracts to small businesses and five to 8(a) business development concerns. Each award was an indefinite delivery/indefinite quantity fixed-price type contract with a 5-year term, and each had a guaranteed minimum of $50,000 and a maximum funding limitation of $100 million. In total, nine businesses received these awards because one business received two awards—one as a small business and one as an 8(a) business concern. In addition, of the 10 awards, 8 went to businesses classified as local for price competition purposes and 2 went to companies that FEMA deemed nonlocal. FEMA also awarded similar maintenance and deactivation contracts in Louisiana, Alabama, and Texas. In May 2006, following award of the Mississippi MD contracts, FEMA issued two task orders to each of the 10 awardees. The initial task order for each contractor initiated a phase-in period for contract ramp-up. The cost of each contractor’s phase-in period was based on the amount agreed to in its contract. FEMA obligated the amount for the initial phase-in cost proposed by each MD contractor, which ranged from a low of $23,220 to a high of $6,111,000. The second task order provided an estimated quantity and projected dollar amount for each of the contract line items for the first 11 months of performance.
Those task orders stated that the estimated usage was a “good faith estimate on the part of the government and was developed solely to arrive at an estimated total for the task order.” The amount obligated for each of those “good faith estimates” was between $19.2 million and $20.6 million, for a total obligation amount of over $200 million. FEMA elected not to compete the task orders among the 10 contractors, nor did it consider price or cost under each task order as a factor in its source selection decision. However, both the MD contract and the FAR state that a contracting officer must provide each contractor with a fair opportunity to be considered for each order issued under multiple task order contracts. The FAR further states that the contracting officer may exercise “broad discretion” in developing task order issuance procedures, as long as these procedures are fair, included in the solicitation, and factor in price or cost. Mississippi Group Site Maintenance Contracts: In May 2006, FEMA posted its intent to award multiple contracts for group site maintenance. These contracts were set aside exclusively for service-disabled veteran-owned small businesses and were further limited to proposing firms residing in or primarily doing business in Mississippi. The solicitation required each submitter to provide a price for maintaining group sites at various threshold sizes, including sites with fewer than 50 trailer pads, 51 to 100, 101 to 300, 301 to 600, and 601 or more. FEMA awarded these contracts in September 2006 and also awarded similar group site maintenance contracts in Louisiana. Temporary Housing Occupancy Extension: In April 2007, FEMA extended the temporary housing assistance program for hurricane victims living in trailers and mobile homes until March 2009.
Beginning in March 2008, individuals residing in these units will pay a portion of the cost for rent, which will begin at $50 per month and incrementally increase each month thereafter until the program concludes on March 1, 2009. FEMA also began allowing residents of its mobile homes and travel trailers to purchase their dwellings at a fair and equitable price; however, on August 1, 2007, FEMA temporarily suspended sales while the agency works with health and environmental experts to assess health-related concerns raised by occupants. FEMA wasted as much as $16 million because it did not allocate task orders under the MD contracts to the companies with the lowest prices. Despite extraordinary pricing differences for the same services among the 10 MD contractors, FEMA issued task orders to all 10, spending $48.2 million from June 2006 through January 2007 on the five contract line items that generated the most cost. If FEMA had instead issued task orders to only the five contractors with the lowest overall bid prices, it would have spent only an estimated $32.5 million on these five line items. The scope of the work under the MD contracts primarily covered monthly trailer preventative maintenance, emergency repair, and unit deactivation and removal. Further, as stipulated in the contracts, each company receiving an award “must be prepared to perform the work anywhere in the region.” In response to FEMA’s solicitations, the contractors provided a wide range of price proposals for identical services—from about $90 million to $300 million—as shown in table 1. FEMA issued task orders to all 10 contractors for the first year of the contract, assigning each about 3,000 trailers. FEMA paid these 10 contractors about $51.2 million from June 2006 through January 2007, spending 94 percent of that amount—$48.2 million—on just five of the 37 line items in the contract.
These line items include monthly preventative maintenance, contractor phase-ins, deactivations, emergency after-hours repairs, and septic cleaning services. The contractors’ bids for these specific line items also varied widely. Table 2 shows the high and low bids for each line item. Despite these extreme price variances, FEMA did not establish procedures for the most cost-efficient distribution of work. Both the MD contract and the FAR state that a contracting officer must provide each contractor with a fair opportunity to be considered for each order issued under multiple task order contracts. The FAR further states that the contracting officer may exercise “broad discretion” in developing task order issuance procedures, as long as these procedures are fair, included in the solicitation, and factor in price or cost. According to the MD solicitation and contract, FEMA considered “geographic locations and transportation concerns” when assigning work, but FEMA did not include procedures for factoring in cost in either of these documents. We asked FEMA to provide us with more detail about its task issuance procedures, but it did not respond, except to reiterate during an interview that it was primarily concerned with who was already performing the work (some of the MD contractors had previously subcontracted with the original four firms) and the contractors’ transportation issues and office locations. Absent any other information from FEMA regarding the procedures it used to issue task orders to the 10 MD contractors, we concluded that FEMA did not adequately consider cost, resulting in as much as $16 million in waste. As shown in figure 1, if FEMA had instead issued task orders to the five contractors with the lowest overall bid prices, it would have spent only about $32.5 million on the five most expensive line items.
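As a quick check on the report’s headline figure, the gap between what FEMA actually paid on the five costliest line items and the estimated cost under the five lowest bidders can be computed directly. This is a minimal sketch using only the dollar totals reported above; the variable names are ours, not FEMA’s:

```python
# Totals reported for June 2006 through January 2007, in dollars.
paid_all_ten = 48.2e6       # paid to all 10 MD contractors on the 5 costliest line items
five_lowest_est = 32.5e6    # estimated cost had only the 5 lowest-priced bidders been used

waste = paid_all_ten - five_lowest_est
print(f"Potential waste: ${waste / 1e6:.1f} million")  # ~$15.7 million, i.e., "as much as $16 million"
```

The difference, about $15.7 million, is what the report rounds to "as much as $16 million."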
Because FEMA did not reassign task orders under the MD contracts until June 2007—the second year of the contract—it likely wasted millions more on these line items from February through May 2007. As detailed in the figure, had FEMA made contract awards to only the five lowest bidders, it could have saved as much as:

- $10.2 million in preventative maintenance costs. FEMA spent about $28.5 million for preventative maintenance on all the units in Mississippi from June 2006 through January 2007. If FEMA had awarded the MD contracts to the five companies with the lowest overall bid price, the cost for trailer and mobile home maintenance would have been approximately $18.3 million.
- $3.2 million on phase-in costs. FEMA spent $6.5 million on one-time phase-in costs for all 10 MD contracts. However, if FEMA had used only the five companies with the least expensive bids, the total cost for phase-in would have been over $3.2 million.
- $930,000 on unit deactivations. FEMA spent just over $7 million on about 10,000 deactivations from June 2006 through January 2007. If FEMA had awarded the MD contracts to the least expensive companies, the cost for these deactivations would have been approximately $6.1 million.
- $620,000 in after-hours emergency repairs. FEMA spent almost $2.2 million on emergency after-hours service calls. If FEMA had awarded the contract to the five least expensive companies, it would have spent approximately $1.6 million.
- $690,000 in septic cleaning costs. FEMA spent almost $4 million on septic cleanings from June 2006 through January 2007, but would have spent about $3.3 million if it had awarded the contracts to the less expensive companies.

In addition to having the lowest prices, these five contractors also had the ability to maintain more than the 3,000 trailers they were originally assigned.
Specifically, FEMA required companies to submit bids for the MD contracts based on the premise that they could each be assigned about 6,700 units that could have been located throughout the entire state. Prior to awarding the contracts, FEMA determined that each of these five companies did in fact have the technical ability to maintain at least 6,700 temporary housing units. Therefore, these five would have been capable of collectively performing maintenance for the estimated 30,000 trailers and mobile homes in Mississippi at the time of the award. From June 2006 through January 2007, we estimate that FEMA made approximately $16 million in improper or potentially fraudulent payments to the MD contractors based on invoices that should not have been approved, according to its own payment process. This amount includes about $15 million in payments made for preventative maintenance—which includes a required monthly inspection—and over $600,000 in payments for emergency after-hours repairs. With regard to preventative maintenance, we estimate that FEMA paid the MD contractors about $15 million even when the trailers being inspected could not be located in FEMA’s own databases, the supporting inspection documentation required by the contract did not exist, or the documentation showed that the contractor did not perform a complete inspection. This $15 million includes $2.2 million identified through a review of contractor billing records and $13 million identified through estimates calculated from a statistical sample. With regard to emergency after-hours repairs, we found that FEMA spent over $600,000 on these repairs even though the invoices should not have been approved because the housing units do not exist in FEMA’s inventory. We could not conduct any additional tests concerning the validity of payments FEMA made for these emergency repairs because the data we received were incomplete. 
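The $13 million estimate above was projected from a statistical sample. This report does not reproduce GAO's estimator, but a simple proportion-based projection illustrates the general approach; the sample failure count below is hypothetical and chosen only so the arithmetic mirrors the reported result:

```python
# Hypothetical illustration of a proportion-based sample projection.
# GAO's actual sample results appear in table 3; the failure count
# here is assumed, not taken from that table.
sample_size = 250
failed_in_sample = 125  # assumed: inspections lacking valid support

failure_rate = failed_in_sample / sample_size     # 0.5
remaining_payments = 26_000_000                   # the remaining $26 million

# Projected improper payments in the sampled population.
estimated_improper = failure_rate * remaining_payments
```

With an assumed failure rate of 50 percent, the projection against the remaining $26 million in payments reproduces the reported $13 million estimate.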
Because of FEMA’s failure to adequately review inspection documentation submitted by the MD contractors, we estimate that about 50 percent of the $28.5 million in payments FEMA made for preventative maintenance were based on improper or potentially fraudulent invoices that should not have been approved. Specifically, based on a review of contractor billing records, we found that FEMA spent $2.2 million for preventative maintenance even though there was no documentation to support that the required monthly inspections had occurred. Further, as a result of our testing of a statistical sample of inspection documentation associated with the remaining $26 million in payments, we estimate that FEMA spent an additional $13 million based on invoices that should not have been approved. We also confirmed allegations that contractors received payments for monthly preventative maintenance even though their inspectors falsified inspection documentation. According to the terms of the contract and inspection forms provided by FEMA, MD contractors are responsible for routine repairs and for inspecting interior and exterior unit components. These components include the plumbing, electrical, and heating and cooling systems; panels, siding, windows, screens, and doors; and all appliances. According to FEMA, MD contractors must perform one preventative maintenance inspection per month in order to submit a valid invoice for unit maintenance. Furthermore, as specified by the terms of the contract, contractors must maintain records to document that the inspection was performed. After the contract awards, FEMA provided the contractors with a temporary housing unit inspection sheet (see app. II). Once completed, this inspection sheet should contain the following:

The trailer’s FEMA-issued barcode (noted as “temporary housing unit no.” on the form). It should be noted that MD contractors told us that the barcode information they received from the original contractors was incomplete and that they had trouble determining which trailers they were assigned.

A checklist of interior components inspected.

A checklist of exterior components inspected.

The trailer occupant’s signature verifying that both the interior and exterior inspection occurred.

According to our discussions with FEMA, if a unit occupant is not home to sign the inspection sheet (and therefore the inspector does not have access to the interior components of the unit), the inspector is required to make at least two additional attempts to conduct a complete inspection. If the occupant is still not available to sign the inspection sheet or allow access to the interior of the unit, the inspector must note on the sheet that three attempts were made to complete the work in order to submit a valid invoice for payment. All of the contractors confirmed that FEMA told them to make three attempts to inspect a unit prior to submitting an invoice for payment, even though this requirement is not stated in the contract. As shown in figure 2, FEMA’s payment process is well designed and, if followed, provides reasonable assurance that payments are being made for work actually performed. As detailed in the figure, the Contracting Officer’s Technical Representative (COTR) is supposed to check the accuracy of both the contractors’ calculations and the supporting documentation associated with a “random sample” of barcodes. If the COTR finds any errors as a result of this sample, he or she must conduct accuracy checks on all of the invoices submitted by the contractor for that particular line item. Prior to submitting the invoice to FEMA’s Disaster Finance Center for processing, the COTR is to check for duplicate billings and verify that work was not performed on trailers that had been deactivated.
During the course of our investigation, we found instances where FEMA’s COTRs adhered to this process and did not approve payments because they identified inaccurate calculations or duplicate invoices. However, our review of contractor billing records and testing of a statistical sample of inspections also shows that FEMA paid the MD contractors even though there was insufficient documentation that work had been performed, making it difficult to believe that the COTRs were consistently conducting the accuracy checks specified in figure 2. From June 2006 through January 2007, available records indicate that FEMA made about $28.5 million in preventative maintenance payments for over 180,000 inspections. Based on our initial analysis of billing records related to 12,000 of these inspections, we confirmed that FEMA should not have approved about $2.2 million in payments. Specifically, we reviewed approximately 90 preventative maintenance invoices submitted by the MD contractors from June 2006 through January 2007. Most of these invoices contained approximately 1,000 to 3,000 monthly inspection billings. As a result of this review, we identified billings for about 12,000 inspections that did not contain any documentation to support that an inspection had actually occurred. Despite this lack of supporting documentation, FEMA paid the contractors for these inspections. Using the contractors’ pricing information, we determined that the payments for these 12,000 inspections totaled approximately $2.2 million. Based on our testing of a statistical sample of the remaining $26 million in preventative maintenance payments, we estimate that FEMA made $13 million in payments even though the trailer barcode listed on the inspection sheet did not match a barcode listed in FEMA’s tracking system or the required inspection sheet did not exist. 
This amount also includes payments for incomplete inspections, i.e., when the inspection sheet did not contain the trailer occupant’s signature to document that an interior and exterior inspection had been performed or the sheet showed no indication that the contractor had made three attempts to perform a complete inspection. We analyzed a statistical sample of 250 from a population of about 170,000 inspections submitted by the MD contractors and paid for by FEMA from June 2006 through January 2007. Table 3 shows the results of our sample. Even if payments were supported by proper inspection documentation, we found indications that the paid-for inspections were not always performed. As shown by the following three cases, we confirmed allegations that inspectors performed impossibly large numbers of inspections in 1 day or otherwise falsified maintenance inspection documentation. We have referred all three of these matters to the Department of Justice and the DHS IG for further investigation and we have notified the Katrina Fraud Task Force about our findings. Case 1: We confirmed that inspectors for one contractor billed and were paid for excessive numbers of inspections that supposedly took place during the course of 1 work day. As previously stated, MD contractors are responsible for interior and exterior unit inspections. These inspections include checking the plumbing, electrical, and heating and cooling systems; panels, siding, windows, screens, and doors; and all appliances. According to several contractors we interviewed, the number of inspections that an inspector can reasonably complete during the course of 1 day is about 25—approximately 1 every 20 minutes during an 8-hour work day. This number assumes that the units are in good condition, located fairly close together, and that the inspector does not have to make any repairs or experience any other delays related to occupant issues.
However, we identified numerous cases where individual inspectors billed for around 50 inspections during the course of 1 day. In order to complete 50 inspections during an 8-hour work day, these inspectors would have had to perform one inspection every 10 minutes, without factoring in driving time, meals, or restroom breaks. In another case, an inspector claimed to have conducted 80 inspections in 1 day, or the equivalent of 1 inspection every 6 minutes. When we interviewed the contractor, he acknowledged that there were “many problems” with the subcontractor who performed these excessive inspections, and he also stated that he fired this subcontractor. At the time of our interview, this contractor had not returned to FEMA any of the payments he received for these inspections. Case 2: Another MD contractor’s inspectors falsified inspection reports by signing for work they had not completed. Three inspectors employed by this contractor told our investigators that their supervisor asked them to fill out or sign blank inspection forms. According to the inspectors, their supervisor told them that the inspections had actually been performed, but that the paperwork documenting the inspections needed to be redone. However, the inspectors told our investigators that they had not performed the work on any of the inspections. When we spoke with the attorney representing the contractor about these claims, he stated that there were about 30 trailers that were inspected but for which no documentation had been filled out at the time of the inspection. He then admitted that some inspectors had been asked to recreate this documentation. During the course of our interview with the attorney, he also claimed that FEMA instructed his client to bill for the number of trailers that they had been assigned, regardless of whether an inspection had been performed. None of the other contractors stated that they billed for units assigned instead of work performed.
When we asked the contracting officer in charge of the Mississippi MDs about this issue, she told us that a contractor must perform at least one preventative inspection per month on each trailer that it has been assigned in order to submit a valid bill for preventative maintenance. Case 3: An inspector employed by a different MD contractor told our investigators that she left the company after finding several maintenance inspections that had her name signed to them by another employee. The inspector provided our investigators with three inspection sheets that she insisted she did not sign. When our investigators confronted the supervisor with these allegations, she admitted that she had forged the inspection sheets. Although we initially intended to test the $2.2 million in payments FEMA made for after-hours emergency repairs, we could not conduct this work because the data we received concerning these calls did not contain complete information. However, we were able to determine that FEMA spent over $600,000 for emergency repairs even though the invoices for these repairs should not have been approved because the housing units do not exist in FEMA’s databases. FEMA’s records show that it paid for 12,045 after-hours emergency calls on 7,310 housing units from June 2006 to January 2007, for a total of $2.2 million in emergency repair payments. As part of our work, we attempted to test whether these payments were made for valid emergencies. To qualify as an emergency during the period of our review, a call had to have been received by FEMA’s call center between 5:00 p.m. and 8:00 a.m. Monday through Friday or on weekends. In addition, according to the FEMA call center instructions, emergency maintenance involves, but is not limited to, requests to repair gas leaks, major water leaks, sewage leaks, major electrical malfunctions, lack of heat when the outside temperature is under 50 degrees, or lack of air conditioning when the outside temperature is over 85 degrees.
The call center was supposed to document relevant requests, verify the emergency, and then forward the request to the MD contractor responsible for the unit. However, when we reviewed the call center data, we found that the records related to emergency calls were not complete, and therefore we could not determine whether the contractors submitted billings for valid emergency calls or whether FEMA made payments for calls that met its emergency criteria. Specifically, FEMA’s database did not identify:

The time and date the call was received. Although FEMA’s call center received 46,000 emergency calls from June 2006 through January 2007, over 21,000 of these call records lacked a time designation. Therefore, we could not ascertain whether calls should have been billed and paid for as emergency repairs.

Which contractor was assigned the call and which calls resulted in billable services. Although FEMA’s call center received 46,000 emergency calls, data we received from the contractors show that they billed FEMA for only about 12,045 emergency repairs. Therefore, although we have FEMA’s records on calls received and payments made, we cannot reconcile this payment information with the contractors’ invoices.

Despite these discrepancies, we were able to determine that FEMA spent over $600,000 for emergency after-hours repairs on units that cannot be found in FEMA’s inventory. As previously stated, FEMA paid for 12,045 after-hours emergency calls on 7,310 housing units from June 2006 through January 2007. When we compared the unit barcodes associated with these 7,310 units with the barcodes listed in FEMA’s main database for tracking the assignment and location of mobile homes and trailers, we were unable to identify records for 1,732 of the 7,310 units. Records show that FEMA made 2,780 improper or potentially fraudulent emergency repair payments related to these 1,732 trailers.
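The comparison described above, matching the unit barcodes on paid emergency-repair invoices against FEMA's inventory database, is essentially a set-membership check. The sketch below illustrates the approach; all barcodes and dollar amounts are hypothetical:

```python
# Minimal sketch of the barcode-matching check described above: flag
# payments whose unit barcode has no record in the inventory database.
# All barcodes and amounts here are hypothetical, for illustration only.
inventory_barcodes = {"BC-1001", "BC-1002", "BC-1003"}

payments = [  # (unit barcode, payment amount)
    ("BC-1001", 210.00),
    ("BC-9999", 245.00),   # no matching inventory record
    ("BC-1003", 180.00),
    ("BC-8888", 245.00),   # no matching inventory record
]

# Payments on units that cannot be found in the inventory.
unmatched = [(bc, amt) for bc, amt in payments if bc not in inventory_barcodes]
questioned_cost = sum(amt for _, amt in unmatched)
```

Applied to the actual data, this kind of check flagged the 2,780 payments on 1,732 unmatched units discussed above.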
Using the contractors’ pricing information, we calculated that these 2,780 payments totaled over $600,000. Our four case studies show that FEMA’s placement of travel trailers at group and commercial sites can lead to excessive costs. FEMA placed the temporary housing units on private properties to shelter individuals who were rebuilding their homes; at FEMA-constructed group sites at leased locations, such as stadium grounds and school fields; and at preexisting commercial sites (e.g., trailer parks). With regard to the private sites, FEMA only has to pay for installation, maintenance, and deactivation; the trailer can be hooked up to the property’s existing utilities, so no trailer pad is required. With regard to the group sites, FEMA understandably has had to pay extra for site construction and maintenance, security, leases, and utilities. However, our case studies show that these expenses are exacerbated by the fact that FEMA did not allocate work at the sites in a cost-effective manner and has not reevaluated this allocation since the sites were established. With regard to the commercial sites, FEMA has not incurred the same operational expenses that it has at the group sites because FEMA did not have to pay for pad construction and design and does not have to pay the GSM contractors for site maintenance. However, we found that FEMA’s mismanagement of the commercial site we investigated has led to substantial waste. The majority of FEMA housing units in Mississippi are located on private properties where individuals are rebuilding their homes. According to FEMA, almost 14,000 of the 17,608 units currently in Mississippi are located on private sites, while the remainder are located at group or commercial sites. We estimate that, on average, FEMA will spend approximately $30,000 for the life cycle of a trailer placed at one of these private sites.
As shown in figure 3, FEMA paid about $14,000 to purchase each 280-square-foot trailer and $12,000 to haul the trailer to the site and install it, and will spend an additional $4,000 to maintain a private site trailer through the March 2009 temporary housing occupancy extension. Our estimate is likely understated because we did not have access to the trailer maintenance and group site maintenance payments made to the original four contractors. We also could not calculate MD phase-in costs, nor could we project deactivation expenses because it is not certain which of the current MD contractors will be responsible for deactivating the trailers in 2009. In contrast, as shown in table 4 and the subsequent figures, FEMA could spend from about $69,000 to $229,000 for trailers at the three group sites we investigated, when factoring in all known expenses, including costs incurred by the original four contractors for site design and construction and unit installation. Part of the reason for these extreme expenses is that FEMA failed to efficiently allocate work at the sites. For example, FEMA wasted about $800,000 by inefficiently allocating trailers and pads and also could not explain why it spent over $204,000 per year to lease one group site when most of the other parks only cost about $30,000 per year to lease. However, because data provided by FEMA contained numerous discrepancies, we could not account for all the expenses incurred at these sites. In particular, although we were able to determine the number of trailer pads at each site, FEMA could not provide us with an accurate trailer count. For purposes of our analysis, we assumed that the parks were operating with a trailer on each available pad. We also did not have accurate information about utility payments FEMA made for these specific sites and the trailers.
As with the trailers at the private sites, our estimate is likely understated because we did not have access to the trailer and site maintenance payments made to the original four contractors and because we could not calculate MD phase-in and deactivation expenses. In addition, we do not know how much it will cost to return the group sites to their original condition, as required by the terms of the group site leases. Port of Bienville Industrial Park in Hancock County: Figure 4 shows the breakdown of expenses per trailer at this park through March 2009. Because there are only eight pads at Bienville, FEMA will spend about $229,000 for each trailer at the park through the March 2009 occupancy extension. Group site maintenance costs are dependent on the size of the site—“small” sites contain 50 trailer pads or less. In other words, FEMA wastes money by operating sites with very few pads because the GSM costs will be the same if a park has 1 trailer pad or 50. In this case, FEMA spends over $576,000 per year—$72,000 per trailer—for site maintenance. To save on this expense, FEMA could have assigned this park to the GSM contractor with the lowest bid price to service a small park. This contractor would only have charged FEMA about $76,000 per year to service Bienville—$9,500 per trailer. When we asked FEMA officials about the distribution of work at the sites, they told us that they “grasped” what pads they could get in the aftermath of the storm. FEMA did not indicate that it has reevaluated the distribution of work at the sites since that time. Sunset Ingalls Park in Jackson County: Figure 5 shows the breakdown of expenses per trailer at this park through March 2009. Sunset Ingalls has 102 trailer pads and is therefore classified as a large park (101 to 300 pads) for GSM purposes. FEMA pays the GSM contractor about $500,000 per year for maintenance at a large park, as opposed to $244,000 to service a medium-sized park with 100 pads or less.
Therefore, the additional two pads increase the GSM costs for this park by almost $260,000 per year. To save on this yearly cost, FEMA could have originally placed these two pads at another site with available space—there are five group sites and one commercial site located near Sunset Ingalls. When we asked FEMA officials about the distribution of work at the sites, they told us that they “grasped” what pads they could get in the aftermath of the storm. FEMA did not indicate that it has reevaluated the distribution of work at the sites since that time. Ellzey Parcel in Harrison County: Figure 6 shows the breakdown of expenses per trailer at this park through March 2009. FEMA pays the landowner $17,000 per month, or $204,000 annually, to lease the property for this large group site, which contains 170 trailer pads. This lease amount is significantly higher than at the other 38 group sites, which typically range in cost from $250 to $7,500 per month. When we asked FEMA why it was spending so much to lease this property in comparison to the other sites, officials told us that FEMA did not evaluate costs associated with group site leasing because the General Services Administration (GSA) set up the leases. When we asked representatives from GSA about the Ellzey lease, they told us that $204,000 per year was a reasonable price because the site was located on industrial property, but they could not tell us if a less expensive option was considered. With regard to the commercial sites, table 5 shows the estimated cost per trailer at one commercial park in Mississippi. FEMA could have saved $1.5 million at this site if it had exercised an option to reassign or contract separately for septic cleaning services. McLeod Water Park in Hancock County: Figure 7 shows the breakdown of expenses per trailer at this park through March 2009.
The MD contractor at this park charged FEMA $245 per septic service, or more than 500 percent of what FEMA could have paid, to provide septic cleanings to the approximately 61 trailers at the park. In total, FEMA paid the contractor about $1.8 million for this service because the cleanings were provided 3 times per week per trailer over the course of a year. However, this contractor made a profit of almost $1.5 million on these cleanings because it paid a subcontractor just $45 per cleaning to actually perform the work. FEMA could have saved this $1.5 million by awarding a separate contract for the septic cleaning services to the less expensive subcontractor; the septic bladder line item specifies that “FEMA reserves the right to use other sources to complete the work.” However, FEMA did not exercise this option. When we asked the MD contractor about this high profit margin, he said that officials from FEMA were aware of the situation but told him they “did not care about the profit margin.” According to an August 2007 report, FEMA’s current “exit strategy” for residents at the group and commercial sites involves partnering with the Department of Housing and Urban Development (HUD) to assist in locating rental properties for applicants through HUD’s National Housing Locator System (NHLS). In addition, Congress has provided $400 million for the Alternative Housing Pilot Program (AHPP) to develop and evaluate alternatives to travel trailers and mobile homes. However, it is still uncertain what will happen to those residents who continue to need housing assistance beyond the March 2009 trailer and mobile home occupancy extension. During the course of our work on the MD and GSM contracts, we found that FEMA awarded GSM contracts to two companies that did not appear to have submitted independent bids and that also made false statements on proposals submitted to FEMA.
We also found that a FEMA contracting officer may have improperly awarded the UFAS contract to make the housing units accessible to individuals with disabilities, resulting in $3 million in unnecessary expenses. We have referred both of these matters to the Department of Justice and the DHS IG for further investigation and we have notified the Katrina Fraud Task Force about our findings. FEMA awarded GSM contracts to two companies that did not appear to have submitted independent bids and that also made false statements on proposals submitted to FEMA. As previously discussed, FEMA awarded five GSM contracts in Mississippi. In reality, FEMA awarded one business two contracts: one contract as a “single entity” and one as part of a “joint venture” with another firm. Although making this type of award is not prohibited, the circumstances surrounding this case merit further investigation. Specifically, both the “single entity” and the “joint venture” are required to adhere to the Certificate of Independent Price Determination, as set forth in the contract solicitation. By signing the certificate, each bidder affirms that it has arrived at its price independently and has not disclosed its bid to competitors. Despite the fact that the single entity and the joint venture both signed this certification, our evidence shows that the companies may not have been truly independent, as might be expected given their common employees and business relationships. We also found that key personnel at both companies admitted to misrepresenting their job titles and functions in final offers submitted to FEMA, a potential violation of the False Statements Act, 18 U.S.C. §1001. Details of the case follow: Both proposals contained identical language. We found that both companies hired the same individual to prepare their proposals. 
This individual admitted that he “cut and pasted” language between the two submissions and also that he provided the single entity a copy of the joint venture’s bids prior to the submissions to FEMA. In addition, the joint venture’s chief operating officer admitted that he discussed the joint venture’s bids with the president of the single entity prior to submission. The single entity and the joint venture submitted line item bids that were frequently identical or within a few hundred dollars. In their initial proposals, the single entity and the joint venture provided organizational charts with nearly identical personnel. For example, both companies had the same president, executive vice president, and accountant. After FEMA received the initial proposals, the contracting officer told both companies that he was concerned with the overlapping personnel and the similar pricing in the submissions. In their best and final offers, the companies submitted new organizational charts on which the president and executive vice president roles were now filled by different people. However, the president of the single entity admitted that she was president of both companies, despite being removed from the joint venture’s initial organizational chart. In addition, the individual listed as “operations manager” for the single entity admitted that he does not really act in that capacity and then remarked to our investigator that, with regard to the new organizational structure, “it’s obvious that we just reshuffled the deck.” The contracting officer stated that the submission of the new organizational charts in the best and final offers submitted by the companies allayed his concerns about whether the companies were operating independently. He also indicated that it is not FEMA’s job to “police” whether organizational charts are accurate or to investigate whether companies adhered to the certificate of independent price determination.
In response to our referral, Justice has decided to open an investigation of this matter. We found that one of FEMA’s contracting officers may have improperly awarded the UFAS contract to lay asphalt to make the travel trailers accessible to individuals with disabilities, leading to over $3 million in unnecessary expenses. FEMA was required to make the trailers accessible as part of a September 2006 settlement agreement stemming from a lawsuit brought by disabled trailer occupants. Unlike the MD and GSM contracts, this UFAS contract was set aside by the FEMA contracting officer for sole-source negotiation with a local 8(a) firm. At the time of the UFAS award process, 8(a) contracts could be awarded without competition if the anticipated total value of the contract was less than $3 million. According to the Federal Acquisition Regulation (FAR), an 8(a) contract may not be awarded if the cost to the agency exceeds a fair market price. Further, the FAR provides that prior to making sole-source 8(a) awards, a contracting officer must estimate and justify the fair market value of the contract, using cost analyses or other available data. The FAR also states that the appearance of conflicts of interest in government-contractor relationships should be avoided. Given these criteria, the contracting officer may have improperly awarded the contract, costing taxpayers over $3 million in unnecessary expenses. The government estimate to complete the UFAS asphalt work for about 150 trailers was $2.99 million, just under the $3 million threshold for awarding 8(a) contracts noncompetitively. In response to our request for additional information, FEMA said that it was not able to locate any documentation to support how this estimate was derived. Therefore, we asked GAO engineers with over 30 years’ experience to estimate the costs associated with laying asphalt at the sites.
Although they did not visit these sites, the engineers used the information available from the contractor’s price proposals to estimate that, in the Biloxi, Mississippi, region, this work should have cost only about $800,000. The company’s initial bid, submitted on October 4, 2006, was around $3.2 million, just over the 8(a) competitive threshold and four times the expert estimate of what the work should have cost. FEMA awarded the contract the very same day for $2.9 million; it appears that the contracting officer deleted 4 of the 33 bid items in order to keep the award amount under $3 million. Then, on November 1, 2006, less than a month after the award, the contracting officer modified the contract to add back one of the dropped line items and to increase the total award by almost $750,000, about 25 percent of the original award value. Two more modifications followed, on December 21, 2006, and January 31, 2007. The total value of the contract ultimately reached just over $4 million, five times the expert estimate to perform the work. Figure 9 shows the timeline for the initial award and subsequent modifications. Due to the unprecedented nature of the disasters resulting from the 2005 gulf coast hurricanes, it was understandable that FEMA did not immediately have effective systems in place to efficiently allocate work or to track the invoices submitted by the contractors for maintaining thousands of mobile homes and travel trailers. However, over 2 years have passed since the storms, and FEMA is still wasting tens of millions of taxpayer dollars as a result of poor management and ineffective controls. It is critical that FEMA address weaknesses in its task order issuance and invoice review processes so that it can reduce the risk of wasteful and potentially fraudulent expenses and provide assurance that the government is getting what it pays for.
Finally, while the placement of travel trailers at group and commercial sites might be necessary in the immediate aftermath of a disaster, going forward, FEMA needs to minimize the expenses associated with this type of temporary housing and to develop strategies to transition disaster victims into more permanent housing. We recommend that the Secretary of Homeland Security direct the Director of FEMA to take the following six actions. With regard to the 10 MD and 5 GSM contracts in Mississippi that we investigated for this report, FEMA should assess whether the contractors were overpaid and, if so, establish procedures to collect overpayments or offset future payments. For the current MD and GSM contracts in Mississippi and for any temporary housing unit contracts arising from future disasters, FEMA should

- place a greater emphasis on issuing task orders to the companies with the capability to perform the most work at the lowest cost;
- conduct a complete inventory of mobile homes and trailers, create a comprehensive database, and establish procedures to link work assigned to the contractors with specific unit barcodes to provide reasonable assurance that work is being performed on FEMA-owned housing units; and
- design and implement internal control procedures to enforce the existing payment and invoice review process to provide reasonable assurance that payments are being made for work actually performed.

To alleviate the excessive costs associated with maintaining travel trailers at group and commercial sites, FEMA should

- reevaluate the allocation of trailers and work at the sites to determine whether any savings can be achieved; and
- explore creating permanent partnerships with other agencies, such as the current partnership with the Department of Housing and Urban Development, to determine whether there are less expensive housing options that meet the needs of disaster victims.
As previously indicated, we have referred all the alleged criminal matters identified in our report to the Department of Justice and the DHS IG for further investigation, and we have notified the Katrina Fraud Task Force about our findings. For these cases, FEMA should consider the suspension or debarment of any contractor found to have committed fraud or otherwise violated the law. FEMA provided written comments on a draft of this report in which it concurred with all six of our recommendations and outlined actions it has taken that are designed to address each of these recommendations. As part of its response, FEMA also provided background on the events leading up to the award of the MD and GSM contracts and detailed some of the overall improvements the agency stated it has made since Hurricane Katrina. These comments are reprinted in appendix III. Concerning our recommendation to collect overpayments from the contractors, FEMA stated that it intends to assess whether it made overpayments and, if so, plans to assert claims against the contractors for the appropriate amount. In response to our recommendation to issue task orders to companies at the lowest cost, FEMA stated that it has reallocated work under the GSM contracts on a “low price basis per site” and under the MD contracts on a “best value basis.” In response to our recommendation to inventory mobile homes and trailers, create a database, and link work assigned to the contractors with specific unit barcodes, FEMA stated that it began an invoice-matching project in March 2007 and is in the process of completing an inventory count to ensure that all the temporary housing units at the sites are recorded in the agency’s existing management system. Concerning our recommendation that FEMA enforce the existing payment and invoice review process, FEMA stated that it has established an Acquisition Program Management Office (PMO) that is in charge of enforcing the process.
In addition, FEMA noted that the PMO has developed guidance and training on what constitutes proper invoice documentation and has also obtained the services of a contractor to automate the payment process to provide automatic calculation checks and line item tracking. FEMA stated that it is also implementing a contracting officer’s technical representative (COTR) training program and initiatives aimed at converting from paper to electronic files, developing a COTR program policy, and creating a comprehensive database of COTR information. With regard to our recommendation to evaluate the allocation of trailers and work at the group sites in order to achieve savings, FEMA stated that it is working to close and consolidate the sites and that it has reallocated work under both the GSM and MD contracts. Finally, concerning our recommendation that FEMA create permanent partnerships with other agencies to determine whether there are less expensive options that meet the needs of disaster victims, FEMA stated that it has established a task force called the Joint Housing Solutions Group to evaluate other methods of housing disaster victims. In addition, as indicated in our report, FEMA stated that it has implemented the Alternative Housing Pilot Program and has also entered into an interagency agreement with HUD establishing a temporary housing rental assistance and case management program for individuals displaced by the hurricanes. According to FEMA, the program will be administered through HUD and will include a needs assessment and individual development plan for each family. We are sending copies of this report to the Secretary of Homeland Security and the Director of the Federal Emergency Management Agency. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-6722 or kutzg@gao.gov if you have any questions concerning this report.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV. The objective of our investigation was to determine whether there were indications of fraud, waste, and abuse related to Federal Emergency Management Agency (FEMA) oversight of the 10 maintenance and deactivation (MD) and 5 group site maintenance (GSM) contracts in Mississippi. We focused our efforts on investigating (1) FEMA’s issuance of task orders to the MD contractors and (2) FEMA’s invoice review process. We also prepared case studies to determine the costs associated with the placement of travel trailers at group sites and investigated allegations of criminal and improper activity related to the contracts. To investigate FEMA’s issuance of task orders to the MD contractors, we assessed whether the agency issued the task orders in a cost-effective manner. We analyzed the costs associated with the five most expensive contract line items. We analyzed MD contractor invoices and FEMA receiving reports from June 2006 through January 2007 to find the total number of units paid for by FEMA. For each of the 10 contractors, we totaled the number of units paid for by FEMA for the preventative maintenance, phase-in, deactivation, septic bladder pumping, and emergency after-hours repairs contract line items. We then totaled the number of units and the amount paid to all contractors for all listed contract line items. To determine the five least expensive contractors, we divided the total number of units for each line item by five and then multiplied that total by each contractor’s line item cost. By adding up the cost of all line items for each contractor, we were able to determine the five least expensive contractors. Using these five contractors, we determined what the total cost for each line item would have been if FEMA had awarded these five contractors the MD task orders.
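The least-cost comparison described above can be sketched in a few lines of Python. All figures below (contractor rates, unit counts, and the amount actually paid) are hypothetical placeholders rather than FEMA contract data, and the sketch covers a single contract line item rather than all five:

```python
# Sketch of the least-cost comparison described above, for one contract
# line item. All rates, unit counts, and payment amounts below are
# hypothetical placeholders, not actual FEMA contract data.

# Hypothetical per-unit rate each of the 10 contractors bid for this line item.
rates = {"A": 210, "B": 185, "C": 240, "D": 175, "E": 260,
         "F": 195, "G": 230, "H": 205, "I": 250, "J": 220}

total_units = 50_000          # total units FEMA paid for under this line item
actual_cost = 10_500_000      # hypothetical amount FEMA actually paid

# Rank contractors by rate and keep the five least expensive.
cheapest_five = sorted(rates, key=rates.get)[:5]

# Divide the total number of units by five and price each share at the
# contractor's line item rate, as described in the methodology above.
units_each = total_units // 5
hypothetical_cost = sum(rates[c] * units_each for c in cheapest_five)

potential_savings = actual_cost - hypothetical_cost
print(cheapest_five, hypothetical_cost, potential_savings)
```

Repeating this calculation for each of the five line items and summing the differences mirrors how the potential savings described in the methodology were totaled.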
We then compared the new cost to the original FEMA payments to figure potential savings for the line items. To investigate FEMA’s invoice review process, we reviewed invoices and backup documentation associated with the $28.5 million in payments FEMA made for monthly preventative maintenance and the $2.2 million in payments FEMA made for emergency after-hours repairs. With regard to monthly preventative maintenance, we initially reviewed approximately 90 preventative maintenance invoices submitted by the MD contractors from June 2006 through January 2007. Each of these invoices contained approximately 1,000 to 3,000 monthly inspection billings. As a result of this review, we identified billings for 12,000 inspections, totaling $2.2 million, that did not contain any documentation to support that an inspection had actually occurred. To provide an estimate of improper or potentially fraudulent payments related to the remaining $26 million in preventative maintenance payments FEMA made to the MD contractors, we drew a statistical sample of 250 units that were paid for by FEMA as receiving a preventative maintenance inspection. We constructed the population of preventative maintenance inspections using contractor back-up invoice documentation and monthly contract status reports as well as FEMA receiving reports confirming FEMA payments for unit maintenance from June 2006 through January 2007. We acquired preventative maintenance inspection forms from the MD contractors and FEMA. Improper or potentially fraudulent payments for unit maintenance include cases where the payment was made (1) for preventative maintenance inspections on units not identified in FEMA’s database, (2) based on preventative maintenance inspection forms that did not exist, and (3) based on inspection forms that did not contain an occupant’s signature denoting a full inspection occurred or that three attempts to conduct an inspection were made. 
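The precision of an estimate drawn from a sample like this can be illustrated with a textbook normal-approximation confidence interval for a proportion. The sample size matches the 250 inspections described above, but the error count is a hypothetical placeholder, and this sketch is not necessarily the exact estimation procedure used:

```python
import math

# 95 percent confidence interval for a proportion, using the normal
# approximation. The number of improper payments found (errors) is a
# hypothetical placeholder, not an actual audit result.
n = 250       # sampled preventative maintenance inspection payments
errors = 50   # hypothetical count found improper or potentially fraudulent

p = errors / n                            # point estimate of the error rate
z = 1.96                                  # critical value for 95 percent confidence
margin = z * math.sqrt(p * (1 - p) / n)   # margin of error
lower, upper = p - margin, p + margin
print(f"estimated error rate: {p:.0%} plus or minus {margin:.1%}")
```

With these placeholder numbers the margin works out to roughly plus or minus 5 percentage points, the level of precision cited as an example in the discussion of confidence intervals below.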
To assess the reliability of the preventative maintenance inspection documentation from June 2006 through January 2007, we (1) reviewed existing documentation related to the data sources and (2) examined the data to identify obvious problems with completeness, accuracy, or duplicates. We determined that the data were sufficiently reliable for the statistical sample. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. With regard to emergency after-hours calls, we could not test the $2.2 million in payments FEMA made because the data we received concerning these calls did not contain complete information. To determine whether FEMA made emergency after-hours repair payments for units that do not exist in its inventory records, we compared the barcodes on the 7,310 housing units that received emergency repairs from June 2006 through January 2007 with the barcodes listed in FEMA’s main database for tracking the assignment and location of mobile homes and trailers. We were unable to identify records for 1,732 of these 7,310 units. Using FEMA’s payment records, we then determined that FEMA made 2,780 improper or potentially fraudulent emergency repair payments related to these 1,732 trailers. To prepare case studies, we calculated the expenses associated with a nonrepresentative selection of three group sites and one commercial site in Mississippi.
We used cost information issued by FEMA to calculate expenses associated with trailer purchase, site design and construction, and trailer installation. To identify the specific trailer barcodes located at each case study site, we searched several databases provided by FEMA, as well as data provided by the contractors, for park address or occupant name matches. Because FEMA could not provide us with a definitive number of trailers at each site, for purposes of our analysis, we assumed a best-case scenario for FEMA: that the parks were operating with a trailer on each available pad. Using the list of trailer barcodes we identified, we analyzed the invoices submitted by the MD contractor responsible for each site, and the accompanying FEMA receiving reports, to determine the number and type of services performed on each trailer and paid for by FEMA. The charges cover the period of June 2006 through January or February 2007, depending upon each contractor’s available data. We also added in the following costs as provided by FEMA: group site contractor costs for each site, including a portion of their phase-in cost, and monthly security and lease costs, if applicable. The one-time and recurring costs were combined for each park, resulting in a total cost for each park. To provide a general lifecycle cost for a FEMA trailer, we estimated these totals through March 2009, which is the date FEMA stated the travel trailer rental assistance program will end. To determine the general costs for a FEMA trailer located on a private site, we identified trailers noted as “private” in the FEMA databases and selected the first three for each MD contractor. We then searched the contractor invoices covering the period of June 2006 through January 2007 and recorded and totaled the charges for each barcode. The resulting totals were projected for 1 year and used as an estimate of the annual costs for maintaining a trailer on a private site.
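The lifecycle estimate described above reduces to combining one-time costs with recurring monthly costs projected through March 2009. A minimal sketch, with every dollar figure a hypothetical placeholder rather than an actual FEMA cost:

```python
# Sketch of the lifecycle cost projection described above. All dollar
# figures are hypothetical placeholders, not actual FEMA site costs.

one_time = {
    "trailer_purchase": 15_000,
    "site_design_construction": 20_000,
    "trailer_installation": 5_000,
}
monthly = {
    "maintenance": 300,   # MD contractor charges per trailer
    "gsm_share": 500,     # trailer's share of group site maintenance
    "security": 150,
    "lease": 100,
}

# Months from the start of the charge data (June 2006) through March 2009,
# when FEMA stated the rental assistance program will end.
months = (2009 - 2006) * 12 + (3 - 6) + 1

lifecycle_cost = sum(one_time.values()) + months * sum(monthly.values())
print(lifecycle_cost)
```

The per-trailer comparison between group sites and private sites then follows from dividing fixed site-wide costs, such as grounds maintenance, by the number of occupied trailers at the site.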
We also projected the costs for these trailers through March 2009. Our estimates are likely understated because we did not have access to trailer maintenance and group site maintenance payments made to the original four contractors. We also could not calculate MD phase-in costs, nor could we calculate deactivation expenses, because it is not certain which of the current MD contractors will be responsible for deactivating the trailers in 2009. In addition, we do not know how much it will cost to return the group sites to their original condition, as required by the terms of the group site lease. Results from nonprobability samples (case studies) cannot be used to make inferences about a population because, in a nonprobability sample, some elements of the population have no chance, or an unknown chance, of being selected as part of the sample. Our findings cannot be generalized to all sites, but when coupled with our other results they do provide useful insight into FEMA’s expenses. Finally, our interviews with FEMA officials, contractor personnel, and confidential informants led us to identify improper activity associated with the contract award process. To further investigate this activity, we reviewed and compared the contract proposals, total bid prices, line item bids, and government estimates for the work. It is important to note that we did not conduct a comprehensive evaluation of whether FEMA adhered to its own solicitation requirements and other laws or regulations when awarding the 10 MD or 5 group site maintenance contracts. We conducted our work from October 2006 through July 2007. We conducted our investigative work in accordance with the standards prescribed by the President’s Council on Integrity and Efficiency and conducted our audit work in accordance with generally accepted government auditing standards.
In addition to the individual named above, the following made key contributions to this report: Gary Bianchi, Bruce Causseaux, Jennifer Costello, Randy Cole, George Depaoli, Terrell Dorn, Craig Fischer, Janice Friedeborn, Matthew Harris, Adam Hatton, Brad James, Jason Kelly, John Kelly, Barbara Lewis, James Madar, Megan Maisel, Lisa Mirel, John Ryan, Barry Shillito, Nathaniel Taylor, and Quan Thai.

Hurricane Katrina destroyed or damaged 134,000 homes and 10,000 rental units in Mississippi alone. The Federal Emergency Management Agency (FEMA) in part responded by providing displaced individuals with temporary housing in the form of mobile homes and travel trailers, placed on both private property and at FEMA-constructed group sites. In 2006, FEMA awarded 10 contracts in Mississippi to maintain and deactivate (MD) the housing units and 5 for group site maintenance (GSM). GAO was asked to investigate whether there were indications of fraud, waste, and abuse related to FEMA's oversight of these 15 contracts. GAO analyzed FEMA's issuance of task orders, tested a representative sample of monthly maintenance inspection payments, prepared case studies detailing the costs related to trailers placed at group sites, and investigated improper activity related to the contracts. FEMA's ineffective oversight resulted in an estimated $30 million in wasteful and improper or potentially fraudulent payments to the MD contractors from June 2006 through January 2007 and likely led to millions more in unnecessary spending beyond this period. For example, FEMA wasted as much as $16 million because it did not issue task orders to the contractors with the lowest prices. In addition, GAO estimates that FEMA paid the contractors almost $16 million because it approved improper or potentially fraudulent invoices.
This amount includes about $15 million spent on maintenance inspections even though there was no evidence that inspections occurred and about $600,000 for emergency repairs on housing units that do not exist in FEMA's inventory. Furthermore, FEMA's placement of trailers at group sites is leading to excessive costs. FEMA will spend on average about $30,000 on each 280-square-foot trailer at a private site through March 2009, the date when FEMA plans to end temporary housing occupancy. In contrast, expenses for just one trailer at the Port of Bienville Park case study site could escalate to about $229,000—the same as the cost of a five-bedroom, 2,000-square-foot home in Jackson, Mississippi. Part of the reason for this expense is that FEMA placed only eight trailers at the Bienville site. FEMA wastes money when it operates sites with such a small number of trailers because GSM costs are fixed whether a site contains 1 or 50 trailer pads. At Bienville, FEMA spends over $576,000 per year—$72,000 per trailer—just for grounds maintenance and road and fence repair. GAO also found evidence of improper activity related to the contract award process. For example, FEMA awarded GSM contracts to two companies that did not appear to have submitted independent bids, as required. These companies shared pricing information prior to submitting proposals to FEMA and also shared the same president and accountant. Personnel at both companies also misrepresented their job titles and functions, a potential violation of the False Statements Act. In another case, FEMA's contracting officer awarded a $4 million contract to make the temporary housing units disabled-accessible; the contracting officer allegedly had a previous relationship with the awardee's subcontractor. GAO's licensed engineers estimated that the work should have cost only about $800,000, or one-fifth of what FEMA ultimately paid.
Congressional concerns that banks and thrifts (institutions) were not adequately responsive to credit needs of the communities they served, including low- and moderate-income areas, prompted the passage of the Community Reinvestment Act (CRA) in 1977. The act requires each federal bank and thrift regulator—the Federal Reserve Board (FRB), the Office of the Comptroller of the Currency (OCC), and the Federal Deposit Insurance Corporation (FDIC) for banks, and the Office of Thrift Supervision (OTS) for thrifts—(regulators) to encourage institutions under its jurisdiction to help meet the credit needs in all areas of the community the institution is chartered to serve, consistent with safe and sound operations. The act also requires the regulators to periodically assess institutions’ community lending performance during examinations and to consider that performance in their evaluations of institutions’ applications for expansion or relocation of their operations. Growing concern about the effectiveness of CRA’s implementation and its regulatory burden on institutions led to the regulators’ major reform effort, which resulted in two major proposed CRA revisions, issued in December 1993 and October 1994, and a final revised CRA regulation in May 1995. This report responds to a request from the former Chairmen, House Committee on Banking, Finance and Urban Affairs and the Subcommittee on Consumer Credit and Insurance asking us to evaluate whether the regulators’ reform efforts would improve compliance with the CRA, encourage institutions’ lending to their entire communities, and reduce unnecessary burden. The former Chairmen also asked us to evaluate the regulators’ implementation of the fair lending laws—the Fair Housing Act (FHA), the Equal Credit Opportunity Act (ECOA), and the Home Mortgage Disclosure Act (HMDA). The result of our work on fair lending will be discussed in a separate report. The debate preceding enactment of CRA was similar to the current debate. 
Community groups urged its passage to curb what they believed to be a lack of adequate lending in low- and moderate-income areas. Bank and thrift officials (bankers) generally opposed CRA as an unnecessary measure that could, among other things, unduly affect business decisions by mandating credit allocation and cause safety and soundness problems by forcing institutions to make excessively risky loans. Since the passage of CRA, the regulatory, economic, and legislative environments have changed. It is therefore useful to review the history of, and substantive amendments to, CRA to understand its origins and where emphasis has shifted. Table 1.1 briefly illustrates the major amendments to CRA since its passage. CRA was passed as title VIII of the Housing and Community Development Act of 1977 (12 U.S.C. 2901 et seq.). CRA requires each federal banking regulator to use its authority, when examining institutions, to encourage such institutions to help meet the credit needs of the local communities in which they are chartered, consistent with the institution’s safe and sound operation. In connection with these examinations, the regulators are required to assess an institution’s record of lending in its community and take it into account when evaluating any type of application by an institution for a deposit facility. CRA was amended by the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) to require that the regulator’s examination rating and a written evaluation of each assessment factor be made publicly available. FIRREA also established a four-part qualitative rating scale so that the publicly available CRA ratings would not be confused with the five-part numerical ratings given to institutions by the regulators on the basis of the safety and soundness of their operations. These safety and soundness ratings are confidential. In 1991, the Federal Deposit Insurance Corporation Improvement Act (FDICIA) further amended CRA to require public discussion of data underlying the regulators’ assessment of an institution’s CRA performance in the public CRA evaluation.
The Housing and Community Development Act of 1992 amended CRA to require that the regulators consider activities and investment involving minority- and women-owned financial institutions and low-income credit unions in assessing the CRA performance of institutions cooperating in these efforts. The Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994 amended CRA to require that institutions with interstate branching structures receive a separate rating and written evaluation for each state in which they have branches and a separate written evaluation of their performance within a multistate metropolitan area where they have branches in two or more states within the area. The principle contained in CRA, that institutions must serve the “convenience and needs” of the communities in which they are chartered to do business consistent with safe and sound operations, is one that federal law governing deposit insurance, bank charters, and bank mergers had embodied before CRA was enacted. The Banking Act of 1935 declared that banks should serve the convenience and needs of their communities. The Bank Holding Company Act, initially passed in 1956, requires FRB, in acting on acquisitions by banks and bank holding companies, to evaluate how well a bank meets the convenience and needs of its communities within the limits of safety and soundness. Under CRA, the concept of “convenience and needs” was refined to explicitly include extensions of credit. CRA and the fair lending laws, while separate, have related objectives. The primary purpose of CRA was to prohibit redlining—arbitrarily failing to provide credit to low- and moderate-income neighborhoods. FHA and ECOA prohibit lending discrimination based on certain characteristics of potential and actual borrowers. 
The FHA, passed by Congress in 1968 as title VIII of the Civil Rights Act of 1968, among other things prohibits discrimination in residential real estate-related transactions on the basis of an applicant’s race, color, religion, gender, handicap, familial status, or national origin. Such prohibited activities include denying a loan or fixing its terms and conditions based on discriminatory criteria. The ECOA, passed in 1974, prohibits discrimination with respect to any aspect of a credit transaction based on race, color, religion, national origin, gender, marital status, age, receipt of public assistance, or the exercise, in good faith, of rights granted by the Consumer Credit Protection Act. HMDA was enacted by Congress in 1975 to provide regulators and the public with information so that both could determine whether depository institutions were serving the credit needs of their communities; it was expanded over time to help detect evidence of possible discrimination based on the individual characteristics of applicants. HMDA established a reporting obligation for depository institutions. Initially, HMDA required depository institutions with total assets of more than $10 million to compile data, by geographic area, on the number and total dollar amount of mortgage loans that they originated or purchased, or for which they received completed applications, during each fiscal year and to make those data available for public inspection. In 1989, HMDA was amended to require collection and reporting of data on the race, gender, and income characteristics of mortgage applicants to assist in identifying discriminatory lending practices and enforcing fair lending statutes. Amendments to HMDA in 1988 and 1991 expanded the reporting requirements to most mortgage banking subsidiaries of bank and thrift holding companies and independent mortgage companies not affiliated with depository institutions.
In 1992, HMDA was amended to require affected financial institutions to make available to the public, upon request, their loan application registers, which maintain data for loans covered by HMDA. Both HMDA and CRA were originally enacted to remedy a perceived lack of lending by institutions to the communities in which they were chartered by the regulators to do business. HMDA was amended in 1989 to include the collection of data on the race, sex, and income of applicants for credit to provide indications of possible lending discrimination. In addition, 2 of the 12 assessment factors in the current CRA regulation, factors D and F, address the issue of discrimination to be considered in determining an institution’s CRA rating. Where available, HMDA data are to be used by examiners when assessing compliance with CRA, FHA, and ECOA. The federal banking regulators have primary responsibility for the examination of CRA performance and enforcement of the act. In addition to their responsibilities for examining institutions for financial condition and safe and sound operations, the regulators have been responsible, since the late 1960s, for examining and enforcing laws and regulations primarily related to matters other than safety and soundness. These include various consumer protection and civil rights laws and regulations intended to ensure that the provision of banking services is consistent with legal and ethical standards of fairness, corporate citizenship, and the public interest. These laws include CRA, and the regulators monitor compliance with them through compliance examinations. Since the late 1960s, the number of laws and regulations covered by compliance examinations has increased to over 20.
Believing that bank operations had become too complex to be adequately covered by a single group of examiners, the FRB established a special compliance examiner program in 1977, which is responsible for performing compliance examinations separately from safety and soundness examinations. The FRB made the compliance examiner program permanent in 1979. A distinct group of compliance examiners, initially established by this program, has remained in place since 1979 and has grown relative to the number of Federal Reserve member banks. FDIC initiated a compliance examiner program in the late 1970s that established a compliance specialty but did not represent a separate career path and did not preclude examiners from also conducting safety and soundness examinations. FDIC did not establish an entirely separate compliance examiner force exclusively responsible for compliance examinations until 1990. FDIC’s compliance examiner program was not fully staffed, however, until the end of 1993. The compliance examiners remained part of FDIC’s Division of Supervision until an August 1994 reorganization that consolidated activities formerly divided between the Division of Supervision and the Office of Consumer Affairs into a single Division of Compliance and Consumer Affairs. Similar to FDIC, OCC established a compliance examination specialty in the late 1970s. The specialty did not represent a separate career path for examiners and often resulted in examiners spending only a portion of their time doing compliance examinations. Junior examiners were usually responsible for doing compliance examinations. The perceived greater attractiveness of safety and soundness work combined with the safety and soundness crisis in the banking industry during the late 1980s and early 1990s rendered the compliance specialty a low priority. OCC began to develop a separate compliance program with a separate compliance examiner career path in 1993. 
OCC currently has an operating staff of compliance examiners composed of approximately 170 people. An additional 110 people are to be part-time compliance examiners who will be expected to devote a minimum of 20 percent of their time to compliance examinations. OCC believes that devoting at least 20 percent of these examiners’ time to compliance will ensure that they maintain a sufficient level of expertise. This group is to be responsible for compliance examinations of “program” banks, banks with $1 billion or more in assets. Banks with less than $1 billion in assets, approximately 70 percent of OCC’s banks, are to continue to be examined by OCC’s nonspecialized examiners. Although OTS supervises thrifts, as opposed to commercial banks, it is responsible for assessing compliance with most of the same compliance laws and regulations as the banking regulators. In 1989, OTS established a separate compliance examiner program in which compliance examinations are to be conducted by specially trained, career professional staffs in the OTS regional offices. The original mandate for establishing such a program came from the Federal Home Loan Bank Board. The passage of FIRREA, which abolished the Federal Home Loan Bank Board and established OTS, slowed the process of establishing the compliance examiner program. The program was fully implemented in 1990, and as of December 1994, OTS had 105 compliance examiners on board. Table 1.2 shows the number of institutions subject to examination and the number of compliance examiners for each regulator at year-end for the period beginning in 1988. The regulators rely primarily on the examination process to ensure that institutions comply with CRA. The CRA examination is a major component of an institution’s compliance examination and, in some cases (for example, when an application is pending), it is conducted independently of the compliance examination.
Although they have approached their compliance programs differently, the regulators jointly developed and issued the original regulations for CRA examinations in 1978. When examining an institution’s compliance with CRA, an examiner is to evaluate its technical compliance with a set of specific rules, such as recordkeeping requirements, and to qualitatively evaluate the institution’s efforts and performance in serving the credit needs of its entire community. The examiner is to do this in a variety of ways, which include using a CRA “examination checklist,” reviewing a questionnaire filled out by the institution and returned to the examiner prior to the examination, and reviewing a wide variety of institution records and data. Table 1.3 lists the CRA regulation’s technical requirements. Assessing compliance with the technical requirements of CRA is relatively straightforward. An institution either maintains its CRA statement and file or it does not, and the examiner can determine whether the institution complied with the technical requirements by working through the CRA checklist. However, assessing qualitative compliance with CRA is more difficult and subjective. In addition to the technical requirements of the CRA regulations, the regulators are to evaluate each institution on the basis of its efforts to ascertain community credit needs and its determination and performance in helping to meet those needs. When examining an institution, the examiner is instructed to apply the CRA procedures on a case-by-case basis to accommodate institutions that vary in size, type, expertise, and locale. Regulatory guidance indicates that community credit needs will often differ with the specific characteristics of each local community, and institutions may serve these local credit needs in a variety of ways. The qualitative aspect of an institution’s performance is currently to be assessed according to 12 factors. 
These factors were developed as part of the original regulations implementing CRA and have not changed. To allow the examiner the flexibility necessary to weigh the factors and categories consistent with their significance in the context of a particular institution, the regulators have not assigned a relative weighting to the factors. However, regulatory guidance notes that compliance with antidiscrimination laws and regulations, including ECOA and FHA, is a significant factor in reaching the overall CRA rating. Moreover, regulatory guidance issued in 1992 also stresses that examiners are to weigh CRA performance over process, i.e., how well an institution helps meet the credit needs of its community over documentation showing how the institution ensures CRA compliance. Financial institutions are to demonstrate their CRA performance under various assessment factors in several ways. For example, an institution is required to assess the credit needs of its community. To show that an assessment was done, an institution might document its discussions with members of the community, such as community groups or civic organizations, regarding credit needs of the community. To show that it lends to all parts of its community, an institution might plot its lending data onto a map to show the geographic locations where the institution has extended credit. A sophisticated form of coding loans according to their location is called geocoding. The CRA assessment factors are grouped under five performance categories identified in guidance provided by the regulators and published in the Federal Register on May 1, 1990. Table 1.4 lists the assessment factors to be reviewed by compliance examiners during a CRA examination.
A compliance examination generally results in two ratings: (1) a compliance rating for an institution’s overall compliance effort with regard to various laws, other than CRA, covered by the compliance examination and (2) a CRA rating for the institution’s compliance with CRA. Although the regulators may do a CRA examination separately from a compliance examination, officials from all four regulators said that they generally do them together. A compliance rating is based on a numerical scale ranging from 1 for top rated institutions to 5 for the lowest rated institutions. The CRA scale is a four-part descriptive scale including “outstanding,” “satisfactory,” “needs to improve,” and “substantial noncompliance.” Although there have been fluctuations over time, approximately 90 percent of all institutions examined for CRA compliance have received a “satisfactory” rating or better since July 1990 when, as a result of amendments to CRA contained in FIRREA, ratings were made public, and the rating scale was changed. Table 1.5 shows aggregate CRA ratings and ratings for each regulator since July 1, 1990, when the regulators began publicly disclosing CRA ratings. Federal regulators are to take an institution’s CRA record into account when considering certain types of applications from depository institutions, including most applications for mergers and acquisitions among depository institutions. This requirement is written directly into the CRA. Although CRA compliance is not to be the only issue the regulators consider when reviewing applications, it may play a major role. Community groups and some members of Congress have described the applications approval process as not being an effective enforcement mechanism for CRA because the regulators do not always deny applications on the basis of an applicant’s poor CRA performance. Table 1.6 shows the number of applications denied on the basis of poor CRA performance since 1989. 
Although they have been criticized for denying few applications on the basis of CRA performance, the regulators defend their records by stating that they consider the denial of an application to be a last resort. FRB and FDIC also approve applications with commitments; an example might be a commitment to increase lending efforts in targeted neighborhoods. This approach provides the regulators with better enforcement leverage by explicitly tying an application’s approval to tangible improvement of the applicant’s CRA performance. However, regulatory guidance states that commitments can only remedy specific problems in an otherwise satisfactory CRA record and cannot be the basis for the approval of an application. OCC and OTS do not typically approve applications with commitments but instead prefer to conditionally approve applications, if deemed appropriate. The conditions for such approvals may be similar to commitments; however, the applicant institution must meet the conditions before consummation of the transaction for which it has applied. An example of a condition might be to require an applicant with a “needs to improve” CRA rating who is seeking to open a branch office to upgrade its rating to “satisfactory” before opening the branch. Table 1.7 shows the number of applications approved with commitments since 1989 by FRB and FDIC and shows the number of applications approved with conditions by OCC and OTS. The regulators also pointed out that institutions considering expansion plans are aware of the role CRA plays in the approval process. An institution contemplating expansion would likely make sure that its CRA performance is at least satisfactory or reconsider submitting an application. Most institutions would prefer to avoid the adverse publicity and needless expense of filing an application only to be denied. If an institution perceives that its application for expansion is likely to be denied, it may choose to withdraw the application rather than have it formally denied.
In addition to potentially having an application denied, institutions wishing to expand must consider another element of the application process—the potential for a protest by community groups or other members of the public. Many bankers have complained that community groups have used protests of applications and the threat of adverse publicity, delay, possible public hearings—and their attendant costs—to force lending commitments from institutions attempting to expand. Because regulators must consider protests in their approval process, these groups have exercised a measure of leverage over institutions wishing to expand and have added an element to the process beyond the potential for an application denial. In some cases, agreements have been reached between bankers and community groups and then protests have been withdrawn and applications approved. In other cases, the regulators have approved the application after evaluating the protest and determining that it did not warrant a denial. Table 1.8 shows the number of applications from 1989 to 1994 that had protests lodged against them and the number of protested applications that were denied. To minimize disruptions to the applications process caused by protests, it is the regulators’ policy to encourage and sometimes facilitate meetings between institutions wishing to expand and protestants to help them clarify their areas of dispute and perhaps come to an understanding. They encourage the parties at odds to come together before an application is submitted to the regulator for approval. However, the regulators do not broker agreements between the parties, nor do they monitor or enforce the implementation of such private agreements. The public has played a key role in enforcing CRA in both the applications review process and the CRA examination process. This role was strengthened by amendments to CRA enacted by FIRREA in 1989 and FDICIA in 1991. 
Applications filed by institutions for expansion are a matter of public record, and the regulators invite public comment when they are considering them. Filing an application has the potential for inviting public comment and possibly protest. CRA examination guidance encourages examiners to contact community groups and other members of the public during examinations, and the regulators are expected to encourage interested parties to submit written comments on an institution’s CRA performance, which are to be included in the institution’s public CRA file. The CRA file is also to be reviewed by examiners during examinations. When FIRREA made CRA ratings public and FDICIA required more detail in CRA evaluations, members of the public were provided with more information to use in deciding whether to protest an application or patronize an institution. For example, some local governments have established programs in which they have required the deposit of public funds to be made with only institutions having satisfactory or better CRA ratings. Public disclosure of CRA ratings has also made the regulators more accountable by allowing interested members of the public to see how the regulators were rating various institutions. Each of the regulators has taken enforcement actions, such as supervisory agreements, memorandums of understanding, and cease and desist orders, to address CRA violations. Few such actions have been taken to date, and those taken have only been with the consent of the affected institutions. Those institutions were advised in the consent actions that in the event they did not comply, the regulators could take more stringent enforcement actions. The regulators have not taken any more stringent actions thus far. Moreover, in a December 1994 opinion, the Department of Justice determined that the regulators lack authority to use any enforcement mechanism for CRA other than measures taken in the context of an application. 
The extent to which the regulators have used enforcement actions for CRA purposes is unclear because such actions generally include a variety of issues needing institution management attention in addition to CRA issues. FRB reported that 14 of the enforcement actions that it issued in 1993 included provisions related to technical CRA violations. OCC reported that 9 actions it issued included CRA provisions, while OTS reported that it issued 8 such actions. FDIC does not currently track this information but said that it issues enforcement actions that include provisions for CRA violations. CRA has remained one of the most controversial banking laws. From its beginning, bankers have generally said they disliked the law, suggesting that it leads to credit allocation and imposes an unreasonable regulatory burden. Community groups, however, have maintained that the law is critical but has not been effectively enforced by the regulators and that institutions could do more to provide credit to underserved communities. Meanwhile, there has been a renewed call by some in Congress for more effective enforcement of CRA and less regulatory burden on institutions. In the mid-1970s, many Members of Congress said that too many institutions accepted deposits from households and small businesses in inner cities while directing a disproportionate amount of lending and investment elsewhere. They said that given this disinvestment, credit needs for urban areas in decline were not being met by the private sector. Moreover, they said the problem was worsening because public resources were becoming increasingly scarce. In January 1977, the original Senate bill on community reinvestment was introduced. Opponents of that bill voiced serious concerns that the bill could result in credit allocation based on the volume of deposits coming from certain areas, without regard for credit demand or the merits of loan applications.
They argued that the law would therefore disrupt the normal flow of capital from areas of excess supply to areas of strong demand and undermine the safety and soundness of depository institutions. Proponents of the bill stated that it was meant to ensure only that bankers did not ignore good borrowing prospects in their communities and that they treated creditworthy borrowers even-handedly. Senator William Proxmire, the bill’s sponsor, said that it would neither force high-risk lending nor substitute the views of regulators for those of bankers. He said that safety and soundness should remain the overriding factor when regulators evaluate applications for corporate expansion. Meeting the credit needs of the community was to be only one of the criteria for the regulators to evaluate when considering applications. Since enactment of CRA, the debate has continued. Many bankers still regard CRA as an unwelcome statute that limits their flexibility in business decisions and mandates relatively low-profit lending that could cause safety and soundness problems. Bankers complain that CRA regulations are unclear and burdensome, reducing their competitiveness with other lenders who are not subject to CRA. CRA was among the major complaints by bankers in all major studies of regulatory burden, including our report. We found that bankers’ complaints included CRA-based documentation, reporting, and geocoding requirements as well as lack of recognition of banks’ different characteristics, examination emphasis on form over substance, and a variety of other examination-related issues. In addition, bankers argued that other financial intermediaries, such as insurance and securities firms and credit unions, compete with banks for funds and loans but are not subject to CRA. Bankers said this results in a double standard that puts them at a competitive disadvantage.
Many community groups, however, have complained that too many institutions are receiving satisfactory CRA ratings without actually lending to their communities. They complained that CRA examinations are more concerned with an institution’s CRA process, while ignoring whether it has engaged in actual lending to its community. In addition, they have complained that while over 90 percent of all institutions receive at least satisfactory CRA ratings, there continue to be large geographic areas that suffer from an inability to obtain credit from these institutions. These groups have called for an examination process that stresses actual lending performance over process. They have also called for better public disclosure of the information and the rationale used to assess institutions’ lending performance. Although arguments for and against CRA and various aspects of its implementation have often been presented as belonging to bankers or community groups, it is important to note that there have also been disagreements among members of these groups, further complicating efforts to satisfy all sides of the controversy. The interests of large and small institutions have at times diverged. For example, bankers from small institutions have often been more concerned with regulatory burden associated with documentation requirements of CRA while bankers from larger institutions, which can more easily absorb the expense of documentation requirements, have been more concerned with the role application protests have played in delaying their expansion plans. There have also been instances where some community groups have defended particular institutions that were accused of poor performance by the regulators or other community groups. Some community groups have said they prefer to work with institutions to reach agreements on community needs and how those needs should be met, while others said they rely more on protests to get institutions to make commitments to the community. 
There have also been differences among regulators about how to properly implement CRA, with some advocating stronger enforcement and others raising concerns about credit allocation. On July 15, 1993, the President announced his initiative to facilitate low- and moderate-income community economic development. In addition to other measures, the President called for a revision to the current CRA regulation that would move CRA examinations toward a performance-based system focusing on results rather than process and paperwork—especially results in low- and moderate-income areas of institutions’ communities. He instructed the regulators to make examinations more consistent, improve enforcement to provide more effective sanctions, and reduce the cost and burden of compliance. The four regulators jointly released their proposed revision to the current CRA regulation for comment on December 21, 1993. The proposal would have replaced the current qualitative CRA examination system, including the 12 assessment factors, with a more quantitative system based on actual performance as measured through the use of three tests: the lending, service, and investment tests. A key element of the December 1993 proposal was the “market share test,” which would, as part of the lending test, compare an institution’s lending relative to other lenders in low- and moderate-income neighborhoods, with its lending in other parts of its service community. Collectively, the regulators received over 6,700 comment letters on the December 1993 proposal from representatives of the banking industry, community groups, Congress, and state and local governments. Reaction to the proposal was mixed and generally polarized based on the interests of the individual or organization commenting. On January 26, 1994, we submitted our analysis of the regulators’ proposal in a letter to the former Chairmen, House Committee on Banking, Finance and Urban Affairs and the Subcommittee on Consumer Credit and Insurance. 
In response to comments received on their first proposal, the regulators released a second proposed CRA regulation that was published in the Federal Register on October 7, 1994. This proposal reflected comments received on the December 1993 proposal and the regulators’ further internal considerations. While still striving for a system that measured performance and not efforts or processes, the new proposal made revisions to the first proposal that would increase the role of examiner discretion in CRA examinations. For example, the lending test would no longer be based on the market share test. Collectively, the regulators received over 7,200 comment letters on the October proposal. In May 1995, FRB, OCC, OTS, and FDIC released the new revised CRA regulations. The final regulations retained, to a significant extent, the principles and structure of the December 1993 and October 1994 proposals but made changes to some details to respond to concerns raised in the comment letters and further regulator consideration. The final revised regulation eliminates the previously discussed 12 assessment factors and substitutes a three-part, performance-based evaluation system for institutions that do not qualify as small institutions. The regulation defines small institutions as independent retail institutions with total assets of less than $250 million and holding company affiliates with total assets of less than $1 billion. The revised regulation includes a streamlined examination for small banks and the option for all institutions to have their CRA performance examined according to a regulator-approved strategic plan. To take into account community characteristics and needs, the revised CRA regulation makes explicit the performance context against which the tests and standards set out in the proposed regulation are to be applied. 
This performance context includes consideration of six factors concerning the unique characteristics of the institution under examination and the market in which it operates. To determine a performance context, the regulators are to request any information that the institution has developed on lending, investment, and service opportunities in its assessment area(s). The regulators have stated that they will not expect more information than what the institution normally would develop to prepare a business plan or to identify potential markets and customers, including low- and moderate-income persons and geographies in its assessment area(s). The regulators are to consider this information from the institution along with information from community, government, civic, and other sources to enable the examiner to gain a working knowledge of the institution’s community. The revised CRA regulation gives particular attention to the institution’s record of helping to meet credit needs of low- and moderate-income communities and individuals based on community characteristics and needs. In general, the regulators are to rate an institution’s performance under each of the tests, but the lending test rating is to carry more weight than the others. An institution must receive a rating of at least “low satisfactory” on the lending test to receive an overall CRA rating of satisfactory. However, ratings on the other two tests are still to have considerable effect on the overall rating as well. 
The major elements of the regulators’ revised CRA regulations are described as follows: Lending test: The lending test is to entail a review of an institution’s lending record, including originations and purchases of home mortgage, small business, small farm, and, at the institution’s option, consumer loans throughout the institution’s service area, including the low- and moderate-income areas; the proportion of the institution’s lending in its service area(s); the distribution of loans to borrowers of various income levels; the number of loans to small businesses and farms; and the like. If the regulators determine that a substantial majority of an institution’s business is consumer lending, then they are to evaluate this lending as part of the lending test whether or not the institution elects to provide consumer lending data. The regulators are to consider loans to individuals of all incomes wherever they reside. The number, amount, and complexity of an institution’s community development loans are also to be included in the lending examination. The regulators are to consider the lending of affiliates at the election of the institution, or if an institution appears to be attempting to influence a CRA examination inappropriately by conducting through an affiliate activities that an examiner would evaluate unfavorably. Investment test: The investment test is to evaluate an institution’s investments in community development activities. In reviewing these investments, the examiner is to take into account the amount, innovativeness, or complexity of the investment as well as the degree to which it responds to community credit and economic development needs. Institutions with limited investment authority, such as thrifts, are to receive a low-satisfactory rating under the investment test, even if they have made few or no qualified investments, as long as they have a strong lending record.
A donation, sale on favorable terms, or rent-free occupancy of a branch (in whole or in part) in a predominantly minority neighborhood to any minority- or women-owned depository institution, or a financial institution with a primary mission of promoting community development, is to be considered a qualifying investment. Service test: The service test is to require the examiner to analyze an institution’s systems for delivering retail banking services and the extent and innovativeness of its community development services. The examiner is to review, in addition to the branching information, information regarding alternative service delivery mechanisms such as banking by telephone, mobile branches, loan production offices, automated teller machines (ATM), etc., in low- and moderate-income areas and for low- and moderate-income individuals. The evaluation is to also consider the range of services, including noncredit services, available to, and the degree to which those services are tailored for, the various income level areas. The focus of the test, however, is to be on the institution’s current distribution of full-service branches. Alternative systems for delivering retail banking services, such as ATMs, are to be considered only to the extent that they are effective alternatives in providing needed services to low- and moderate-income areas and individuals. Data collection, reporting, and disclosure: Data reporting requirements on institutions are to be expanded by requiring that originations and purchases of all small business and small farm loans be collected and reported to the regulator. Each institution is required to collect and maintain, in a standardized, machine-readable format, the following data for each loan: the amount at origination, the location, and an indicator of whether the loan was made to a business with $1 million or less in gross annual revenues. The location of the loan is to be maintained by census tract or block numbering area.
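To illustrate, the per-loan records and tract-level aggregation just described might be sketched as follows. This is a hypothetical illustration only (the field names and data are invented; the regulation specifies the content of the records, not any particular software or file layout), using the original-amount thresholds the regulation sets for annual reporting:

```python
from collections import defaultdict

# Hypothetical per-loan records: (census tract or block numbering area,
# amount at origination, indicator for gross annual revenues of $1M or less).
loans = [
    ("4801.02", 75_000, True),
    ("4801.02", 180_000, False),
    ("4802.00", 300_000, True),
]

def amount_bucket(amount):
    """Classify a loan by the original-amount thresholds used for reporting."""
    if amount <= 100_000:
        return "$100,000 or less"
    if amount <= 250_000:
        return "more than $100,000 up to $250,000"
    return "more than $250,000"

# Aggregate annually by geography: number and total amount of loans in each
# bucket, plus loans to businesses or farms with revenues of $1M or less.
by_tract = defaultdict(lambda: defaultdict(lambda: [0, 0]))
small_revenue = defaultdict(lambda: [0, 0])
for tract, amount, revenues_under_1m in loans:
    entry = by_tract[tract][amount_bucket(amount)]
    entry[0] += 1
    entry[1] += amount
    if revenues_under_1m:
        small_revenue[tract][0] += 1
        small_revenue[tract][1] += amount
```

An institution’s annual machine-readable submission would then list, for each geography in which it made at least one such loan, figures of the kind held in `by_tract` and `small_revenue`.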
Each institution is to report in machine-readable form annually, aggregated for each census tract/block numbering area in which the institution made at least one small business or small farm loan during the prior calendar year, the number and amount of loans with original amounts of $100,000 or less, more than $100,000 but less than or equal to $250,000, or more than $250,000, and the number and amount of loans to businesses and farms with gross annual revenues of $1 million or less. The regulators, rather than the institutions, are to annually prepare individual CRA disclosure statements for each reporting institution and aggregate disclosure statements for each metropolitan statistical area (MSA) and the non-MSA portion of each state. The regulators are to make both the individual and the aggregate disclosure statements available to the public at central depositories. The aggregate disclosure statements will indicate, for each geography, the number and amount of small business and small farm loans originated or purchased by all reporting institutions, except that the regulators may adjust the form of the disclosure if necessary, because of special circumstances, to protect the privacy of a borrower or the competitive position of an institution. Institutions are also to include the disclosure statements in their public files. In keeping with the lending test, data collection and maintenance are optional for consumer loans, and there are no reporting requirements. Streamlined examination for small institutions: Independent banks and thrifts with assets below $250 million and institutions with assets below $250 million that are subsidiaries of holding companies with less than $1 billion in assets are to be evaluated under a streamlined examination method unless an institution affirmatively requests an alternative examination method. 
The streamlined method is to focus on an institution’s loan-to-deposit ratio, degree of local lending, record of lending to borrowers and geographies of different income levels, and record of responding to complaints. An institution’s fair lending record is also to be taken into account in assigning a final rating. The regulators are to consider an institution’s size, financial condition, and credit needs of its service area in evaluating whether its loan-to-deposit ratio is reasonable. The regulators are to further consider, as appropriate, other lending-related activities, such as originations for sale on the secondary market and community development lending and investment. Strategic plan option: Every institution is to have the alternative of submitting a strategic plan to its supervisory agency for approval that was developed with community input detailing how the institution proposes to meet its CRA obligation. The strategic plan option is not to relieve an institution from any reporting obligations that it otherwise has. However, small institutions do not subject themselves to any data reporting responsibilities by electing the strategic plan option. Community development test for wholesale or limited purpose institutions: The regulation is to replace the investment test with a community development test for wholesale or limited purpose institutions. The regulation incorporates into this community development test both community development lending and community development services in addition to qualified investments. Therefore, under the regulation, wholesale or limited purpose institutions are to be subject only to the community development test. Wholesale or limited purpose institutions must be designated as such by the regulators. 
Institutions are to continue maintaining a public file that contains (1) all written comments received from the public during the previous 3 years that comment on the institution’s CRA performance; (2) a copy of the public portion of the institution’s most recent CRA examination; (3) a list of the institution’s branches, their street addresses, and geographic areas to be served; (4) a list of branches opened or closed by the institution during the previous 3 years, their addresses, and geographic areas to be served; (5) a list of services generally offered at the institution’s branches and descriptions of material differences in the availability or cost of services at particular branches; (6) a map of each assessment area showing the boundaries of the area and identifying the geographic areas to be served within the area; and (7) any other information the bank chooses. In addition, large banks are also to include in their public file (1) any consumer loan data that the institution wishes to have considered as part of its CRA examination; (2) the institution’s CRA disclosure statement that it receives from its regulator; and (3) relevant HMDA disclosure statements for the previous 2 years. Small banks are to include their loan-to-deposit ratio for each quarter of the previous year and any additional information that they see fit, including the information required for large institutions if they elect to be evaluated under the lending, investment, and service tests. Institutions that elect to be evaluated under the strategic plan are to include the plan in the public file. An institution that received a less than satisfactory rating during its most recent examination is to include a description of its efforts to improve its performance. The revised CRA regulations are to amend the current CRA regulations over time, eventually replacing the existing regulations in their entirety by July 1, 1997.
However, various elements of the new regulations are to be phased in sooner, some as early as January 1, 1996. Until that time, the regulators will continue to follow the current CRA regulations to examine institutions for CRA compliance. The objective of this report is to address four questions regarding the federal regulators’ implementation of CRA: (1) What were the major problems in implementing CRA, as identified by the affected parties—bankers, regulators, and community groups? (2) To what extent do the regulatory reforms address these problems? (3) What challenges do the regulators face in ensuring the success of the reforms and what, if any, actions would help the regulators in facing these challenges? and (4) What initiatives have been taken or proposed to help bankers overcome community lending barriers and enhance lending opportunities, particularly in low- and moderate-income areas? We interviewed regulatory officials responsible for bank or thrift examinations to understand and identify the major problems with the current regulatory system used in implementing CRA and to understand the context in which the regulators examine and enforce the law. In addition, we reviewed the legislative history of the CRA to discern its original intent and to see how amendments have changed the law over time. We also collected data from each of the regulators relevant to various aspects of their CRA enforcement. In addition to regulatory officials, we judgmentally selected and interviewed other parties who were located in the areas where we did our work and who are concerned with or active in CRA compliance issues, including bankers, community groups, trade groups, consultants, representatives of the secondary markets, and officials from other federal agencies, including Justice. We also collected data from each of these groups and from Justice regarding CRA examinations and enforcement. 
In addition to our interviews, we reviewed testimonies and speeches by representatives of the groups described above from a large number of congressional hearings and other forums that have taken place since enactment of CRA. Statements in this report representing the views of the affected parties reflect all of the sources described above. To identify the major problems in implementing CRA, determine to what extent the regulatory reforms would address these problems, and identify challenges the regulators would face in ensuring the success of the reforms, we reviewed in detail compliance examinations at 40 banks and thrifts located in 4 regions, including the Northeast, Midwest, West, and South Central parts of the United States. At each of the 40 institutions, to the extent possible, we completed a case study using standardized data collection instruments to gather the impressions and experiences of the bankers and examiners. For our case studies, we judgmentally selected institutions that included a variety of asset sizes; business types; and a mix of rural, suburban, and urban institutions. We selected institutions regulated by each of the four regulators and attempted to select institutions with a variety of good and bad CRA ratings. However, we found that institutions that received low CRA ratings from their last compliance examination were less willing to participate in the case studies than those that had fared better. While 11 of the institutions had received a “needs to improve” rating on their last CRA examination, none had received a “substantial noncompliance.” The institutions we studied included 10 from each of the four regions; 6 were examined by FRB, 13 by FDIC, 9 by OCC, and 12 by OTS. Nine of the institutions had assets over $1 billion, 13 had assets of less than $1 billion but more than $100 million, and 18 had assets of $100 million or less. 
We also talked to community groups known to be active in each region about their involvement in CRA compliance. In this way, we could identify the positive and negative aspects of the current examination system and verify some of the anecdotal complaints surrounding it. In addition, the case studies afforded us the opportunity to discuss other related issues, such as CRA reform, with a large number of individuals who worked with CRA compliance on a regular basis. To determine the extent to which the regulators’ reform proposals would address the problems we identified from the work previously described and to identify the challenges the regulators would face in ensuring the success of the reforms, we evaluated a number of proposals for CRA reform that were put forward from several sources, including the proposals released by the regulators on December 21, 1993, and October 7, 1994, and the final revised CRA regulation, released in May 1995. In addition, we reviewed letters submitted by bankers, community groups, and other concerned parties commenting on the regulators’ proposals. We discussed numerous suggestions for improving the CRA examination and enforcement process with participants in our case studies. We also reviewed the transcripts from hearings held by the regulators around the country during their development of the revised CRA regulation. To identify the initiatives that had been taken to overcome lending barriers and enhance community lending opportunities, we (1) judgmentally selected, on the basis of availability, and interviewed over 20 community group representatives; (2) held a roundtable discussion involving representatives from the Association of Community Organizations for Reform Now, the Center for Community Change, and the Consumer Federation of America; and (3) attended several workshops and conferences covering CRA compliance, sponsored by a variety of industry and community groups as well as the regulators.
We also identified the activities of the regulators’ consumer affairs programs and reviewed a large volume of material generated by banks, community groups, and the regulators on their activities to promote community lending. We conducted our case study work in Chicago, San Francisco, Boston, and Dallas from July 1993 to March 1994, and our work in Washington, D.C., continued through June 1995, in accordance with generally accepted government auditing standards. We obtained written comments on a draft of the report from FDIC, the Federal Reserve, OCC, and OTS. A discussion of these comments and our responses appears at the end of chapters 3 and 4. In addition, the agencies’ comments and our additional responses are printed in appendixes I through IV.

All of the affected parties that we spoke with—bankers, community groups, and regulators—agreed on many of the problems with the implementation of the Community Reinvestment Act (CRA). However, the reasons they gave for why they believed the problems adversely affected their interests—which form the basis of their concerns—and the often contradictory solutions they offered to address the problems showed that the affected parties differed considerably on how best to revise CRA. The revised CRA regulation, if effectively implemented, should focus examinations on results, thereby eliminating a major problem that all parties identified—an overreliance in the regulators’ examinations upon an institution’s documentation of efforts and processes used to ascertain and meet community needs. However, the revised regulations neither fully address all identified problems nor wholly satisfy the often conflicting concerns or contradictory solutions of bankers and community groups. The success of the reform efforts will depend largely upon how effectively the revised regulations are implemented.
The first section of this chapter discusses the similarities and differences among the groups on the problems they identified as well as their concerns with and solutions to the problems. The second section presents our analysis of the extent to which the revised regulations should address those problems and concerns. Bankers, community groups, and the regulators generally agreed in interviews and in public testimonies on what they considered to be major problems with the examination and enforcement of CRA. These problems included too little reliance on lending results and too much reliance on documentation of efforts and processes, leading to an excessive paperwork burden; inconsistent CRA examinations by regulators resulting in uncertainty about how CRA performance is to be rated; examinations based on inadequate information that may not reflect a complete and accurate measure of institutions’ performance; and dissatisfaction with regulatory enforcement of the act, which largely relies on protests of expansion plans to ensure institutions are responsive to community credit needs. While the affected parties generally agreed on these four problems, their underlying concerns differed significantly and the solutions they offered were often contradictory or incompatible. Generally, bankers’ concerns about the problems focused on the regulatory burden of compliance, and they sought to reduce that burden. For example, they sought to increase certainty about examination ratings through use of preapproved strategic plans and guarantees (“safe harbors”) that satisfactory and outstanding ratings would protect from CRA protests institutions’ applications to move or expand operations. In contrast, community groups were generally concerned about the lack of accountability on the part of institutions to ensure that they meet their community lending obligations. 
These groups also sought measures to increase regulators’ accountability through more public disclosure of institutions’ CRA performance and tougher enforcement. The differences in the concerns and solutions reflected bankers’ and community groups’ different perspectives and constituencies and broader philosophical differences, as discussed in chapter 1. Bankers, community groups, and the regulators we contacted generally agreed that a major problem with CRA examinations was that examiners relied too heavily during examinations upon an institution’s paperwork. This paperwork was to document the institution’s efforts and processes to ascertain and help meet community credit and service needs. All parties also generally agreed that the examination should be based on the results of those efforts and processes, with emphasis on the institution’s community lending performance. The parties agreed that a single community lending standard or formula for evaluating those results was unworkable because of the importance of considering such factors as an institution’s business strategy, its financial condition, and the specific needs in different areas of the community that the institution served. Despite these areas of agreement, bankers and community groups had different underlying concerns and offered different solutions. Bankers were most concerned that the focus on their CRA efforts and processes caused them to produce many documents that served no purpose within the institution other than to satisfy the information needs of examiners conducting CRA examinations. They advocated that the CRA reform should eliminate this burden by focusing examinations on performance or results. However, community group representatives were most concerned that the focus on documentation of efforts and processes had failed to hold institutions accountable for their actual lending and service in communities. 
They too favored a focus on results with examiners evaluating data on actual lending and services that institutions provided to their communities. In fact, they proposed that community groups be given access to the data evaluated by examiners and be permitted to provide input on an institution’s performance. Regulators supported a performance- or results-based evaluation system to reduce institutions’ documentation burden and improve CRA compliance. They also suggested that a performance-based system would promote improved consistency in examinations. Bankers, community groups, and regulators all identified inconsistency in performance examinations as a problem with the implementation of the act. It was apparent from our case studies that inconsistency was due in part to examiners using their discretion and focusing on or emphasizing different aspects of the CRA regulations. This inconsistency resulted in uncertainty among the affected parties about how institutions’ performance would be evaluated during examinations. Although the affected parties’ underlying concerns and solutions tended to differ, the solutions were all designed in one way or another to reduce, or more clearly direct, examiner discretion to provide greater consistency to the examination process. Generally, bankers were concerned about inconsistency in performance examinations because this led to confusion and uncertainty about what actions were necessary to attain a positive rating. As a result of the uncertainty, many bankers believed that institutions were producing unneeded documentation of their efforts. Some bankers sought to reduce this uncertainty through more specific instructions or lending targets from the regulators, thereby getting more definition to what actions count as CRA activities. 
Community groups generally recognized inconsistency as a problem that represented a failure of the regulators to hold institutions accountable for adequately serving all areas of their delineated communities. Some groups said they felt that examiners do little to determine whether institutions are meeting community needs. Many group representatives advocated more emphasis on performance standards as well as increased disclosure of information about institutions’ community reinvestment results. Regulators also recognized that inconsistency in examinations was a problem. Many of the examiners we interviewed said that they thought inconsistency resulted from the subjectivity inherent in examinations due to vague standards, unclear guidance, and frequent changes in the focus of examinations. The examiners’ latitude in interpreting standards, such as “the institution’s ability to meet various community credit needs based on its financial condition and size, legal impediments, local economic conditions and other factors,” resulted in examiners focusing on different parts of the guidance. In addition, since 1989, changes to the guidance for CRA examinations occurred more frequently than before and shifted emphasis from institutions’ programs for managing CRA as part of day-to-day activities to the results of their CRA programs. We further discuss the role of examiner discretion in chapter 3. Another factor cited by the affected parties was insufficient experience and training of examiners conducting CRA examinations. Some community groups pointed out the need to improve the capacity of the regulators for analyzing data in the context of community credit needs and institutions’ efforts to satisfy those needs. As discussed in more detail in chapter 3, many examiners sought clearer guidance and better training as a solution to their concern about inconsistency in examinations. 
Bankers, community groups, and the regulators have identified numerous concerns related to whether CRA examinations are based on information that reflects a complete and accurate measure of institutions’ performance. Disagreements persist among the affected parties as to what information should be collected and reported by institutions and what information should be disclosed publicly. Bankers generally view most data collection and reporting as burdensome and its disclosure as a potential violation of the proprietary nature of their business. Community groups, however, generally believe that information transparency—which includes both obtaining the data and understanding how the examiners move from applying performance data and other information against the standards to arrive at the CRA rating—is key to ensuring accountability and measuring CRA compliance. Bankers complained that they were forced by the regulators to generate data that (1) may not fully reflect their business activities, (2) would not be produced without the regulatory requirement, and (3) should be kept confidential. For example, some bankers were concerned that data collected under the Home Mortgage Disclosure Act (HMDA) may be misleading without an explanation, as in cases where high loan rejection rates may result from aggressive marketing efforts by institutions seeking low-income applicants. Many bankers opposed existing and new reporting requirements as being burdensome. They were particularly concerned about frequent changes in reporting requirements that require costly changes to their data collection systems. In addition, bankers expressed concern about publicly disclosing information that they believe reveals too much about their business practices and should be kept confidential. Community groups told us that public availability of data is of great value and that the transparency of institutions’ lending performance is what would make it useful.
Community groups strongly opposed any reduction in reporting requirements and advocated the collection, reporting, and public disclosure of additional data to better evaluate institutions’ performance. These groups said they believe that it is essential that they have access to the data used by CRA examiners in determining regulatory ratings so that they can evaluate both the institution’s and the regulator’s performance. Examiners in our case studies said they generally believed that data are necessary for them to examine institutions’ compliance with CRA. However, they said that data collected are useful only if they are accurate and appropriately reflect the relevant activities of the institution being examined. Some examiners we interviewed said HMDA data are sometimes limited in their usefulness for a number of reasons, including poor data quality and inconsistent reporting by institutions. They also said that examiners may lack the time or training to perform HMDA analyses. Finally, they said that other information, involving the creditworthiness of the borrower or property, had to be used in conjunction with HMDA data because the data may not accurately or completely portray an institution’s lending activity, particularly for institutions that are not heavily involved in home mortgage lending.

Both bankers and community groups identified regulatory enforcement of CRA as a problem, but members of the two groups generally had different concerns. Most bankers commented that there is no protection against application protests for institutions that regulators have determined are in compliance with CRA and that positive incentives are not in place to promote compliance with CRA. For example, bankers complained that community groups have used protests to needlessly delay the approval of applications. They noted that a satisfactory or outstanding CRA rating means nothing when a community group mounts a protest against expansion plans.
Bankers charged that these groups use protests to further their own agendas regardless of an institution’s lending record. Many bankers advocate safe harbors that would protect institutions from protests if the regulators have determined, through the examination process, that their CRA compliance is outstanding. Another type of safe harbor would reward good CRA performance with less frequent CRA examinations. In practice, the regulators currently have policies that consider an institution’s CRA rating in determining the frequency of examinations, with lower-rated institutions to be examined more frequently. In addition, bankers have contended that there should be positive incentives in place to encourage CRA compliance in addition to what they see as exclusively negative sanctions to punish noncompliance. Some bankers have proposed that CRA be replaced by or supplemented with direct financial subsidies to those willing to extend credit to low- and moderate-income areas. Community groups, however, identified as a problem the fact that regulatory enforcement of CRA was limited to the denial of applications by a depository institution for expansion (including applications for a merger or acquisition) or negative publicity from a low CRA rating. They pointed out that institutions with no plans for expansion and no fear of adverse publicity from a low CRA rating may not feel the need to commit significant resources to CRA compliance. To strengthen enforcement of the act, community groups have advocated regulator use of more stringent enforcement actions, such as cease-and-desist orders and civil money penalties. Although some cease-and-desist orders and formal agreements between regulators and institutions have included CRA performance as one of many issues, no such actions have been taken solely to address noncompliance with the act or poor CRA performance. 
The regulators have recognized the general dissatisfaction with CRA enforcement by bankers and community groups as well as by some Members of Congress. From our review of the reform proposals and the revised CRA regulations, it appears that the regulators have thoroughly assessed the problems related to CRA examinations, and the revised regulations attempt to address the problems and concerns raised to us by the affected parties. However, the revised regulations will not wholly satisfy the often contradictory concerns of bankers and community groups. Bankers and community groups continue to have fundamentally different expectations about institutions’ CRA obligations. If effectively implemented, we believe the revised regulations will significantly reduce the first problem of overreliance on documentation of community reinvestment efforts and processes by focusing the examination standards on results. However, the regulators’ success in addressing the second problem of examination inconsistency and uncertainty will depend upon implementation, especially how effectively examiners use their discretion. This, in turn, will depend on the effectiveness of the guidance and training examiners are provided. In response to the third problem of data usefulness, the final regulations have clarified the information to be used to evaluate performance, but the affected parties disagree about whether the data to be collected under the revised regulations will appropriately reflect lending results or be too burdensome. The reform proposals related to the fourth problem of CRA enforcement were dropped by the regulators (1) because of Justice’s opinion stating that the regulators do not have authority to take stronger enforcement action for CRA and (2) because of community groups’ concerns that safe harbors would preclude them from protesting applications of those institutions they determine to be poor performers.
Consequently, the revised regulations do not resolve the affected parties’ divergent concerns with CRA enforcement. The revised regulations address the problem of overreliance on documentation of efforts and processes by shifting the focus of examination standards to an institution’s community reinvestment results. Under the revised regulations, an examiner is to analyze an institution’s community reinvestment results in three performance areas—lending, investment, and services. Although all the affected parties generally agreed with the shift to results-based examinations, the revised regulations may not address community groups’ desire to hold institutions more accountable for the results of their community lending activities. The regulators initially proposed, and later dropped, the use of more quantifiable performance measures in the first CRA proposal as part of the “market share test” described in chapter 1. While community groups generally supported this test, many bankers were concerned that it would not accurately reflect their lending performance and could lead to unsafe and unsound lending. Disagreements continue between the affected parties about the use of quantifiable measures to examine CRA performance, but they generally agreed that some flexibility is needed in CRA examinations. In developing the revised regulations, the regulators attempted to balance the need for objective standards with the need for flexibility in examining different types of institutions operating under differing financial conditions and serving widely different types of communities. We believe that the success of the revised regulations in addressing the problem of inconsistent examinations will depend upon how effectively the examiners exercise their discretion when implementing the new regulations. 
This problem has been, and may continue to be, difficult for examiners to overcome because examinations involve subjective, case-by-case judgments about an institution’s performance. For example, examiners will still be required to judge the “innovativeness” of loans and investments and differentiate between “excellent” and “good” responsiveness to credit needs. The regulators recognized the need to improve examination consistency in the revised regulations and indicated that they intend to improve guidance and training for examiners before implementing the new regulations. While it is too soon to evaluate their progress in these areas, we agree that clear guidance and comprehensive training in community development techniques are critical for consistency in examinations. Chapter 3 further discusses the issues that need to be addressed to ensure successful implementation of the revised regulations. The revised regulations also include an option that responds to bankers’ concerns that inconsistency in examinations contributes to uncertainty about what is needed to ensure a positive rating. Institutions may submit to regulators a strategic plan for community reinvestment that sets standards of performance. Although institutions could experience some uncertainty when the plan is submitted to the regulator for approval, this option may help alleviate uncertainty at the time of an examination. This “strategic plan” option includes a requirement that institutions make public their plans for comment prior to the plans being approved by the regulators. For this reason, this option has not been favorably received by all institutions. Many bankers have raised concerns that making the plan public may have anticompetitive effects, since they would have to disclose their strategic business objectives and goals. However, banks would not have to publicly disclose proprietary information. 
To fulfill their examination responsibilities, the regulators have explained that assessing performance against results-oriented CRA examination standards will require complete and accurate measures of performance in the areas of lending, investment, and service to delineated communities. The issue of what data should be collected and reported to the regulators and disclosed publicly has been among the most controversial issues surrounding the CRA reform efforts. The regulators have tried to balance the contradictory calls by bankers to reduce regulatory burden with the community groups’ call for additional data reporting and public disclosure to increase institutions’ accountability. The regulators’ attempt to strike a balance in the revised regulations among the competing points of view has led to (1) exempting small institutions from additional data reporting requirements, (2) increasing data collection and reporting requirements for large institutions, and (3) shifting data analysis responsibilities to the examiners. The regulators also increased public disclosure of aggregate loan information for small business, small farm, or community development lending but not information on individual loans. In addition, the revised regulations permit voluntary collection and disclosure of consumer loans, although reporting is not required. The revised regulations will not completely satisfy all parties, some of whom continue to disagree about whether the data collection requirements are appropriate or burdensome. Although the revised regulations address the issues of what information will be collected to examine CRA performance under the new standards and what information must be disclosed, they do not address the other information problems identified in our case studies related to data inaccuracies and the need for clearer explanations of how performance ratings are determined. 
To fully respond to the problems raised by the affected parties, these remaining issues will need to be addressed as the regulators implement the new regulations. Chapter 3 discusses the actions that the regulators have taken thus far to improve data reliability, specifically related to HMDA data accuracy. The proposed reforms included measures to address concerns of both bankers and community groups regarding CRA enforcement. However, those measures are not included in the revised regulation. Both the December 1993 and the October 1994 reform proposals would have addressed the community groups’ call for stricter enforcement by clarifying institutions’ CRA obligations and providing that a bank that receives a rating of “substantial noncompliance” would be subject to enforcement actions authorized by the Federal Deposit Insurance Act. However, Justice opined in December 1994 that CRA did not provide the regulators with the legal authority to use such enforcement actions to enforce CRA. The regulators also tried to address the bankers’ interest in positive incentives or protection against protests in the application process in their December 1993 proposal. The regulators attempted to clarify how various CRA ratings would affect decisions on applications filed by institutions for expansion or relocation of their deposit facilities. In particular, absent other information regarding CRA performance, the proposal stated that an “outstanding” rating would be given extra weight in reviewing the application; a “satisfactory” rating would generally be consistent with approval of the application; a “needs to improve” rating would generally be an adverse factor and, absent demonstrated improvement in the institution’s CRA performance or other countervailing factors, would result in denial or conditional approval of the application; and a “substantial noncompliance” rating generally would be so adverse as to result in denial of the application.
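The December 1993 proposal's treatment of CRA ratings in application decisions amounts to a simple decision rule. As a rough sketch only (the outcome strings are our shorthand for the proposal's language, not regulatory text, and the function name is ours):

```python
# Rough sketch of how the December 1993 proposal would have weighed an
# institution's CRA rating in an application decision, absent other
# information regarding CRA performance. Outcome strings are shorthand
# paraphrases, not regulatory language; the proposal was later dropped.

def application_effect(cra_rating: str) -> str:
    """Map a CRA rating to its proposed effect on an expansion application."""
    effects = {
        "outstanding": "extra weight toward approval",
        "satisfactory": "generally consistent with approval",
        "needs to improve": "adverse factor; denial or conditional approval "
                            "absent demonstrated improvement",
        "substantial noncompliance": "generally so adverse as to result in denial",
    }
    try:
        return effects[cra_rating.lower()]
    except KeyError:
        raise ValueError(f"unknown CRA rating: {cra_rating!r}")

print(application_effect("Satisfactory"))  # prints: generally consistent with approval
```

The mapping makes plain why community groups read the proposal as a safe harbor: the top two ratings translate more or less mechanically into approval-favoring treatment.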
However, community group comments submitted to the regulators on this measure strongly protested that it constituted a safe harbor, and it was dropped in the October 1994 proposal. Because the regulators’ proposed measures to resolve the enforcement concerns of both bankers and community groups have been unsuccessful, these concerns will likely continue. Although the regulators have attempted to address the major problems with the implementation of CRA identified by the affected parties, the revised regulations will not satisfy all of the sometimes conflicting concerns of these parties. For example, the revised regulations do not require the level of data reporting or disclosure that has been called for by community groups, nor do they provide for more stringent enforcement actions for CRA. Although many community groups see the revised CRA regulations as an improvement over the current system, many also believe that they do not go far enough in compelling institutions to fulfill their community lending obligations. Some concerns of bankers also will likely continue. Although the burden associated with documenting efforts and processes is to be eliminated, many bankers consider additional data reporting requirements to be burdensome. In addition, the regulators’ success in addressing the problem of examination inconsistency will depend upon how effectively they implement the revised regulations. The regulators have recognized the need to improve examination consistency and plan to improve guidance and training for examiners as they implement the new regulations. The regulators also attempted to strengthen enforcement and introduce a level of certainty into the enforcement process, by clarifying how CRA ratings would be considered in application decisions and for enforcement actions, but the effort was unsuccessful. 
The regulators face significant challenges in successfully implementing the revised Community Reinvestment Act (CRA) regulatory reforms, many of which they have had difficulty addressing in the past. From our case studies, we identified several areas that are key to implementing the revised CRA regulations. To minimize the problems of uncertainty and inconsistency associated with CRA assessments, the regulators will need to (1) provide clear guidance and comprehensive examiner training that address how examiners should conduct performance-based assessments, (2) ensure that data used to assess performance is accurate by increasing the priority and consistency of actions taken to ensure data accuracy, and (3) improve disclosure in public evaluation reports on how examiners determined institutions’ performance ratings. In addition, the regulators acknowledge that the revised regulations will increase examiner responsibilities and may thereby require additional examination techniques and resources. The regulators have previously tried to address the challenges of achieving greater certainty and consistency in compliance examinations. Some of the difficulties that have hindered past efforts will likely continue to challenge the regulators as they implement the regulatory reforms. These difficulties have included the subjectivity inherent in examiners’ interpretation of vague CRA standards, frequent shifts in the regulatory focus of examinations, and differences in the levels of examiners’ CRA compliance evaluation experience and training. In addition, inadequate information and disclosure about institutions’ CRA performance and the basis for their ratings have contributed to concerns about examination consistency. Although the revised CRA regulations are more objective and performance-based, examiners will have to continue to exercise discretion in interpreting the CRA standards. 
Differences in levels of examiner experience will also continue because of the recent hiring of additional CRA examiners by some regulators over the past 2 years. Training will be particularly important during implementation, as all CRA examiners will need comprehensive training in new examination standards and procedures that regulators will be issuing. Moreover, accurate and accessible data will continue to be critical for effective results-based assessments. Finally, examination consistency will be judged by the public through the information on institutions’ performance provided in the evaluation reports. The success of the CRA regulatory reforms will ultimately depend on how effectively these issues are addressed by regulators in implementing the revised regulations. The regulators stated in the revised regulations that they intend to ensure consistency in assessments by providing more guidance in minimizing unnecessary subjectivity, improving examiner training, and increasing interagency coordination. These goals are consistent with the suggestions made by bankers and examiners in our case studies and in public comments to the proposed regulations before they were finalized. However, we also found that the regulators’ previous attempts to ensure consistency by revising their examination guidance and training programs have not achieved consistent implementation. We found from our case studies that inconsistency resulted in part from examiners having had considerable discretion and assessing institutions differently because they focused on different parts of the examination guidance. These differences were particularly evident in examiners’ assessment of factors that involved the most discretion, such as the factors relating to ascertainment of community credit needs and development of marketing and advertising programs. 
To illustrate, one of the more problematic factors has been judging the reasonableness of institutions’ delineation of their service communities. Some bankers have been confused about how they should define their service community because they received conflicting direction from examiners. One banker said that he was told by one examiner not to include loan production offices in the bank’s delineated community, but the next examiner told him that the offices should be included. Other bankers were asked by examiners to change the size of their delineated communities and were confused about whether the service area delineation should be based on definitive geographic boundaries, location of deposit facilities, or where the preponderance of loans were located. Under the revised regulations, examiners will spend less time assessing the reasonableness of an institution’s delineated service community. However, examiners will continue to use discretion in determining whether an institution arbitrarily excludes areas, particularly low- and moderate-income areas. Both bankers and examiners have cited frequent changes in the focus of examinations as a reason for inconsistency. From the time CRA was enacted in 1977 to 1989, there were not many changes in the way CRA was implemented by the regulators. However, during the period 1989 through 1992, the regulators issued several policy statements with new guidance regarding CRA examinations. Among other things, a March 1989 statement focused examinations on the processes and efforts needed by institutions for a well-managed CRA program. Guidelines issued in May 1990 focused on implementation of requirements for public disclosure of CRA ratings, written examiner evaluations of institutions’ CRA performance, and examiner use of a new four-tiered descriptive rating system mandated by the Financial Institutions Reform, Recovery and Enforcement Act (FIRREA). 
A December 1991 policy statement established the need for institutions to analyze the geographic distribution of their lending patterns as a part of their CRA planning process. In March 1992, in an effort to achieve consistency in CRA evaluations, the regulators provided guidance on the inclusion of numerical data in public CRA evaluations, consistent with the Federal Deposit Insurance Corporation Improvement Act (FDICIA). Additional guidance provided in June 1992 shifted the focus of CRA examinations to performance or results, rather than the documentation of efforts. Despite the emphasis on performance over efforts, however, the same 12 assessment factors, which were largely process-oriented with some lending measures mixed in, continued to be used as tools for measuring performance. Although the recent regulatory reforms will once again change the focus of examinations, the reform’s comprehensive review of the problems in CRA examinations and the overall agreement to focus on performance should help to improve consistency and reduce the need for major changes in the near future. Another frequently cited reason for inconsistency in CRA examinations has been insufficient examiner experience and training. Some community groups commented that there is not a sufficient level of expertise within the regulatory community about what constitutes an analysis of a community’s credit needs, what constitutes a loan program that would actually meet credit needs, how time-consuming an analysis would be, and what is adequate performance. Some bankers also commented that examiners need to understand how credit needs can vary based on the characteristics of a specific community, particularly between urban and rural communities, and how institutions may meet community credit needs and their CRA obligations in different ways. The experience levels of the examiners have varied considerably among regulators.
The Federal Reserve Board (FRB) has had a separate core of CRA compliance examiners since 1979, while the Office of Thrift Supervision (OTS), the Federal Deposit Insurance Corporation (FDIC), and the Office of the Comptroller of the Currency (OCC) established such cores in 1989, 1990, and 1993, respectively. From 1992 to 1994, FDIC significantly expanded its compliance examination staff to about 300 examiners. From 1993 to 1994, OCC increased its examiner staff dedicated to compliance examinations from 94 to approximately 170 examiners, while about 110 OCC examiners will continue to perform both safety and soundness and compliance examinations. In addition, from 1991 to 1993, OTS increased its separate compliance examination staff from 82 to 105 compliance examiners. The amount of examiner training also has varied among regulators and by experience level—ranging from none to advanced training specific to CRA and the Home Mortgage Disclosure Act (HMDA). Even within regulators, we were told that training availability differed by region or district and that some regulators supplemented classroom training through newsletters or self-study, computer-based courses. Although most examiners in our case studies said they have had instruction on CRA assessments as part of general entry-level training, many examiners commented that much of their training was on the job. 
Training needs that examiners identified included additional training in the use of HMDA and census data; regular seminars or refresher courses to provide updates and guidance; more advanced training for experienced examiners; more training on fair lending laws and new discrimination detection techniques; more focused training on skills such as data collection, computer analysis, and reading property appraisals; training with a focus on lenders’ communities, safety and soundness issues, and new examination techniques; regular conferences or seminars where new and experienced examiners from different agencies can exchange information on examination techniques and experiences; and external training that includes the perspectives of lenders and community groups. The regulators have acknowledged that training is important to the success of the reforms and have indicated their intention to work together to improve examiner training. Some examiners told us that they would welcome more interagency training, which could help improve consistency among regulators. Past attempts to develop interagency training programs have had mixed success. Some regulators explained that interagency training did not always meet their different training needs because some regulators had examiners strictly devoted to CRA compliance examinations, while others had examiners perform both safety and soundness and compliance examinations. Recently, FRB’s interagency training has included the specialized course on HMDA data analysis. Generally, however, each of the regulators has developed its own core CRA examination training programs. The magnitude of the changes in the CRA reforms, as well as the resulting increase in examiner responsibilities, created the need for clear guidance and comprehensive training for all examiners performing CRA examinations and thereby implementing the revised CRA regulations. Consistency in training would help to improve examination consistency among all regulators.
Under the revised regulations, examiners will have additional responsibilities in areas such as analyzing performance information. Further, examiners will have to make judgments relating to various types of community development lending and investment activities. Community groups have said these areas need better examiner understanding. The recent shift to performance-based examinations should increase the reliance on quantified data to assess institutions’ performance. Inaccurate data used by various affected parties may lead them to inappropriate conclusions about an institution’s CRA performance. Bankers, regulators, and community groups interviewed in our case studies identified concerns about data quality that limited the usefulness of some of the data collected. Some of the regulators have also acknowledged data quality problems, particularly with HMDA data, and have taken steps to improve the accuracy of HMDA data. However, while examination guidelines include procedures to assess HMDA data accuracy, they do not address the quality of other kinds of data used to assess performance, such as other lending data or financial statistics. Moreover, the regulators do not have a uniform policy on what actions should be taken against institutions with poor data quality, and they have not been consistent in the actions they have taken to date. Bank management is primarily responsible for ensuring that data provided by the institution are accurate, and examiners are responsible for verifying data accuracy during examinations. Bankers and examiners in our case studies commented that some data problems are due to unclear reporting requirements, difficulties in determining correct geographic codes, incomplete data, and human and technical errors. Among the four regulators, FRB has done the most detailed analysis of HMDA data quality.
From March 1993 to February 1994, the Federal Reserve District Banks participated in a survey to determine the quality of HMDA data submitted by state member banks for the year 1992 by cross-checking each institution’s HMDA Loan Application Register with its 1992 HMDA data submission. This survey confirmed FRB’s long-standing concerns about HMDA data accuracy during this time period. As a result, FRB required one out of every five banks to resubmit its HMDA data for 1992. The most significant errors found in these examinations involved the loan applicant’s reported income. Over half of all income-related errors were the result of banks reporting income figures from unverified application information. The other half consisted mostly of clerical errors. FRB staff said these high error rates occurred because, in most institutions, HMDA reporting is done by insufficiently trained clerks, with little review from more senior management. FRB amended the HMDA regulation (Regulation C) in December 1994 to help improve HMDA data quality by clarifying and simplifying the reporting requirements. OTS officials said they have also taken action to address HMDA data quality problems. For 1992 data, the directors of OTS regional offices sent letters of reprimand to institutions with the worst data quality. For 1993 data, the Financial Reporting Division sent detailed logs of reporting accuracy and timeliness to the regional compliance managers for use in examinations. In addition, an OTS official noted that the regulators’ interagency examination council, through its HMDA Subcommittee, has made recommendations to improve the examination of data quality, which are likely to be reflected in forthcoming revised HMDA examination procedures. Poor HMDA data quality was mentioned in some of the examination reports from our case studies. Some of these institutions were required to resubmit their data, while others were not.
FRB officials stated that they generally require institutions with a 10 percent or greater error rate to resubmit their HMDA data. Other regulators did not have a specific policy on when resubmissions would be required. The FRB also recently announced that the institutions it supervises will be subject to the same monitoring and enforcement rules that are currently in place for other types of reports, such as Call Reports. Similarly, OTS stated in its comments to this report that it recently adopted guidelines for the assessment of civil money penalties against institutions that submit late or inaccurate HMDA data. Only FDIC has actually penalized institutions for not submitting their HMDA data on time. While these types of actions taken by regulators have helped to increase HMDA data reliability for the affected institutions, they do not ensure uniform or consistent reliability across the industry. Current compliance examination procedures include steps to check the accuracy and completeness of HMDA data. Similar data quality checks for any other data used to assess performance would, if effectively implemented, help to ensure that data used in assessments are accurate. For example, procedures could be established that require examiners to check for data deficiencies during examinations. However, some examiners told us they did not always have time to complete the required procedures and such additional procedures may increase examination time. Notwithstanding the possible issue of timeliness, if data accuracy is not checked by the regulators during examinations, it may not be viewed as important by the institutions. The credibility of the revised CRA examinations will also depend upon the explanations provided in the public evaluation reports about how ratings are determined. Community groups cited their perception that examiners were inconsistent and that the bases for ratings were unclear from the information provided in past evaluation reports. 
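FRB’s resubmission rule amounts to a simple error-rate test. The sketch below is purely illustrative: the record layout, sample data, and function names are hypothetical, and only the 10-percent figure comes from the FRB officials’ statement.

```python
# Illustrative sketch of the resubmission threshold FRB officials described.
# Record layout and data are hypothetical; only the 10-percent figure is
# drawn from the officials' statement.

def hmda_error_rate(records):
    """Fraction of records whose reported value disagrees with the
    value verified during the examination."""
    errors = sum(1 for reported, verified in records if reported != verified)
    return errors / len(records)

def requires_resubmission(records, threshold=0.10):
    """Per FRB officials, resubmission is generally required at a
    10 percent or greater error rate."""
    return hmda_error_rate(records) >= threshold

# Hypothetical register: 3 of 20 records misreport applicant income.
sample = [(100, 100)] * 17 + [(100, 95)] * 3
print(hmda_error_rate(sample))        # 0.15
print(requires_resubmission(sample))  # True
```

As the chapter notes, the other regulators had no specific resubmission policy, so any such threshold check would be agency-specific.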
They emphasized the importance of the public evaluation reports, because these reports are the groups’ primary source of information about institutions’ CRA performance, and they viewed transparency about institutions’ lending performance as the best form of regulation. More specifically, they cited the need to provide more information in the public evaluation reports about institutions’ actual lending performance including the data used to support conclusions and clear explanations about how an institution’s performance was assessed. Bankers have also stated that they do not always understand the bases for their ratings. The regulators have acknowledged that bankers and the public will learn what is expected under the regulations and judge whether examination consistency has improved on the basis of the rationale provided by the regulators on how the revised regulatory standards have been applied to determine institutions’ ratings. Although the revised CRA regulations included specific instructions to institutions on what information must be included in the public CRA files, they did not address the contents of the public evaluation reports. We recognize that the regulators have taken steps to include more performance data in public evaluation reports, as required by FDICIA. However, we believe the regulators can better demonstrate their move towards performance-based CRA examinations by designing and submitting public CRA reports that establish a basis for the given evaluations supported by objective data analysis and indicators. Some regulatory officials have indicated that they would like to develop a uniform interagency report format, but past interagency efforts to develop uniform evaluation reports have not succeeded. The regulators have indicated in the revised regulations that examiners will relieve some of the burden on institutions by assuming greater responsibility for areas such as analyzing data collected. 
Even without the additional responsibilities under the current system, some of the regulators have had difficulty meeting their goal of conducting CRA examinations for all institutions at least once every 2 years. In addition, some examiners in our case studies told us that they have not had sufficient time to complete all of their responsibilities during examinations. They said that this generally resulted in one of three outcomes: (1) the time needed to complete examinations was lengthened; (2) the institutions were asked to provide more information or analyses; or (3) some activities, such as making community contacts to assess community needs, were not completed. The regulators have varied in the resources devoted to conducting CRA examinations, as shown in chapter 1, table 1.2. Until 1993, FRB had the largest CRA examination force and the fewest institutions to examine. It has generally been able to examine all of its institutions within a 2-year time frame. FDIC has the largest number of institutions and increased its CRA examination force by 75 percent from 1992 to 1993. FDIC, OTS, and OCC have not been able to examine all of their institutions within a 2-year time frame. OCC does not anticipate beginning a 2-year examination schedule until 1997, when it plans to have a sufficient number of trained examiners. Also, examiners told us that they did not always have time to complete all required procedures or analyses. Some examiners mentioned that, for various reasons, making community contacts was not always accomplished. Examiners were generally encouraged to make contacts during each examination but said they often relied on previously gathered information. Under the revised regulations, the examiners’ responsibilities for consulting community sources will be increased. Another area in which examiners will be expected to do more is the analysis of institutions’ lending performance data.
Our case studies indicated that responsibilities in this area were not always clear and were sometimes shifted back and forth between institutions and examiners. A related resource issue involves implementation of recently passed legislation, the Riegle-Neal Interstate Banking and Branching Efficiency Act of 1994, which may involve changes to CRA examinations for institutions with interstate branches. The act requires that an interstate institution’s public CRA evaluation report include a state-by-state summary evaluation of the CRA performance of its branches in each state. The report is also to include an evaluation of a bank’s performance within a multistate metropolitan area where the bank has branches in two or more states within the area. The regulators are not certain whether they would be required to review all of an institution’s branches at the same time to complete the CRA examination. If so, the resource requirements could be problematic for examining large institutions with many branches in many states. The regulators have not yet fully implemented the provisions of the act but said they are considering their potential implications as they develop examination procedures for the revised CRA regulations. To address their CRA responsibilities, the regulators, for the most part, have increased the number of CRA compliance examiners in the last 2 years. They also have fewer institutions to examine due to mergers, acquisitions, failures, or other industry consolidation. Moreover, some of the regulators have begun testing new procedures to streamline CRA examinations and reduce examination time. While the revised CRA regulations have a goal to reduce bankers’ regulatory burden, they will also clearly increase examiners’ responsibilities. Currently, it is difficult to determine exactly what resources will be needed and how the regulators’ current resources will change over the next 2 fiscal years.
Therefore, the regulators will need to closely monitor implementation of the revised regulations and determine if further actions are needed to ensure that examiners can meet their responsibilities within the appropriate time frames. If regulatory efforts are not successful, examiners may be faced with the situation of not performing necessary data analyses or shifting the responsibility for conducting such analyses back to institutions. Such examiner behavior could reduce CRA examination quality or increase institutions’ regulatory burden. FDIC, OCC, and OTS officials believe that efficiencies will be gained by both bankers and regulators in implementing the revised regulation through, among other things, the elimination of process oriented factors and use of more sophisticated software in examinations. They believe that such efficiencies, in the long run, may actually reduce the overall time needed for CRA examinations. FRB, on the other hand, suggested that the regulators’ costs may increase in assessing CRA compliance under the revised regulations. The success of the newly adopted CRA reforms will likely be judged largely by whether the regulators can address lingering concerns about the certainty and consistency of CRA examinations. The regulators have had difficulties in meeting these challenges in the past. Some of the challenges will likely continue as the regulators implement the revised CRA regulations—including examiners’ use of discretion, differences in examiner experience and training, data quality, ratings justifications provided in public evaluation reports, and regulatory resource limitations. The regulators have indicated in the revised regulations that they intend to work together on improving examination guidance and training to ensure that examiners consistently interpret and apply the new CRA standards. In addition, examiners cannot adequately conduct performance-based evaluations without accurate data. 
Long-standing concerns about data quality will likely be reduced only if the regulators identify inadequate data and ensure that institutions correct it for future CRA examinations. Examination consistency will ultimately be judged by the information and explanations provided in public evaluation reports on how performance ratings have been determined. Finally, by closely monitoring their resource needs and their ability to accommodate their increased CRA responsibilities, the regulators may be better able to ensure that the requirements of the revised CRA regulations will be met. We recommend that the heads of FRB, FDIC, OCC, and OTS work together to take the following actions to better ensure the effective implementation of the revised regulations and consistency of CRA examinations: Develop or revise regulatory guidance and training programs by clarifying how examiners should interpret the performance standards, and require that all examiners receive the comprehensive training necessary to implement the new regulations. Improve data accuracy by (1) requiring examiners to assess the accuracy of data used in performance evaluations and (2) developing a uniform policy on what actions will be taken against institutions with poor data quality. Improve disclosures in publicly available evaluation reports by clearly presenting performance information and the rationale used to assess institutions’ performance against the revised performance-based examination standards. Assess agency resources and examination techniques to determine what resources and techniques are needed to meet the requirements of the revised CRA regulations. Generally, agency officials agreed with our report message and recommendations to help ensure effective implementation of the revised CRA regulations.
FDIC commented that while it agrees that examination consistency is a major priority for the regulators, which it is pursuing through enhanced interagency training, examiner judgment is still critical to implementation of the revised regulations as each community and each institution has unique characteristics that must be considered. OTS commented that although the revised regulations still call for examiner judgment, they provide for reasoned conclusions to be drawn from objective data under a clearer set of performance standards. FRB acknowledged that one of the biggest challenges faced by the agencies in the implementation of CRA is the ongoing challenge to achieve the appropriate balance between desired certainty and the need for flexibility in implementation to reflect unique community banking circumstances. With regard to resource needs for the regulators to implement the revised regulations, FDIC, OCC, and OTS suggested that the revised regulations should reduce the overall time devoted to CRA evaluations in the long run due to the elimination of process oriented factors, coupled with use of enhanced, more sophisticated software that they are currently introducing. OTS also suggested that the small bank and strategic plan options may further reduce examination resource requirements. FRB, on the other hand, has publicly recognized in its impact analysis of the revised regulations that implementation of the regulations may increase regulators’ costs in assessing CRA compliance. FRB and OTS responded to our recommendations by describing what they are doing or plan to do. Some of the actions include revised guidance and initiation of training programs (which covers interagency training begun in September 1995), measures to improve data accuracy, better supported conclusions in public CRA evaluations, and monitoring of compliance examination resources. In its efforts to improve data accuracy, FRB commented that it is establishing enforcement mechanisms. 
In addition, FDIC and OCC acknowledged that interagency training and other efforts would further regulators’ plans to improve disclosures in public CRA evaluation reports by developing uniform and accurate CRA performance evaluations and emphasizing the need to fully support related conclusions. These actions, if effectively implemented, should be helpful in enabling the regulators to fulfill the intent of our recommendations. Many public and private sector efforts have reduced various barriers to community lending in low- and moderate-income areas. Through individual activities and cooperative efforts, institutions and community groups have used the flexibility of the CRA statute and addressed important cost-related barriers and market impediments to enhance community lending opportunities. While we did not assess individual initiatives as a part of this review, we present examples that bankers, regulators, and community groups we contacted considered to be successful techniques in helping to lower costs and risks for institutions participating in community development lending strategies. The secondary mortgage markets have also taken steps to broaden opportunities for institutions to sell community loans on those markets. In addition to bankers calling for certain compliance incentives, local, state, and federal governments have provided incentives for lending in low- and moderate-income communities. The federal bank regulators have also been able to play a key role in facilitating institutions’ community lending activities by providing forums for educating bankers and disseminating information about successful initiatives. Each of the regulators has established a community affairs program to encourage and promote community lending and investment initiatives among bankers. As they further develop these programs and better coordinate their efforts, the regulators could enhance their role in this respect. 
Comments from some of the bankers we interviewed confirmed the contention of some industry observers that private sector efforts to meet the credit needs of low- and moderate-income communities may be limited by the perception that such lending is likely to entail relatively high credit risk and relatively small potential returns. Many bankers tended to believe that the profits of such activities are lowered by relatively high credit risk—that is, the risk of financial loss due to the possibility of borrower default—and high transaction costs. The transaction costs for community lending may be higher than for other commercial or consumer lending because of, among other factors, additional time and effort necessary to ascertain the creditworthiness of the borrower or the related property in certain low- or moderate-income areas. Another significant barrier faced by bankers is the opportunity cost of community lending. The primary objective of a bank is to maximize profits for its shareholders. To the extent that community lending is believed to be inconsistent with that objective, community lending expenditures represent lost opportunities to achieve greater returns through more profitable activities. Closely aligned with the cost factors is the issue of safety and soundness policies and regulations, which some bankers we interviewed believe are inherently in conflict with community lending because of the perceived greater risks involved in such lending. As evidenced by our case studies, a matter of concern frequently mentioned by bankers about community lending is the issue of high credit risk, which represents one element in the cost of lending. When a banker extends a loan, some possibility exists that the borrower will not repay the loan or will delay payment. Bankers making a large number of loans expect a small percentage to be nonperforming. 
To cover expected losses, they may structure their loan rates accordingly and also voluntarily set aside loan loss reserves. The concern about credit risk is understandable in that community lending is made to low- and moderate-income borrowers who may not meet normal creditworthiness standards such as the debt-to-income ratio. However, according to a 1993 Federal Reserve report to Congress, available evidence was insufficient to determine the extent to which credit risk is associated with different income, racial, or ethnic characteristics across neighborhoods. A significant cost element in any type of lending by an institution is the cost of originating, processing, and servicing loans, also known as transaction costs. Transaction costs rise and fall with the volume of lending. They include, among other costs, expenses related to evaluating an applicant’s credit history and ability to pay off the debt as scheduled; obtaining appraisals and surveys of properties offered as collateral; and processing loan payments, including monitoring borrowers who have fallen behind on their payments. The amount of time and effort expended on these activities may vary considerably from loan to loan, depending upon the type and complexity of the loan and characteristics of the borrower. Generally, the larger the loan amount and the smaller the transaction costs, the more profitable the loan for the institution. More specifically, since transaction costs do not usually rise in proportion to the loan amount, larger loans are generally more profitable. According to bankers we interviewed, community loans are less profitable for institutions than many other types of loans because the loan amounts are relatively low, while loan transaction costs are relatively high. High transaction costs may be due to greater time and care required to qualify borrowers for loans by gathering additional information to help better identify the lender’s credit risk.
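The cost relationship described above can be illustrated with a small sketch. The profit formula, loan sizes, rate spread, and transaction costs below are hypothetical, chosen only to show why roughly fixed transaction costs weigh more heavily on small loans; they are not drawn from the report.

```python
# Illustrative sketch: transaction costs that do not rise in proportion to
# the loan amount make small community loans less profitable than large ones.
# All figures and the simplified profit formula are hypothetical.

def loan_profit(principal, rate_spread, transaction_cost, years):
    """Rough profit: interest-spread income over the loan's life minus a
    one-time transaction cost (origination, processing, servicing setup)."""
    return principal * rate_spread * years - transaction_cost

# Same spread, nearly equal transaction costs, very different loan sizes.
small = loan_profit(principal=25_000, rate_spread=0.02,
                    transaction_cost=1_500, years=5)
large = loan_profit(principal=250_000, rate_spread=0.02,
                    transaction_cost=2_000, years=5)

# The transaction cost consumes most of the small loan's spread income
# but only a small fraction of the large loan's.
print(f"small loan profit: ${small:,.0f}")
print(f"large loan profit: ${large:,.0f}")
```

The transaction costs here are nearly equal for both loans, mirroring the observation that such costs do not rise in proportion to the loan amount.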
According to a banker from a medium-sized Texas bank, loans to low- and moderate-income individuals are not profitable because their small size yields a low return relative to the bank’s fixed costs. One of the perceived issues surrounding CRA is whether community lending reduces an institution’s safety and soundness. There are those who believe that CRA regulations encourage “high loan-to-value ratio” mortgage loans in local communities, which could expose institutions to greater risk. According to some bankers we interviewed, community lending has added costs resulting from loss reserves required by safety and soundness examiners. Both bankers and community groups have said that safety and soundness examiners do not understand many of the techniques institutions use to reduce the credit risk of community loans—such as the “layering of loans with state and city financing.” As a result, they require institutions to set aside loan loss reserves that bankers, community groups, and even compliance examiners may view as unnecessary. Two examples illustrate these perceived safety and soundness concerns. The Chairman of the California League of Savings Institutions testified at the public CRA hearings that members of the League support strong capital regulations, but Congress and the regulatory agencies must recognize that current risk-based capital regulations have an unavoidable impact on an institution’s ability to fulfill community needs. The treatment of (capital requirements for) rehabilitation loans, apartment loans, and equity participations makes them too “expensive” in capital costs for many institutions. During CRA hearings in 1993, an official of a large nationwide bank stated that over time her bank has learned that community development lending is not unsafe and that the bank’s community development lending portfolios perform as well as or better than its general market loans.
The banker pointed out that community development loans look different from so-called traditional loans in that the sources of equity and debt-to-income or loan-to-value ratios are different, and the appraised value is often no measure of real value. Recognizing that these variables do not make for unsafe community development loans, the banker noted that such loans are viewed adversely from a safety and soundness perspective, and, thus, are more heavily reserved against, more heavily monitored, and, at best, more expensive to make. During our review, we frequently heard concerns or complaints from bankers about possible or perceived safety and soundness risks in the implementation of CRA. We did not independently verify the accuracy of these claims. Various individual and cooperative efforts among institutions, community groups, and others have provided the means to lower credit risk and reduce transaction costs in community lending. Although the lack of specific performance criteria in CRA has complicated compliance and enforcement of the law, it has allowed institutions flexibility in designing and implementing programs to better serve the credit needs in low- and moderate-income areas. Bankers taking a proactive approach have used the law’s flexibility to create innovative programs and strategies that allow them to expand lending opportunities and increase or cultivate new markets. Also, those bankers who gain experience or develop expertise in community lending and make it a part of their normal business operations find that CRA obligations need not be perceived as a regulatory burden. Many cooperative ventures have also permitted community groups to play an important role in reducing barriers to community lending. Bankers who are committed to serving the credit needs of their communities have taken advantage of CRA’s flexibility and carved out ways to make loans that other bankers might not find attractive. 
Regulators have found, and our case studies revealed, that a more effective CRA program was generally evident when bank management exhibited certain types of proactive approaches to CRA implementation. These bankers took action to get their board members involved, reached out to members of the community to determine the needs of their communities, developed marketing and advertising strategies, and established sound CRA plans designed to address community needs. The types of initiatives that bankers implemented and that regulators associated with effective CRA performance included education and counseling seminars, community outreach efforts, flexible underwriting standards or policies, participation in government-sponsored lending programs, and implementation of special programs offering unique products or using specialized staff to better meet the needs of customers. Also, some banking associations have developed programs to inform bankers of these different types of initiatives. In some cases, the greater financial and staff resources of larger institutions have allowed them to create designated CRA departments that can devote time to developing and promoting various unique product lines to attract consumers. However, some smaller or rural bankers who serve predominantly low- and moderate-income areas, by necessity, have succeeded in meeting CRA goals during their normal course of business with customers. Considering their clientele and the special needs of many of their customers, these bankers have found that to make a profit and satisfy community needs, it was necessary for them to create specialized programs and develop flexible policies. Some examples of the types of programs or initiatives that bankers have implemented to meet their CRA goals are presented in table 4.1. Bankers use various mechanisms to lower credit risk and transaction costs on community loans.
They have found that losses can be reduced by screening out the riskiest applicants and by supporting successful applicants before and after loans are extended. Two approaches that bankers use to help keep credit losses on community development lending to a minimum include (1) screening, counseling, and monitoring borrowers and (2) risk-sharing arrangements. Cooperative efforts also help to share costs, reducing the burden on individual bankers. Based on roundtable discussions, many bankers agree that thorough applicant screening, applicant education and counseling prior to loan extension, and diligent monitoring of borrowers after loans are granted help lower the risks of lending in low-income areas and to low-income borrowers. Bankers and other organizations that support lending use various techniques to screen potential borrowers, including home buyer and small business education, credit counseling, and extensive direct contact with loan officers. Many institutions provide technical assistance and grants to nonprofit housing counseling groups and community groups that help with loan packaging. The groups screen potential borrowers, help assemble documentation, and make sure that applicants meet the institution’s underwriting criteria. They also help market and promote loan programs and minimize institution processing costs. These activities also allow the bankers to become familiar with the applicants and their communities. Lending consortia may be either formal or informal, for profit or nonprofit. Consortia often consist of institutions that pool lending money or collect equity stakes for low- and moderate-income housing and community development. The types of participants, bankers, and funding involved vary from program to program. In all cases, consortia allow institutions to spread risk and transaction costs to avoid high concentrations of credit risk in individual projects or in limited geographic areas.
Risk sharing allows institutions the opportunity to expand lending, through various means, to nontraditional borrowers whose risk characteristics are difficult to quantify or assess. For example, consortia can save member institutions time and money by gathering information and developing expertise about public and private subsidy programs, the past performance of real estate developers and property management companies, and the characteristics of targeted communities and local community groups. They can attract staff who are knowledgeable about matters such as underwriting and property appraisal. Loan consortia also provide an opportunity for smaller institutions to participate more in community development lending, because such institutions, on their own, are less able to bear the cost and develop the expertise for community development lending. Several of the bankers included in our review said that they have participated in various consortia or multibank activities in meeting their CRA goals. Examples of some of the consortia organizations named by institutions in our sample review are included in table 4.2. Regulators and bankers have taken steps to address potential conflicts with regulations designed to help ensure safe and sound operations. In January 1994, the Federal Deposit Insurance Corporation (FDIC) adopted changes to its risk-based capital standards that were intended to facilitate prudent lending for multifamily housing purposes. Similar action was also taken by the other federal regulators. The risk-based capital final rule lowered from 100 percent to 50 percent the “risk weight” accorded loans secured by multifamily residential properties that meet certain criteria, as well as securities collateralized by such loans. The effect of this ruling is that an institution making or acquiring these loans or securities can hold less capital than required in the past under the risk-based capital standards.
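A rough sketch of the capital effect of that ruling follows. It assumes the standard 8 percent minimum total risk-based capital ratio, and the loan amount is hypothetical; the report itself gives only the risk-weight change.

```python
# Sketch of the 1994 risk-weight change for qualifying multifamily loans.
# Assumes the standard 8 percent minimum total risk-based capital ratio;
# the loan amount is hypothetical.

MIN_CAPITAL_RATIO = 0.08  # minimum capital as a share of risk-weighted assets

def required_capital(loan_amount, risk_weight):
    """Capital an institution must hold against a loan under
    risk-based capital standards."""
    return loan_amount * risk_weight * MIN_CAPITAL_RATIO

loan = 1_000_000
before = required_capital(loan, risk_weight=1.00)  # old 100 percent weight
after = required_capital(loan, risk_weight=0.50)   # new 50 percent weight

print(f"capital before: ${before:,.0f}")  # $80,000
print(f"capital after:  ${after:,.0f}")   # $40,000
```

Halving the risk weight halves the capital the institution must hold against the loan, which is what makes qualifying multifamily loans cheaper to carry.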
However, to be eligible for the lower risk weight, the loans must satisfy certain loan-to-value and debt service coverage requirements. To ensure that appropriate and affordable financing can be provided for community development projects, institutions have often found it necessary to depart from traditional standards of credit extension. We found bankers who created ways to make secure, profitable loans while sharing costs and risks through their own individual initiatives or by employing such techniques as government loan guarantees, interest rate subsidies, or blended-rate loans with participation from public and private lenders. Examples of individual policy initiatives used by institutions are included in table 4.3. During our review, we often heard complaints that secondary market standards made it difficult for institutions to sell some of their more nontraditional loans that did not meet normal underwriting standards. Similar complaints have been made at focus group meetings sponsored by secondary market entities. The secondary market provides the mechanism for existing loans, marketable securities, and other assets to be sold to investors, either directly or through an intermediary. More specifically, the secondary mortgage market represents the national market where residential mortgages are assembled into pools and sold to investors. This market, which originated with such corporations as Fannie Mae and Freddie Mac, supplies additional liquidity to mortgage lenders. The single most important contribution of the secondary mortgage market is the creation of a national market for resale of residential mortgages. This ensures that mortgage originators, regardless of where they are located, have access to pools of capital managed by pension funds, insurance companies, and other institutional buyers of mortgage-backed securities. 
Home buyers are assured an adequate supply of mortgage financing as the secondary market sales provide lending institutions with a constant source of new funds to make more home loans. Banks receive CRA credit for originating loans to particular low-income communities or individuals whether they sell the loans in the secondary market or hold them in their portfolio. Our interviews with bankers disclosed several concerns pertaining to the secondary market underwriting standards that some bankers believed pose a barrier and tend to restrict lending in low- and moderate-income areas. One primary concern was that institutions do not want to deviate from the secondary market standards because they want to be able to sell all their loans to the secondary markets. As one of the regulatory officials noted, if the secondary markets will not accept a loan, an institution is forced to keep the loan in its portfolio and assume the market and interest rate risk for the full life of the loan. Therefore, some institutions look for loans that do not have any nonconforming provisions or any questions about collateral. Other concerns raised included the following: A thrift regulatory official noted that one of the secondary market standards that can reduce flexibility is the requirement that no more than 36 percent of the borrower’s salary can be used for loan payments. In an area with high housing costs, such as the San Francisco Bay area, these standards are very limiting. He said many people already pay 40 to 50 percent of their salary for rent and are probably able to continue to pay a high percentage in house payments. A bank management official of a large urban Chicago thrift said that Fannie Mae formulas or ratios represent the industry standard; however, he noted that he was not aware of empirical evidence that an applicant who does not meet these ratios cannot service the debt. 
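The 36-percent standard the thrift official described can be sketched as a simple check; the income and payment figures below are hypothetical.

```python
# Minimal sketch of the secondary market debt-to-income standard cited
# above: loan payments may not exceed 36 percent of the borrower's salary.
# All figures are hypothetical.

def meets_dti_standard(monthly_payment, monthly_income, limit=0.36):
    """True if the payment fits within the debt-to-income limit."""
    return monthly_payment <= monthly_income * limit

income = 4_000
print(meets_dti_standard(1_400, income))  # 35% of income: conforms
print(meets_dti_standard(1_800, income))  # 45% of income: nonconforming
```

As the official observed, a borrower already paying 45 percent of income in rent would fail this test even with a demonstrated history of carrying that payment.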
Some of the players in the secondary market have begun to recognize the problems associated with the underwriting standards and have initiatives under way that are intended to help alleviate some of these problems. We did not assess the effect of these initiatives as part of this review. Fannie Mae and Freddie Mac have both announced initiatives in recent years to purchase loans with underwriting guidelines or payment terms that do not meet their more traditional loan purchase programs. Congress has encouraged these corporations to support low- and moderate-income loans by setting specific volume goals over a 2-year period, which began in 1993. For example, for all the loans they purchase, 30 percent of the units financed must be for low- and moderate-income borrowers, 30 percent must be located in central cities, and $3.5 billion ($1.5 billion for Freddie Mac, $2 billion for Fannie Mae) must finance loans to low-income and very low-income home buyers. In announcing its initiatives in February 1994, Freddie Mac cited the potential effect of these initiatives on mortgage lending in inner cities as well as its efforts to broadly redefine creditworthiness. Officials of Freddie Mac stressed the fact that the clarifications do not represent a lowering of its standards but an effort to dispel misconceptions among originators of mortgage loans. Through meetings with lenders, appraisers, mortgage insurers, and others, the corporation was able to identify more than a dozen underwriting issues that were causing originators to needlessly deny credit in the belief that some particular factor would make a loan ineligible for a Freddie Mac pool. For example, numerous people thought that Freddie Mac did not want to purchase any loans extended to borrowers with one or two 30- or 60-day delinquencies in their credit histories.
While recognizing that such a history could indicate a bad risk, Freddie Mac officials said that they now tell lenders that they may focus on the borrower’s history of housing payments as well as consider explanations for the delinquencies. Acknowledging that the initiatives could reduce the quality of loans in its pools, Freddie Mac officials plan to vigilantly monitor the performance of the loans. In March 1994, Fannie Mae announced its $1 trillion plan to help finance affordable housing loans by the year 2000. Significant among the 11 initiatives in the program were 2—1 involving the clarification of guidelines and the other testing an approach for underwriting loans—that were intended to help break down the barriers pertaining to secondary market criteria. In clarifying the guidance, Fannie Mae officials tried to ensure that the underwriting guidelines are clear and flexible and are applied equally to everyone. To ensure appropriate use by lenders, Fannie Mae plans to (1) maintain a constant dialogue with mortgage lenders to identify the loan characteristics and underwriting procedures it thinks need clarification; (2) develop a comprehensive training program for the mortgage industry; (3) develop easy-to-use reference tools for underwriters, including on-line access to Fannie Mae guidelines; (4) establish regional hotlines that lenders can call for instant guidance; (5) establish an internal Fannie Mae loan review board to review loans initially rejected by its underwriters; and (6) make an automated underwriting system available to lenders that will use artificial intelligence to analyze loans, ensure consistency, and free up time for underwriters to work on complex applications.
Additionally, through a separate initiative, Fannie Mae announced its commitment of $5 billion to conduct experiments in new underwriting approaches designed to probe and test ways to underwrite loans to make credit more accessible to minorities, low- and moderate-income families, central city and rural residents, and people with special housing needs. In line with the administration’s emphasis on reforming CRA and improving community development, several governmental agencies or entities have initiated activities, or revised guidance governing ongoing programs, to enhance community investment in low- and moderate-income areas. Many of these program activities are geared towards rebuilding communities within inner cities and small, rural areas by providing affordable housing and facilitating small business lending. Some local governments have sought to encourage community lending in underserved areas by recognizing and rewarding institutions that demonstrate performance and commitment in helping to meet the needs of residents in these areas. Such rewards might result in better service delivery through branch expansions or increased investments or deposits. Some state governments require commitments to community reinvestments before out-of-state institutions can operate in their localities. They premise entry on a standard of net new benefits to the state, such as increased in-state lending and investments. A California County Board of Supervisors approved a community reinvestment policy that would rank institutions on the basis of their performance in making loans to minorities and in depressed neighborhoods. Those ranked in the top half would then reap the benefits of the county’s investment business. To encourage community reinvestment and development, some municipalities condition their placement of deposits upon the institution making specific types of loans. 
For example, in Chicago, institutions must file reports on their residential and commercial lending in the Chicago metropolitan area before they can qualify for the city’s deposits. Similarly, during our case studies, we learned that the city of Boston has a Linkage Program that ties deposit of city funds to an institution’s CRA rating. A Boston national bank branch located in a depressed area of the city was rewarded with a $5 million deposit by the city. States also encourage community development through deposit subsidies. For example, Iowa’s State Treasurer’s Office offers several “linked deposit” programs that support below-market rate and small business and agricultural loans. Below-market rate deposits are placed with institutions that, in turn, use them to match fund lower rate loans, with a spread over the deposit rate. This approach provides two unique, highly targeted programs through participating Iowa institutions. One is targeted for minority- and women-owned small businesses and provides below-market rate financing for a variety of business purposes. The other is focused on helping diversify Iowa’s rural economy and increasing employment. It offers linked deposits as incentives for institutions to fund below-market rate loans for horticultural and agricultural projects that involve products not typically found on Iowa farms. Federal efforts to encourage community development lending have included government subsidized programs, changes in regulatory requirements, and legislation promoting investment incentives, some of which correspond with the suggested incentives offered by bankers. Government subsidies, such as those provided by Small Business Administration (SBA), can significantly affect the profitability of lending by making it easier for the borrower to qualify for a loan or, through a guarantee, cushion anticipated losses from a loan, allowing the lender to set aside a smaller amount of funds against this contingency. 
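The cushioning effect of a loan guarantee mentioned above can be sketched with hypothetical figures; the loan amount and the guarantee share below are illustrative only, not drawn from the report.

```python
# Illustrative sketch of how a loan guarantee cushions a lender's
# potential default loss. The loan amount and guarantee share are
# hypothetical, used only for illustration.

def lender_exposure(loan_amount, guaranteed_share):
    """Portion of the loan the lender stands to lose on default after
    the guarantor covers the guaranteed share."""
    return loan_amount * (1 - guaranteed_share)

loan = 100_000
without_guarantee = lender_exposure(loan, guaranteed_share=0.0)
with_guarantee = lender_exposure(loan, guaranteed_share=0.85)

print(f"exposure without guarantee: ${without_guarantee:,.0f}")  # $100,000
print(f"exposure with guarantee:    ${with_guarantee:,.0f}")     # $15,000
```

A smaller worst-case loss lets the lender set aside correspondingly smaller funds against the contingency, as the report notes.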
In accordance with the Financial Institutions Reform, Recovery, and Enforcement Act (FIRREA), the Federal Housing Finance Board (FHFB)—which regulates the credit advance (loan) activities of the Federal Home Loan Banks (FHLB)—was required to develop regulations establishing standards of community investment or service for member institutions to maintain continued access to long-term FHLB advances. Through the Bank Enterprise Act (BEA, P.L. 102-242) and the Riegle Community Development and Regulatory Improvement Act of 1994 (P.L. 103-325), Congress took action to increase financial services provided to underserved and distressed areas and to low- and moderate-income individuals. SBA is an independent federal agency chartered in 1953 to provide financial assistance to small businesses. SBA makes direct loans to borrowers who are unable to obtain conventional financing, participates in loans originated by financial institutions, and also guarantees loans (typically, a guarantee of 85 percent of a small business loan) made by institutions. This agency has efforts under way to foster small business community lending through various pilot programs or initiatives. SBA has initiated a pilot program in several southwestern states to test a new short-form loan application, which should benefit both bankers and borrowers. Under this pilot, for loans under $50,000, bankers must now provide SBA with only a one-page document. Loans between $50,000 and $100,000 require the one-page summary document plus the applicant’s business tax returns for the previous 3 years, a personal financial statement, and the institution’s internal credit memorandum. A national bank official said the shorter form decreases the time it takes to finalize the loan from as long as 6 weeks to 1 or 2 weeks.
Also, he said the shorter form makes borrowers feel more comfortable about the application process and bankers more willing to make smaller loans because the previous paperwork made small loans unprofitable. The Rhode Island Area SBA Program has $13.1 million in initial commitments for business loans of up to $50,000 with maturities of 1 to 7 years. The program was developed by SBA’s Providence office and the Ocean State Economic Development Authority, a private entity. Besides offering SBA guarantees, the program virtually eliminates paperwork and “hand holding” burdens for institutions. While the FHLB System has sponsored special community development initiatives in the past, the passage of FIRREA in 1989 has contributed to the system taking on a more active leadership role in the development of community lending programs. To encourage the flow of funds into low- and moderate-income areas, FIRREA required the FHFB to develop regulations that condition access to long-term FHLB advances on member institutions meeting certain standards of community support. Congress specified that the regulations were to take two factors into account—an institution’s CRA performance and its record of lending to first-time home buyers. This provision thereby created an additional CRA enforcement mechanism by tying an FHLB member’s access to long-term advances used to finance residential mortgage lending to the institution’s CRA performance. FIRREA also established an Affordable Housing Advisory Council at each of the FHLBs. The councils are to meet periodically to advise the FHLBs on low-income housing needs in each region. Through its Affordable Housing Program and Community Investment Program, the FHLB system is to provide assistance to its member institutions by supporting their CRA activities. It is to advance funds or subsidize below-market-rate loans originated for low-and moderate-income families and for businesses in low- and moderate-income areas. 
The Affordable Housing Program is to provide home lending funds to support housing for people whose income does not exceed 80 percent of an area’s median income, and rental housing funds where at least 20 percent of the units are occupied by low-income tenants. Its Community Investment Program is to provide home lending funds to projects aimed at individuals with incomes of up to 115 percent of an area’s median income. To encourage institutions to lend to all parts of their community, some bankers have suggested that CRA be replaced or supplemented with financial subsidies or other positive incentives. Others have called for modifying or supplementing CRA with incentives such as (1) tax credits, (2) deposit insurance credits, (3) streamlined or less frequent examinations, (4) revisions of safety and soundness requirements for CRA lending, (5) broadening the base of institutions and organizations that can buy low-income housing tax credits, and (6) permitting below-market financing for community development lending programs, with supporting funds coming from FDIC or other regulatory premiums. Past as well as current legislative matters for congressional consideration have included some of these proposals, as described in the next section. Over the years, Congress has been concerned about how to provide adequate financial services to distressed rural and urban areas throughout the country. In the past, to address the problem, Congress has enacted numerous legislative provisions, such as those included in FIRREA, which created the Community Investment Program under the FHLB system described earlier. However, because this is a complex and far-reaching problem, Congress has continued to seek workable solutions and recently enacted legislative provisions aimed at enhancing community development in underserved areas. In 1991, Congress enacted BEA, and, in September 1994, the Riegle Community Development and Regulatory Improvement Act was passed.
BEA was designed to provide banking institutions with incentives to offer more services to low-income communities. Originally, it was to provide for reductions in the deposit insurance premiums that institutions pay on deposits placed in lifeline accounts—checking accounts for low-income individuals. In addition to encouraging lending in poor communities, the act was to establish a deposit insurance premium credit system for lending or establishing branches in these communities. Institutions engaged in such activities would have their deposit insurance premiums reduced. Although BEA was enacted in 1991, funds were not authorized until passage of the Community Development Banking and Financial Institutions Act of 1994 (CDB Act) in September 1994. Along with the funding came modifications to BEA. Instead of institutions receiving deposit insurance rebates as provided under the original BEA, the CDB Act calls for money to be paid directly to institutions to provide financial incentives for lending in low-income communities. The funding level for BEA was eliminated in the recently passed fiscal year 1995 rescissions act (P.L. 104-19). Serving as the umbrella legislation, the Riegle Community Development and Regulatory Improvement Act of 1994, H.R. 3474, includes a number of separate legislative proposals that were added as it proceeded through the legislative process. The CDB Act (known as title I of the Riegle Community Development and Regulatory Improvement Act) creates a fund for forming and expanding community development financial institutions (CDFI) by providing financial and technical assistance for development services, lending, and investment in distressed urban and rural areas. The act authorizes $382 million to be distributed over a 4-year period, under the administration of an independent board. One-third of this amount has been earmarked to fund BEA. 
Financial assistance may be provided as loans, grants, equity investments, deposits, or credit union shares on a competitive, matching basis. Institutions, local and state governments, and other community organizations may form community partnerships with CDFIs to work cooperatively to revitalize communities. Assistance must be matched dollar for dollar (allowing a reduced match for CDFIs with severe constraints on available matching funds). Selection for assistance is to be based on several factors, including community need and representation, ability to leverage private funds, extent of targeting to low-income individuals, and strength of the revitalization plan. During the past several years, other legislative proposals have been introduced (but not enacted) that offered various approaches to supporting development in economically disadvantaged communities. Although the proposals shared the primary goal of revitalizing low-income areas, they varied in the type and scope of assistance provided, administration of programs created, and other areas. For example, the proposed Community Banking and Economic Empowerment Act of 1993 (H.R. 1699), which was to provide money for loans and technical assistance, had a goal of making credit and credit-related services available to low-income families and others not adequately served by traditional lending institutions. More recently proposed legislation would encourage community development or reinvestment by amending CRA. In a proposed amendment to CRA, the Community Reinvestment Improvement Act of 1995 (H.R. 1211) seeks to enhance the availability of investment capital for low- and moderate-income housing in low- and moderate-income neighborhoods. The proposed Microenterprise Opportunity Expansion Act (H.R. 1019, February 1995) sets forth criteria and describes how microenterprise loans and grants would be treated as an investment in a regulated financial institution’s community.
Through their various consumer affairs offices and outreach programs, regulators have established mechanisms to encourage and support community development. Many of their responsibilities and promotional efforts are carried out primarily through guidance, educational forums, information dissemination, and technical assistance activities. The experience levels and the amount of resources the regulators have devoted to their respective community affairs programs and operations vary. The Federal Reserve Board (FRB) and FDIC established programs in 1980 and 1990, respectively, while the Office of Thrift Supervision (OTS) and the Office of the Comptroller of the Currency (OCC) began staffing their programs as recently as 1994. Despite the different levels of operation, development, and resources, all of the regulatory programs share a general goal of encouraging financial institutions to increase the flow of credit to low- and moderate-income applicants and areas. However, the effectiveness of the regulators’ programs in providing community affairs activities or participating in community outreach efforts depends largely on the availability of resources. One mechanism regulators use to facilitate community lending is guidance highlighting “best practices” that are characteristic of effective CRA programs. The regulators issued interagency guidance in March 1989 acknowledging that an institution will generally be in compliance with CRA if it has (1) ongoing programs or methods to identify community needs, (2) the ability to develop and extend products and services to meet the credit needs identified through the ascertainment process, and (3) a comprehensive marketing program that reaches all segments of its delineated community.
The guidance further acknowledges regulators’ belief that to secure an effective CRA program, an institution’s management should be actively involved, maintain policy oversight, and regularly review the community reinvestment compliance program. Such actions can help to ensure that the products and services offered and extended by an institution (1) will meet community credit needs, (2) can be modified when those needs change, and (3) will be available to all segments of the community. The regulators also are to use the expertise of their community affairs staff to counsel and assist institutions that do not have good compliance programs. FRB, which has the most developed outreach program, operates a community affairs office (CAO) in each of its 12 Federal Reserve districts. The staffing level for this program has grown from 14 in the mid-1980s to a current level of approximately 70 employees. According to an FRB official, the staff hired often have some background in housing or the community development area. The principal responsibility of the CAO is to perform outreach work wherein staff contact people in local governments and community organizations to find out what types of unmet needs exist in different communities. The CAO staff develops education and information programs to help meet the community needs identified. Through interviews with FRB officials, we learned that CAO staff are involved in various types of activities that promote community outreach and provide support to examiners. All 12 Federal Reserve regions publish newsletters that discuss different programs and various community development issues. In addition to sponsoring conferences and publishing newsletters, some CAO staff conduct research and issue community profiles (which provide bankers with information on perceived credit needs, existing community development initiatives, and programs within regions that might be duplicated on a local level). 
Furthermore, they provide assistance to examiners by maintaining a database of community group contacts and may help to analyze home mortgage data. When an institution receives a less than satisfactory rating, the examiner is to refer the institution to the CAO staff for consultation and guidance. The CAO staff may transmit information on community needs through examiner training or by circulating written reports. Because the CAOs are locally based, administration of the program is left to the discretion of the individual reserve banks. Consequently, although all of the reserve banks are involved in community outreach activities, the methods used for disseminating information may vary. For example, an FRB official told us that one approach used by the Kansas City CAO is to develop a road show presentation and travel to designated areas to present it. This approach allows institutions and community organizations (which may be located in small, rural areas with limited budgets) to take advantage of the FRB’s outreach efforts. The San Francisco CAO helped to develop a statewide lending consortium by convening bankers and experts. The CAO in Dallas encouraged community lending by sponsoring geocoding seminars and small business lending workshops. These sessions are designed to teach institutions how to analyze geographic data to help ensure that the institution serves all areas of its community. The Philadelphia CAO established bankers’ councils, which are to meet three or four times a year. By organizing a network of bankers into Community Affairs Officers Councils, the Philadelphia CAO has not only provided a forum for its staff to disseminate CRA information and offer education but also provided a means for encouraging bankers to come together to discuss issues and opportunities for reinvestment in their communities. An FRB official pointed out that the primary strength of the CAOs is their effectiveness in providing communication forums.
FDIC has reached its goal of having at least three community affairs (CA) positions (CA officer, CA assistant, and fair lending specialist) in each of its eight regions. FDIC operates its regional community affairs activities with a staff of 26 who report to regional management, with program oversight provided by headquarters. Similar to FRB, FDIC’s staff has some background experience in housing and community development, and although the program staff’s operations may vary by region, they perform a variety of functions, which are coordinated centrally. For example, the CA officers provide training and information to examiners and develop community reports similar to, but less detailed than, the FRB’s community profiles. The fair lending specialist analyzes home-mortgage data and handles consumer complaints. FDIC’s headquarters office coordinates functions with community affairs staff through quarterly meetings. Also, centrally coordinated projects, such as a recently published paper on Native-American issues, may be directed by the Washington, D.C., office. FDIC anticipates that its newly created division of compliance and consumer affairs will allow the agency to broaden its outreach initiatives and be more responsive to consumers and bankers. OCC plans to have 12 community affairs staff working in conjunction with its new compliance program. As part of this staffing goal, OCC intends to have one community affairs officer located in each of its district offices. The officers are to be responsible for outreach and communication with community groups and other members of the public. As of February 1995, staffing of these positions had not been completed.
Although its community affairs program is in the early stages of development, OCC has had a Community Development Division (CDD) to (1) oversee community development corporations (CDCs) and investment programs and (2) approve applications by national banks to invest in CDCs in accordance with the National Bank Act. The role of the CDD is to provide policy guidance to OCC on community development issues that affect national banks, their customers, and banking community and consumer organizations. The division is responsible for (1) developing initiatives related to the creation of affordable housing for low- and moderate-income individuals; (2) providing technical assistance and financing for small, minority-owned, and women-owned businesses; and (3) promoting the economic redevelopment of low- and moderate-income areas. In February 1993, the CDD published the 1992 National Bank Community Development Survey Report, which highlighted the types of community development activities in which national banks participate. The report was distributed to more than 7,500 national banks, community representatives, and other interested parties. The CDD also publishes a quarterly newsletter, Community Developments, which is designed to provide national banks and others with information on innovative bank community development programs, regulatory updates on community issues, and news of federal and state programs that might be of interest to national banks. In February 1994, OTS announced the appointment of five experienced senior staff members to fill positions in the consumer affairs area. In making the announcement, OTS said that the appointments are part of agency initiatives emphasizing community reinvestment, nondiscrimination in lending, and other consumer-oriented goals for thrift institutions.
According to OTS officials, during 1994 the community affairs liaison officers in each of its five regions were actively involved in outreach and support efforts related to affordable housing, community development, and related fair lending and CRA matters. For example, these activities included (1) training programs for industry and staff, (2) assistance to institutions with poor CRA ratings, (3) the establishment of a community contact database for examiners, (4) meetings with local government agencies and community organizations to ascertain community credit needs and community development programs, (5) forums for thrift institutions and local community organizations to discuss local credit needs and community development programs for thrift participation, and (6) policy work on regulatory barrier and safety and soundness issues related to community development and affordable housing. A National Community Affairs Coordinator was appointed in Washington, D.C., in February 1995 to oversee the regional community affairs liaisons and coordinate their activities. OTS officials also noted that in 1994 they issued a guide, entitled Community Development Investment Authority, on the federal laws and regulations governing community development activities of savings associations. In addition, OTS officials said they began a new training program for safety and soundness examiners on understanding and evaluating multifamily affordable housing loans and projects. Coordination of community affairs activities among the regulatory agencies is not required by regulation or mandated by legislation. In practice, however, much of the interagency coordination of the regulators’ community affairs activities occurs through joint training and meetings or through established councils. According to an FRB official, FRB holds many conferences jointly with the FHLB Board and has cosponsored conferences with FDIC.
Now that OCC and OTS have separate compliance offices, FRB anticipates working more closely with these two agencies. On a regional level, FRB and other government agencies sponsor joint interagency programs, such as training or conferences dealing with community affairs issues. Information is also shared through regulatory publications, such as newsletters or community reports, and through community contact forms. Upon request, newsletters and community reports containing information such as community lending techniques and investment opportunities are generally disseminated to the public and shared among the regulators. During the examination process, examiners assigned to assess an institution’s CRA performance in identifying and addressing community needs can save time by drawing on shared community contact forms when recent, documented contact has already been made with community representatives in the same general neighborhood or community. This information sharing generally occurs on an ad hoc basis. Consequently, the overall benefits to the regulators as well as the community may not be as far reaching as they could be under a more systematic, coordinated approach. Interagency coordination in the use of regulators’ resources can broaden the effectiveness of those resources in helping bankers understand and implement initiatives that have proven successful in meeting CRA goals, while providing much-needed credit assistance to communities that may require revitalization or redevelopment.
To the extent that regulators can apply a systematic, coordinated interagency approach to providing community outreach services that all regulators commonly provide—such as community contact information or databases—institutions, community groups, government entities, and others who benefit from such services could be served more efficiently despite the regulators’ limited resources. While some bankers perceive an inherent conflict between safety and soundness goals and CRA goals and are concerned about secondary market requirements, higher transaction costs, and the smaller loan amounts associated with CRA lending, others have worked to overcome such barriers through innovative individual and collective initiatives. Because lending and community development in low- and moderate-income areas often involve different and more complex methods of financing, successful initiatives tend to require the cooperative efforts and expertise of multiple financial partners. Spurred by the recent emphasis on CRA reform and the need to remove perceived barriers and provide additional compliance incentives, the secondary market, governments, and Congress have undertaken program initiatives to provide financial and other incentives that promote community development and revitalization. The banking regulators have also played a key role in facilitating community lending by providing educational forums and disseminating information to encourage cooperative working relationships among banks and thrifts, other financial entities, community groups, and various government agencies. In this current climate of CRA reform and limited government resources, the regulators’ role of encouraging institutions to meet the needs of all segments of their delineated communities will be a key factor in continuing and expanding upon workable and successful CRA initiatives.
Given the differences in resource availability among the regulators, more systematic coordination could help to better utilize limited resources and enhance the regulators’ role in encouraging community development lending. The varied positions taken by the affected parties further demonstrate that the debate about how best to achieve the goals of community reinvestment is both complicated and contentious. The approach embodied in the current CRA statute uses the levers of compliance examinations and application approvals to increase community reinvestment lending. The new regulations are an attempt to generate better results with less regulatory burden. However, given the positions of the different parties, it is not clear that the results will fully satisfy all of those parties. If the concerns raised by the affected parties persist even after the regulators have had sufficient time to implement the revised regulations, Congress may want to consider revisiting and revising the CRA statute to clarify its intent and scope, possibly examining alternative strategies for reaching its goals. Such strategies might include incentives to strengthen positive CRA performance by bankers and additional enforcement authority for regulators to discourage negative performance. With regard to the “Matter for Congressional Consideration,” FDIC and OCC were concerned that congressional action before sufficient time has passed for full implementation of the revised regulations may be premature and that further revisions to CRA without feedback on the effectiveness of the revised regulations could undermine their implementation. OTS noted that the agencies have already agreed to conduct a full review of the revised regulations 5 years after they are fully implemented. We agree that the regulators have made extensive efforts in revising the regulations to address the diverse concerns raised about the effectiveness of CRA.
Consequently, we modified the matter for congressional consideration to suggest that Congress may want to consider the results from implementation of the revised CRA regulations in its deliberations as to whether the objectives of community reinvestment are being well served through the CRA statute and regulations.

Pursuant to a congressional request, GAO reviewed the major problems in implementing the Community Reinvestment Act (CRA), focusing on: (1) the extent to which regulatory reforms address these problems; (2) the challenges regulators face in ensuring the success of CRA reforms; and (3) initiatives taken to enhance lending opportunities in low-income areas. GAO found that: (1) bankers, community groups, and regulatory officials generally agree that there is too much reliance on bank documentation efforts and processes, and that CRA examinations are inconsistent and do not accurately reflect the lending institutions’ compliance or performance; (2) revised CRA regulations clarify the data used to assess results against performance-based standards, but the affected parties disagree about whether the data collection requirements provide for meaningful performance assessment or are unduly burdensome; (3) differences in examiner training and experience, vague interpretations of CRA standards, and inadequate information and time for implementing CRA performance ratings challenge regulators as they implement CRA regulations; (4) bankers, regulators, and community groups are taking part in a variety of individual and cooperative initiatives to improve community lending and reduce related burdens; (5) barriers to community lending and investment include the higher costs and risks associated with community lending and the underwriting requirements of major participants in secondary mortgage markets; and (6) Congress has considered proposals to amend CRA that would reduce the compliance burden and exempt small institutions from CRA requirements.
Even though FAA has increased security procedures as the threat has increased, the domestic and international aviation systems continue to have numerous vulnerabilities. Drawing on information provided by the intelligence community, FAA makes judgments about the threat and decides which procedures would best address it. The airlines and airports are responsible for implementing the procedures and paying for them. For example, the airlines are responsible for screening passengers and property, and the airports are responsible for the security of the airport environment. FAA and the aviation community rely on a multifaceted approach that includes information from various intelligence and law enforcement agencies, contingency plans to meet a variety of threat levels, and the use of screening equipment, such as conventional X-ray devices and metal detectors. For flights within the United States, basic security measures include the use of walk-through metal detectors for passengers and X-ray screening of carry-on baggage—measures that were primarily designed to avert the hijackings of the 1970s and 1980s rather than the more current threat of terrorist attacks involving explosive devices. These measures are augmented by additional procedures that are based on an assessment of risk. Among these procedures are passenger profiling and passenger-bag matching. Because the threat of terrorism had previously been considered greater overseas, FAA mandated more stringent security measures for international flights. Currently, for all international flights, FAA requires U.S. carriers, at a minimum, to implement the International Civil Aviation Organization’s standards, which include the inspection of carry-on bags and passenger-bag matching.
FAA also requires additional, more stringent measures—including interviewing passengers who meet certain criteria, screening every checked bag, and screening carry-on baggage—at all airports in Europe and the Middle East and at many airports elsewhere. In the aftermath of the 1988 bombing of Pan Am flight 103, a Presidential Commission on Aviation Security and Terrorism was established to examine the nation’s aviation security system. This commission reported that the system was seriously flawed and failed to provide the flying public with adequate protection. FAA’s security reviews, audits prepared by the Department of Transportation’s Office of the Inspector General, and work we have conducted show that the system continues to be flawed. Providing effective security is a complex problem because of the size of the U.S. aviation system, the differences among airlines and airports, and the unpredictable nature of terrorism. In our previous reports and testimonies on aviation security, we highlighted a number of vulnerabilities in the overall security system, such as those involving checked and carry-on baggage, mail, and cargo. We also raised concerns about unauthorized individuals gaining access to critical parts of an airport and the potential use of sophisticated weapons, such as surface-to-air missiles, against commercial aircraft. According to FAA officials, more recent concerns include smuggling bombs aboard aircraft in carry-on bags and on passengers themselves. Specific information on the vulnerabilities of the nation’s aviation security system is classified and cannot be detailed here, but we can provide you with unclassified information. Nearly every major aspect of the system—from the screening of passengers, checked and carry-on baggage, mail, and cargo to the control of access to secured areas within airports and aircraft—has weaknesses that terrorists could exploit. FAA believes that the greatest threat to aviation is explosives placed in checked baggage.
For those bags that are screened, we reported in March 1996 that conventional X-ray screening systems (comprising the machine and the operator, who interprets the image on the X-ray screen) have performance limitations and offer little protection against a moderately sophisticated explosive device. In our August 1996 classified report, we provided details on the detection rates of current systems as measured by numerous FAA tests conducted over the last several years. In 1993, the Department of Transportation’s Office of the Inspector General also reported weaknesses in security measures dealing with (1) access to restricted airport areas by unauthorized persons and (2) carry-on baggage. A follow-on review in 1996 indicated that these weaknesses persist and have not been significantly reduced. New explosives detection technology will play an important part in improving security, but it is not a panacea. In response to the Aviation Security Improvement Act of 1990, FAA accelerated its efforts to develop explosives detection technology. A number of devices are now commercially available to address some vulnerabilities. Since fiscal year 1991, FAA has invested over $150 million in developing technologies specifically designed to detect concealed explosives. (See table 1.) Since fiscal year 1992, funding for these technologies has fallen, except in the most recent fiscal year, 1996. FAA relies primarily on contracts and grants with private companies and research institutions to develop these technologies and engages in some limited in-house research. The act specifically directed FAA to develop and deploy explosives detection systems by November 1993. However, this goal has not been met. Since fiscal year 1991, these expenditures have funded approximately 85 projects for developing new explosives detection technology. Currently, FAA has 40 active development projects. Of these, 19 projects are developing explosives detection prototype systems.
The remaining 21 projects are conducting basic research or developing components for use in explosives detection systems. In September 1993, FAA published a certification standard that explosives detection systems for checked bags must meet before they are deployed. The standard is classified and sets certain minimum performance criteria. To minimize human error, the standard also requires that the devices automatically sound an alarm when explosives are suspected; this feature is in contrast to currently used conventional X-ray devices, with which the operator must examine the X-ray image of each bag to determine whether it contains a threat. In 1994, we reported that FAA had made little progress in meeting the law’s requirement for deploying explosives detection systems because of technical problems, such as slow baggage processing. As of today, one system has passed FAA’s certification standard and is being operationally tested by U.S. airlines at two U.S. airports and one foreign location. Explosives detection devices can substantially improve the airlines’ ability to detect concealed explosives before they are brought aboard aircraft. While most of these technologies are still in development, a number of devices are now commercially available. However, none of the commercially available devices is without limitations. On the basis of our analysis, we have four overall observations on detection technologies that have important implications for their use at airports. First, these devices vary in their ability to detect the types, quantities, and shapes of explosives. Second, explosives detection devices typically produce a number of false alarms that must be resolved either by human intervention or by technical means. These false alarms occur because the devices use various technologies to identify characteristics, such as shapes, densities, and other properties, that indicate a potential explosive.
Given the huge volume of passengers, bags, and cargo processed by the average major U.S. airport, even relatively modest false alarm rates could cause several hundred, even several thousand, items per day to need additional scrutiny. Third, and most important, these devices ultimately depend on human beings to resolve alarms. This activity can range from closer inspection of a computer image and a judgment call to a hand search of the item in question. The ultimate detection of explosives depends on security personnel taking extra steps, and judging correctly, to determine whether an explosive is present. Because many of the devices’ alarms signify only the potential presence of explosives, the true detection of explosives requires human intervention. The higher the false alarm rate, the greater the system’s reliance on human judgment. As we noted in our previous reports, this reliance could be a weak link in the explosives detection process. In addition, relying on human judgments has implications for the selection and training of operators for new equipment. Fourth, although these devices can substantially increase the probability of discovering an explosive, their performance in the field may not be as good as in laboratory tests. For example, the FAA-certified system has not performed as well in operational testing at two airports as it did in FAA’s certification test. The need to rely on operators to resolve false alarms is a primary reason for this. Despite the limitations of the currently available technology, some countries have already deployed explosives detection equipment because of differences in their perception of the threat and in their approaches to countering it. The Gore Commission recommends that $161 million in federal funds be used to deploy some of these devices. The Gore Commission has also recommended that decisions about deploying equipment be based on vulnerability assessments of the nation’s 450 largest airports.
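The point about volume can be made concrete with a back-of-the-envelope calculation. The throughput and false alarm figures below are illustrative assumptions for the sake of the arithmetic, not figures from FAA testing:

```python
def expected_false_alarms(items_per_day: int, false_alarm_rate: float) -> float:
    """Expected number of flagged items per day that contain no explosive."""
    return items_per_day * false_alarm_rate

# Illustrative assumptions for a busy U.S. airport (not FAA data):
# 50,000 screened items per day at a 1 percent false alarm rate.
alarms = expected_false_alarms(50_000, 0.01)
print(f"{alarms:.0f} items per day need additional scrutiny")  # 500
```

Even at a 1 percent rate, security personnel would face hundreds of alarms a day; at a few percent, thousands. Each one requires human resolution.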
It may take some time to deploy new detection technology for screening checked baggage at U.S. airports because of production limitations and difficulties in integrating new equipment with airline and airport operations. A number of explosives detection devices are currently available or under development to determine whether explosives are present in checked and carry-on baggage or on passengers, but they are costly. FAA is still developing systems to screen cargo and mail at airports. Four explosives detection devices with automatic alarms are commercially available for checked bags, but only one has met FAA’s certification standard—the CTX-5000. FAA’s preliminary estimates are that the one-time acquisition and installation costs of the certified system for the 75 busiest airports in the United States could range from $400 million to $2.2 billion, depending on the number of machines installed. These estimates do not include operating costs. The four devices rely on three different technologies. The CTX-5000 is a computerized tomography device, which is based on advances made in the medical field. It has the best overall detection ability but is relatively slow in processing bags and has the highest price. To meet FAA’s standard for processing bags, two devices are required, which would cost approximately $2 million for a screening station. This system was certified by FAA in December 1994. Two other advanced X-ray devices have lower detection capability but are faster at processing baggage and cheaper—costing approximately $350,000 to $400,000 each. The last device uses electromagnetic radiation. It offers chemical-specific detection capabilities but only for some of the explosives specified in FAA’s standard. The current price is about $340,000 each. FAA is funding the development of next-generation devices based on computerized tomography, which is currently used in the CTX-5000. 
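The width of FAA's $400 million to $2.2 billion estimate reflects how many screening stations each airport would receive. A sketch of the underlying arithmetic, using the report's figure of roughly $2 million for a two-device CTX-5000 station; the stations-per-airport values are assumptions chosen only to show how such a range arises, not FAA's actual deployment scenarios:

```python
STATION_COST = 2_000_000  # approx. cost of a two-device CTX-5000 screening station (report figure)

def fleet_cost(airports: int, stations_per_airport: int, station_cost: int = STATION_COST) -> int:
    """Rough one-time acquisition cost of equipping a set of airports."""
    return airports * stations_per_airport * station_cost

# 75 busiest airports; 3 vs. 15 stations per airport are illustrative assumptions.
low = fleet_cost(75, 3)    # $450 million
high = fleet_cost(75, 15)  # $2.25 billion
print(f"${low:,} to ${high:,}")
```

Note that this covers acquisition only; installation and operating costs would push the totals higher.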
These devices are being designed to meet FAA’s standard for detecting explosives at faster processing speeds; the target price is about $500,000 each, and they could be available by early 1998. Advanced X-ray devices with improved capabilities are also being developed. Explosives detection devices are commercially available for screening carry-on bags, electronics, and other items but not yet for screening bottles or containers that could hold liquid explosives. Devices for liquids, however, may be commercially available within a few years.

Carry-on bags and electronics. At least five manufacturers sell devices that can detect the residue or vapor from explosives on the exterior of carry-on bags and on electronic items, such as computers or radios. These devices, also known as “sniffers,” are commonly referred to as “trace” detectors and range in price from about $30,000 to $170,000 each. They have very specific detection capabilities as well as low false alarm rates. One drawback of trace devices, among others, is nuisance alarms, which could be activated by persons who have legitimate reasons for handling explosive substances, such as military personnel. Also available is an electromagnetic device that offers a high probability of chemical-specific detection, but only for some explosives. The price is about $65,000.

Detecting liquid explosives. FAA is developing two different electromagnetic devices for screening bottles and other containers. A development issue is processing speed. These devices may be available within 2 years. The cost is projected to be between $25,000 and $125,000 each. Although a number of commercially available trace devices could be used on passengers if deemed necessary, passengers might find their physical intrusiveness unacceptable. In June 1996, the National Research Council, for example, reported that passenger-screening devices may pose a number of health, legal, operational, privacy, and convenience concerns.
FAA and other federal agencies are developing devices that passengers may find more acceptable. FAA estimates that the cost to provide about 3,000 of these devices to screen passengers would be about $1.9 billion. A number of trace devices in development will detect residue or vapor from explosives on passengers’ hands. Two devices screen either documents or tokens that have been handled by passengers. These devices should be available in 1997 or 1998 and sell for approximately $65,000 to $85,000 each.

Another five devices under development use walk-through screening portals similar to current metal detectors. Three will use trace technology to detect particles and vapor from explosives on passengers’ clothing or in the air surrounding their bodies. Projected selling prices range from approximately $170,000 to $300,000. One of these devices will be tested at an airport in the latter part of 1996, and another device may undergo airport testing next year. Two other walk-through portals based on electromagnetic technology are in development. Rather than detecting particles or vapor, these devices will show images of items concealed under passengers’ clothing. Prices are projected to be approximately $100,000 to $200,000.

Screening cargo and mail at airports is difficult because individual packages or pieces of mail are usually batched into larger shipments that are more difficult to screen. If cargo and mail shipments were broken down into smaller packages, some available technologies could be used. For example, the electromagnetic device available for checked baggage will be tested for screening cargo and mail at a U.S. airport. Although not yet commercially available, two different systems for detecting explosives in large containers are being developed by FAA and other federal agencies. Each system draws vapor and particle samples and uses trace technology to analyze them. One system is scheduled for testing in 1997.
In addition, FAA is considering, for further development, three nuclear-based technologies originally planned for checked-bag screening for use on cargo and mail. These technologies use large, heavy apparatuses to generate gamma rays or neutrons to penetrate larger items. However, they require shielding for safety reasons. These technologies are not as far along and are still in the laboratory development stage rather than the prototype development stage. If fully developed, these devices could cost as much as $2 million to $5 million each.

To reduce the effects of an in-flight explosion, FAA is conducting research on blast-resistant containers, which might reduce the number of expensive explosives detection systems needed. FAA’s tests have demonstrated that it is feasible to contain the effects—blast and fragments—of an internal explosion. However, because of their size, blast-resistant containers can be used only on wide-body aircraft that typically fly international routes. FAA is working with a joint industry-government consortium to address concerns about the cost, weight, and durability of the new containers and is planning to blast test several prototype containers later this year. Also this year, FAA will place about 20 of these containers into airline operations to assess, among other things, their durability and effect on airline operations.

In addition to technology-based security, FAA has other methods that it uses, and can expand upon, to augment domestic aviation security or use in combination with technology to reduce the workload required by detection devices. The Gore Commission has recommended expanded use of bomb-sniffing dogs, profiling passengers to identify those needing additional attention, and matching passengers with their bags. Dogs are considered a unique type of trace detector because they can be trained to respond in specific ways to the smell of explosives. Dogs are currently being used at a number of U.S. airports.
The Gore Commission has recommended that 114 additional teams of dogs and their handlers be deployed at a cost of about $9 million. On July 25, 1996, the President announced additional measures for international and domestic flights that include, among other things, stricter controls over checked baggage and cargo as well as additional inspections of aircraft. Two procedures that are routinely used on many international flights are passenger profiling and passenger-bag matching. FAA officials have said that profiling can reduce the number of passengers and bags that require additional security measures by as much as 80 percent. The Gore Commission has recommended several initiatives to promote an automated profiling system. In addition, to determine the best way to implement systemwide matching of passengers with their bags, the Gore Commission has recommended testing techniques at selected airports. Profiling and bag matching are unable to address certain types of threats. However, in the absence of sufficient or effective technology, these procedures are a valuable part of the overall security system. FAA has estimated that incorporating bag matching in everyday security measures could cost up to $2 billion in start-up costs and lost revenue. The direct costs to airlines include, among other things, equipment, staffing, and training. The airlines’ revenues and operations could be affected differently because the airlines currently have different capabilities to implement bag matching, different route structures, and different periods of time allotted between connecting flights. Addressing the vulnerabilities in the nation’s aviation security system is an urgent national issue. 
Although the Gore Commission made recommendations on September 9, no agreement currently exists among all the key players, namely, the Congress, the administration—specifically FAA and the intelligence community, among others—and the aviation industry, on the steps necessary to improve security in the short and long term to meet the threat. In addition, who will be responsible in the long term for paying for new security initiatives has not been addressed. While FAA has increased security at domestic airports on a temporary basis, FAA and Department of Transportation officials believe that more permanent changes are needed. Furthermore, the cost of these changes will be significant, may require changes in how airlines and airports operate, and will likely have an impact on the flying public. To achieve these permanent changes, three initiatives that are under way may assist in developing a consensus among all interested parties on the appropriate direction and response to meet the ever-increasing threat. Once actions are agreed upon, congressional oversight will be needed to ensure the successful implementation of new technology and procedures. On July 17, 1996, FAA established a joint government-industry working group under its Aviation Security Advisory Committee. The committee, composed of representatives from FAA, the National Security Council, the Central Intelligence Agency, the Federal Bureau of Investigation, the Departments of Defense and State, the Office of Management and Budget, and the aviation community, will (1) review the threat to aviation, (2) examine vulnerabilities, (3) develop options for improving security, (4) identify and analyze funding options, and (5) identify the legislative, executive, and regulatory actions needed. The goal is to provide the FAA Administrator with a final report by October 16, 1996. Any national policy issues would then be referred to the President by the FAA Administrator through the Secretary of Transportation. 
In recognition of the increased threat of terrorism in general, the President established a Commission on Critical Infrastructure Protection on July 15, 1996. Moreover, with respect to the specific threat against civil aviation, in the aftermath of the TWA flight 800 crash, the President established a commission headed by the Vice President on July 25, 1996, to review aviation safety, security, and the pace of modernization of the air traffic control system. The Gore Commission is working with the National Transportation Safety Board, the Departments of Transportation and Justice, aviation industry advisory groups, and concerned nongovernmental organizations. In our August 1, 1996, testimony before the Senate Committee on Commerce, Science, and Transportation, we emphasized the importance of informing the American public of and involving them in this effort. Furthermore, we recommended that several steps be taken immediately, including among other things, conducting a comprehensive review of the safety and security of all major domestic and international airports and airlines to identify the strengths and weaknesses of their procedures to protect the traveling public. In addition, in our classified August 1996 report, we concluded that to sustain the Gore Commission’s momentum and its development of long-term actions to improve aviation security, the commission should be supported by staff composed of the best available government and industry experts on terrorism and civil aviation security. 
We made a number of unclassified recommendations aimed at improving the various initiatives underway, including a recommendation that the President report to the Congress, during the current congressional session, on (1) what statutory changes may be required, including who should pay for additional security measures; (2) whether aviation security should be considered a national security issue; and (3) whether changes are needed in the requirement for FAA’s certification of explosives detection technology before mandating its deployment. The Gore Commission was charged with reporting its initial findings on aviation security within 45 days, including plans (1) to deploy new technology to detect the most sophisticated explosives and (2) to pay for that technology. We are pleased that the Gore Commission’s September 9, 1996, report contains many recommendations similar to those we made. The commission recommended a budget amendment for fiscal year 1997 of about $430 million to implement some of the 20 recommendations made in the report. However, the commission stated that it did not settle the issue of how security costs will be financed in the long run. The commission will continue to review aviation safety, security, and air traffic control modernization over the next several months and is scheduled to issue its final report by February 1, 1997. Given the urgent need to improve aviation security and FAA’s less-than-effective history of addressing long-standing safety and security concerns, it will be important for the Congress to oversee the implementation of new security measures once they are agreed upon. Therefore, we recommend that (1) the Congress, along with responsible agencies and other affected parties, establish consistent goals and performance measures and (2) the Congress require periodic reports from FAA and other responsible federal agencies on the progress and effectiveness of efforts to improve aviation security. In summary, Mr. 
Chairman, the threat of terrorism has been an international issue for some time and continues to be, as illustrated by events such as the bombing of U.S. barracks in Saudi Arabia. But other incidents—such as the bombings of the World Trade Center in New York and the federal building in Oklahoma City—have made terrorism a domestic as well as an international issue. Public concern about aviation safety, in particular, has already been heightened as a result of the ValuJet crash, and the recent TWA flight 800 crash—regardless of the cause—has increased that concern. If further incidents occur, public fear and anxiety will escalate, and the economic well-being of the aviation industry will suffer because of reductions in travel and the shipment of goods. Given the persistence of long-standing vulnerabilities and the increased threat to civil aviation, we believe that corrective actions need to be undertaken immediately. These actions need a unified effort from the highest levels of the government to address this national issue. With three separate initiatives under way, the Vice President could be the focal point to build a consensus on the actions that need to be taken to address a number of these long-standing vulnerabilities. The Gore Commission’s September 9, 1996, report to the President provides opportunities for agreement on steps to improve security that could be taken in the short term. In our opinion, once steps are agreed on, it will be important for the Congress to work with agencies to establish consistent goals and performance measures and for the Congress to oversee their implementation.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also.
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Dramatic increases in computer interconnectivity, especially in the use of the Internet, continue to revolutionize the way our government, our nation, and much of the world communicate and conduct business. The benefits have been enormous. Vast amounts of information are now literally at our fingertips, facilitating research on virtually every topic imaginable; financial and other business transactions can be executed almost instantaneously, often 24 hours a day, and electronic mail, Internet Web sites, and computer bulletin boards allow us to communicate quickly and easily with an unlimited number of individuals and groups. However, this widespread interconnectivity poses significant risks to the government’s and our nation’s computer systems and, more important, to the critical operations and infrastructures they support. For example, telecommunications, power distribution systems, water supplies, public health services, national defense (including the military’s warfighting capability), law enforcement, government services, and emergency services all depend on the security of their computer operations. If they are not properly controlled, the speed and accessibility that create the enormous benefits of the computer age may allow individuals and organizations to eavesdrop on or interfere with these operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. Table 1 summarizes the key threats to our nation’s infrastructures, as observed by the Federal Bureau of Investigation (FBI). Government officials remain concerned about attacks from individuals and groups with malicious intent, such as crime, terrorism, foreign intelligence gathering, and acts of war. 
According to the FBI, terrorists, transnational criminals, and intelligence services are quickly becoming aware of and using information exploitation tools such as computer viruses, Trojan horses, worms, logic bombs, and eavesdropping sniffers that can destroy, intercept, degrade the integrity of, or deny access to data. In addition, the disgruntled organization insider is a significant threat, because these individuals often have knowledge about the organization and its system that allows them to gain unrestricted access and inflict damage or steal assets without knowing a great deal about computer intrusions. As larger amounts of money and more sensitive economic and commercial information are exchanged electronically, and as the nation’s defense and intelligence communities increasingly rely on standardized information technology (IT), the likelihood increases that information attacks will threaten vital national interests. As the number of individuals with computer skills has increased, more intrusion or “hacking” tools have become readily available and relatively easy to use. A hacker can download tools from the Internet and literally “point and click” to start an attack. Experts agree that there has been a steady advance in the level of sophistication and effectiveness of attack technology. Intruders quickly develop attacks to exploit vulnerabilities that have been discovered in products, use these attacks to compromise computers, and share them with other attackers. In addition, they can combine these attacks with other forms of technology to develop programs that automatically scan networks for vulnerable systems, attack them, compromise them, and use them to spread the attack even further. From 1995 through 2003, the CERT Coordination Center (CERT/CC) reported 12,946 security vulnerabilities that resulted from software flaws. Figure 1 illustrates the dramatic growth in security vulnerabilities over these years. 
The growing number of known vulnerabilities increases the potential for attacks by the hacker community. Attacks can be launched against specific targets or widely distributed through viruses and worms. Along with these increasing vulnerabilities, the number of computer security incidents reported to CERT/CC has also risen dramatically—from 9,859 in 1999 to 82,094 in 2002 and to 137,529 in 2003. And these are only the reported attacks. The Director of the CERT Centers has estimated that as much as 80 percent of actual security incidents go unreported, in most cases because (1) there were no indications of penetration or attack, (2) the organization was unable to recognize that its systems had been penetrated, or (3) the organization was reluctant to report. Figure 2 shows the number of incidents that were reported to the CERT/CC from 1995 through 2003.

According to the National Security Agency (NSA), foreign governments already have or are developing computer attack capabilities, and potential adversaries are developing a body of knowledge about U.S. systems and methods to attack these systems. The National Infrastructure Protection Center (NIPC) reported in January 2002 that a computer belonging to an individual who had indirect links to Osama bin Laden contained computer programs that indicated that the individual was interested in the structural engineering of dams and other water-retaining structures. The NIPC report also stated that U.S. law enforcement and intelligence agencies had received indications that Al Qaeda members had sought information about control systems from multiple Web sites, specifically on water supply and wastewater management practices in the United States and abroad. Since the terrorist attacks of September 11, 2001, warnings of the potential for terrorist cyber attacks against our critical infrastructures have increased.
For example, in his February 2002 statement for the Senate Select Committee on Intelligence, the Director of Central Intelligence discussed the possibility of a cyber warfare attack by terrorists. He stated that the September 11 attacks demonstrated the nation’s dependence on critical infrastructure systems that rely on electronic and computer networks. Further, he noted that attacks of this nature would become an increasingly viable option for terrorists as they and other foreign adversaries become more familiar with these targets and the technologies required to attack them. James Woolsey, a former Director of Central Intelligence, shares this concern, and on October 29, 2003, in a speech before several hundred security experts, he warned that the nation should be prepared for continued terrorist attacks on our critical infrastructures. Moreover, a group of concerned scientists warned President Bush in a letter that “the critical infrastructure of the United States, including electrical power, finance, telecommunications, health care, transportation, water, defense and the Internet, is highly vulnerable to cyber attack. Fast and resolute mitigating action is needed to avoid national disaster.” According to a study by a computer security organization, during the second half of 2003, critical infrastructure industries such as power, energy, and financial services experienced high attack rates. Further, a study that surveyed over 170 security professionals and other executives concluded that, across industries, respondents believe that a large-scale cyber attack in the United States will be launched against their industry by mid-2006. Control systems are computer-based systems that are used within many infrastructures and industries to monitor and control sensitive processes and physical functions. 
Typically, control systems collect sensor measurements and operational data from the field, process and display this information, and relay control commands to local or remote equipment. In the electric power industry, control systems can manage and control the generation, transmission, and distribution of electric power— for example, by opening and closing circuit breakers and setting thresholds for preventive shutdowns. Employing integrated control systems, the oil and gas industry can control the refining operations at a plant site, remotely monitor the pressure and flow of gas pipelines, and control the flow and pathways of gas transmission. Water utilities can remotely monitor well levels and control the wells’ pumps; monitor flows, tank levels, or pressure in storage tanks; monitor water quality characteristics—such as pH, turbidity, and chlorine residual; and control the addition of chemicals. Control systems also are used in manufacturing and chemical processing. Control systems perform functions that vary from simple to complex; they can be used simply to monitor processes— for example, the environmental conditions in a small office building—or to manage most activities in a municipal water system or even a nuclear power plant. In certain industries, such as chemical and power generation, safety systems are typically implemented in order to mitigate a potentially disastrous event if control and other systems should fail. In addition, to guard against both physical attack and system failure, organizations may establish backup control centers that include uninterruptible power supplies and backup generators. There are two primary types of control systems. Distributed Control Systems (DCS) typically are used within a single processing or generating plant or over a small geographic area. Supervisory Control and Data Acquisition (SCADA) systems typically are used for large, geographically dispersed distribution operations. 
For example, a utility company may use a DCS to generate power and a SCADA system to distribute it. Figure 3 illustrates the typical components of a control system. A control system typically is made up of a “master” or central supervisory control and monitoring station consisting of one or more human-machine interfaces where an operator can view status information about the remote/local sites and issue commands directly to the system. Typically, this station is located at a main site, along with application servers and an engineering workstation that is used to configure and troubleshoot the other components of the control system. The supervisory control and monitoring station typically is connected to local controller stations through a hard-wired network or to a remote controller station through a communications network—which could be the Internet, a public switched telephone network, or a cable or wireless (e.g., radio, microwave, or Wi-Fi) network. Each controller station has a remote terminal unit (RTU), a programmable logic controller (PLC), or some other controller that communicates with the supervisory control and monitoring station. The control system also includes sensors and control equipment that connect directly with the working components of the infrastructure—for example, pipelines, water towers, or power lines. The sensor takes readings from the infrastructure equipment—such as water or pressure levels, electrical voltage or current—and sends a message to the controller. The controller may be programmed to determine a course of action and send a message to the control equipment instructing it what to do—for example, to turn off a valve or dispense a chemical. If the controller is not programmed to determine a course of action, the controller communicates with the supervisory control and monitoring station and relays instructions back to the control equipment. 
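The controller logic described above can be illustrated with a minimal sketch. The Python below is purely an assumption-laden illustration of the decision the report describes; the threshold value, function names, and return values are invented for this example, and real RTU or PLC logic is typically written in vendor-specific languages such as ladder logic rather than a general-purpose language.

```python
# Minimal sketch of the local control loop described above.
# The threshold, function names, and return values are illustrative
# assumptions, not features of any actual RTU or PLC product.

PRESSURE_LIMIT = 80.0  # condition the controller is programmed to handle


def poll_once(read_sensor, close_valve, ask_supervisor):
    """One control cycle: sample the sensor; act locally if the
    controller is programmed for the condition; otherwise relay the
    reading to the supervisory control and monitoring station."""
    reading = read_sensor()
    if reading > PRESSURE_LIMIT:
        close_valve()  # programmed course of action: act directly
        return "local action"
    # Not programmed to determine a course of action for this reading:
    # defer to the supervisory station, which relays instructions back.
    return ask_supervisor(reading)
```

The key design point the sketch captures is the split in authority: the controller handles conditions it is programmed for on its own, and everything else flows up to the supervisory station and back down to the control equipment.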
The control system also can be programmed to issue alarms to the operator when certain conditions are detected. Handheld devices, such as personal digital assistants, can be used to locally monitor controller stations. Experts report that technologies in controller stations are becoming more intelligent and automated and are able to communicate with the supervisory central monitoring and control station less frequently, thus requiring less human intervention.

Historically, security concerns about control systems have been related primarily to protecting them against physical attack and preventing the misuse of refining and processing sites or distribution and holding facilities. However, more recently, there has been a growing recognition that control systems are now vulnerable to cyber attacks from numerous sources, including hostile governments, terrorist groups, disgruntled employees, and other malicious intruders. In October 1997, the President’s Commission on Critical Infrastructure Protection discussed the potential damaging effects on the electric power and oil and gas industries of successful attacks on control systems. Moreover, in 2002, the National Research Council identified “the potential for attack on control systems” as requiring “urgent attention.” In the first half of that year, security experts reported that 70 percent of energy and power companies experienced at least one severe cyber attack.
In February 2003, the President clearly demonstrated concern about “the threat of organized cyber attacks capable of causing debilitating disruption to our Nation’s critical infrastructures, economy, or national security,” noting that “disruption of these systems can have significant consequences for public health and safety” and emphasizing that the protection of control systems has become “a national priority.” Several factors have contributed to the escalation of risk to control systems, including (1) the adoption of standardized technologies with known vulnerabilities, (2) the connectivity of control systems to other networks, (3) insecure remote connections, and (4) the widespread availability of technical information about control systems.

In the past, proprietary hardware, software, and network protocols made it difficult to understand how control systems operated—and therefore how to hack into them. Today, however, to reduce costs and improve performance, organizations have been transitioning from proprietary systems to less expensive, standardized technologies such as Microsoft’s Windows, Unix-like operating systems, and the common networking protocols used by the Internet. These widely used, standardized technologies have commonly known vulnerabilities, and sophisticated and effective exploitation tools are widely available and relatively easy to use. As a consequence, both the number of people with the knowledge to wage attacks and the number of systems subject to attack have increased. Also, common communication protocols and the emerging use of extensible markup language (commonly referred to as XML) can make it easier for a hacker to interpret the content of communications among the components of a control system.

Enterprises often integrate their control systems with their enterprise networks.
This increased connectivity has significant advantages, including providing decision makers with access to real-time information and allowing engineers to monitor and control the process control system from different points on the enterprise network. In addition, the enterprise networks are often connected to the networks of strategic partners and to the Internet. Furthermore, control systems are increasingly using wide area networks and the Internet to transmit data to their remote or local stations and individual devices. This convergence of control networks with public and enterprise networks potentially creates further security vulnerabilities in control systems. Unless appropriate security controls are deployed in both the enterprise network and the control system network, breaches in enterprise security can affect the operation of control systems.

Vulnerabilities in control systems are exacerbated by insecure connections. Organizations often leave access links—such as dial-up modems to equipment and control information—open for remote diagnostics, maintenance, and examination of system status. If such links are not protected with authentication or encryption, the risk increases that hackers could use these insecure connections to break into remotely controlled systems. Also, control systems often use wireless communications systems, which are especially vulnerable to attack, or leased lines that pass through commercial telecommunications facilities. Without encryption to protect data as it flows through these insecure connections or authentication mechanisms to limit access, there is little to protect the integrity of the information being transmitted.

Public information about infrastructures and control systems is readily available to potential hackers and intruders.
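The kind of authentication that the report notes is often missing from remote access links can be sketched briefly. The example below uses Python's standard hmac module to attach a keyed tag to each command so the receiving station can verify who sent it; the shared key, command format, and function names are assumptions invented for this illustration and do not describe any actual control-system protocol.

```python
# Illustrative sketch of message authentication for a remote control
# link, using Python's standard hmac module. The key, command format,
# and function names are assumptions for this example only.
import hashlib
import hmac

SHARED_KEY = b"example-preshared-key"  # assumed to be provisioned out of band
TAG_LEN = 32  # length in bytes of an HMAC-SHA256 tag


def sign(command: bytes) -> bytes:
    """Append a keyed tag so the remote station can verify the sender."""
    tag = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    return command + tag


def verify(message: bytes):
    """Return the command if the tag checks out; None if the message
    was forged or tampered with in transit."""
    command, tag = message[:-TAG_LEN], message[-TAG_LEN:]
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    return command if hmac.compare_digest(tag, expected) else None
```

A scheme of this kind authenticates the sender and detects tampering but does not keep commands confidential; encrypting the link addresses that separately, and a real deployment would also need protection against replayed messages.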
The availability of this infrastructure and vulnerability data was demonstrated last year by a George Mason University graduate student who, in his dissertation, reportedly mapped every business and industrial sector in the American economy to the fiber-optic network that connects them, using material that was available publicly on the Internet—and not classified. In the electric power industry, open sources of information—such as product data and educational videotapes from engineering associations— can be used to understand the basics of the electrical grid. Other publicly available information—including filings of the Federal Energy Regulatory Commission (FERC), industry publications, maps, and material available on the Internet—is sufficient to allow someone to identify the most heavily loaded transmission lines and the most critical substations in the power grid. Many of the electric utility officials who were interviewed for the National Security Telecommunications Advisory Committee’s Information Assurance Task Force’s Electric Power Risk Assessment expressed concern over the amount of information about their infrastructure that is readily available to the public. In addition, significant information on control systems is publicly available—including design and maintenance documents, technical standards for the interconnection of control systems and RTUs, and standards for communication among control devices—all of which could assist hackers in understanding the systems and how to attack them. Moreover, there are numerous former employees, vendors, support contractors, and other end users of the same equipment worldwide who have inside knowledge about the operation of control systems. 
Security experts have stated that an individual with very little knowledge of control systems could gain unauthorized access to a control system using a port scanning tool and a factory manual that can be easily found on the Internet and that contains the system’s default password. As noted in the following discussion, many times these default passwords are never changed. There is a general consensus—and increasing concern—among government officials and experts on control systems about potential cyber threats to the control systems that govern our critical infrastructures. As components of control systems increasingly make vital decisions that were once made by humans, the potential effect of a cyber attack becomes more devastating. Cyber threats could come from numerous sources ranging from hostile governments and terrorist groups to disgruntled employees and other malicious intruders. Based on interviews and discussions with representatives from throughout the electric power industry, the Information Assurance Task Force of the National Security Telecommunications Advisory Committee concluded that an organization with sufficient resources, such as a foreign intelligence service or a well-supported terrorist group, could conduct a structured attack on the electric power grid electronically, with a high degree of anonymity, and without having to set foot in the target nation. In July 2002, NIPC reported that the potential for compound cyber and physical attacks, referred to as “swarming attacks,” was an emerging threat to the critical infrastructure of the United States. As NIPC reports, the effects of a swarming attack include slowing or complicating the response to a physical attack.
For instance, a cyber attack that disabled the water supply or the electrical system, in conjunction with a physical attack, could deny emergency services the necessary resources to manage the consequences of the physical attack—such as controlling fires, coordinating response, and generating light. According to the National Institute of Standards and Technology (NIST), cyber attacks on energy production and distribution systems—including electric, oil, gas, and water treatment, as well as on chemical plants containing potentially hazardous substances—could endanger public health and safety, damage the environment, and have serious financial implications such as loss of production, generation, or distribution by public utilities; compromise of proprietary information; or liability issues. When backups for damaged components are not readily available (e.g., extra-high-voltage transformers for the electric power grid), such damage could have a long-lasting effect. I will now discuss potential and reported cyber attacks on control systems, as well as challenges to securing them. 
Entities or individuals with malicious intent might take one or more of the following actions to successfully attack control systems: disrupt the operation of control systems by delaying or blocking the flow of information through control networks, thereby denying availability of the networks to control system operators; make unauthorized changes to programmed instructions in PLCs, RTUs, or DCS controllers, change alarm thresholds, or issue unauthorized commands to control equipment that could potentially result in damage to equipment (if tolerances are exceeded), premature shutdown of processes (such as prematurely shutting down transmission lines), or even disabling control equipment; send false information to control system operators either to disguise unauthorized changes or to initiate inappropriate actions by system operators; modify the control system software, producing unpredictable results; and interfere with the operation of safety systems. In addition, in control systems that cover a wide geographic area, the remote sites often are not staffed and may not be physically monitored. If such remote systems were to be physically breached, attackers could establish a cyber connection to the control network. Department of Energy (DOE) and industry researchers have speculated on how the following potential attack scenario could affect control systems in the electricity sector. Using war dialers to find modems connected to the programmable circuit breakers of the electric power control system, hackers could crack passwords that control access to the circuit breakers and could change the control settings to cause local power outages and even damage equipment. A hacker could lower settings from, for example, 500 amperes to 200 on some circuit breakers; normal power usage would then activate, or “trip,” the circuit breakers, taking those lines out of service and diverting power to neighboring lines. 
If, at the same time, the hacker raised the settings on these neighboring lines to 900 amperes, circuit breakers would fail to trip at these high settings, and the diverted power would overload the lines and cause significant damage to transformers and other critical equipment. The damaged equipment would require major repairs that could result in lengthy outages. Control system researchers at DOE’s national laboratories have developed systems that demonstrate the feasibility of a cyber attack on a control system at an electric power substation where high-voltage electricity is transformed for local use. Using tools that are readily available on the Internet, they are able to modify output data from field sensors and take control of the PLC directly in order to change settings and create new output. These techniques could enable a hacker to cause an outage, thus incapacitating the substation. Experts in the water industry consider control systems to be among the primary vulnerabilities of drinking water systems. A technologist from the water distribution sector has demonstrated how an intruder could hack into the communications channel between the control center of a water distribution pump station and its remote units, located at water storage and pumping facilities, to either block messages or send false commands to the remote units. Moreover, experts are concerned that terrorists could, for example, trigger a cyber attack to release harmful amounts of water treatment chemicals, such as chlorine, into the public’s drinking water. Experts in control systems have verified numerous incidents that have affected control systems. Reported attacks include the following: In 1994, the computer system of the Salt River Project, a major water and electricity provider in Phoenix, Arizona, was breached. 
In March 1997, a teenager in Worcester, Massachusetts, remotely disabled part of the public switching network, disrupting telephone service for 600 residents and the fire department and causing a malfunction at the local airport. In the spring of 2000, a former employee of an Australian company that develops manufacturing software applied for a job with the local government, but was rejected. Over a 2-month period, the disgruntled applicant reportedly used a radio transmitter on as many as 46 occasions to remotely hack into the controls of a sewage treatment system and ultimately release about 264,000 gallons of raw sewage into nearby rivers and parks. In the spring of 2001, hackers mounted an attack on systems that were part of a development network at the California Independent System Operator, a facility that is integral to the movement of electricity throughout the state. In August 2003, the Nuclear Regulatory Commission confirmed that in January 2003, the Microsoft SQL Server worm—otherwise known as Slammer—infected a private computer network at the Davis-Besse nuclear power plant in Oak Harbor, Ohio, disabling a safety monitoring system for nearly 5 hours. In addition, the plant’s process computer failed, and it took about 6 hours for it to become available again. Slammer reportedly also affected communications on the control networks of at least five other utilities by propagating so quickly that control system traffic was blocked. In addition, in 1997, the Department of Defense (DOD) undertook the first systematic exercise to determine the nation’s and DOD’s vulnerability to cyberwar. During a 2-week military exercise known as Eligible Receiver, staff from NSA used widely available tools to show how to penetrate the control systems that are associated with providers of electric power to DOD installations.
Other assessments of control systems at DOD installations have demonstrated vulnerabilities and identified risks in the installations’ network and operations. The control systems community faces several challenges to securing control systems against cyber threats. These challenges include (1) the limitations of current security technologies in securing control systems, (2) the perception that securing control systems may not be economically justifiable, and (3) the conflicting priorities within organizations regarding the security of control systems. According to industry experts, existing security technologies, as well as strong user authentication and patch management practices, are generally not implemented in control systems because control systems usually have limited processing capabilities, operate in real time, and are typically not designed with cybersecurity in mind. Existing security technologies such as authorization, authentication, encryption, intrusion detection, and filtering of network traffic and communications require more bandwidth, processing power, and memory than control system components typically have. Controller stations are generally designed to do specific tasks, and they often use low-cost, resource-constrained microprocessors. In fact, some control system devices still use the Intel 8088 processor, which was introduced in 1979. Consequently, it is difficult to install current security technologies without seriously degrading the performance of the control system. For example, complex passwords and other strong password practices are not always used to prevent unauthorized access to control systems, in part because this could hinder a rapid response to safety procedures during an emergency. As a result, according to experts, weak passwords that are easy to guess, shared, and infrequently changed are reportedly common in control systems, including the use of default passwords or even no password at all.
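The default- and weak-password problem described above is at least amenable to a simple automated audit. The sketch below is illustrative only: the device names, the inventory format, and the list of default credentials are assumptions for the example, not data from any real vendor or facility.

```python
# Hypothetical sketch: flag control-system devices still using a vendor
# default, blank, or very short password. All names and values below
# are made up for illustration.

# Illustrative vendor default credentials (assumed, not real products)
KNOWN_DEFAULTS = {"admin", "password", "1234", ""}

def audit_passwords(inventory):
    """Return (device, issue) pairs for weak or default passwords."""
    findings = []
    for device, password in inventory.items():
        if password in KNOWN_DEFAULTS:
            findings.append((device, "vendor default or blank password"))
        elif len(password) < 8:
            findings.append((device, "password shorter than 8 characters"))
    return findings

inventory = {
    "rtu-substation-7": "admin",          # never changed after installation
    "plc-pump-station": "",               # no password at all
    "hmi-control-room": "Tr!cky-pass-2004",
}

for device, issue in audit_passwords(inventory):
    print(f"{device}: {issue}")
```

Even a crude check like this would surface the two devices above that still carry installation-time credentials, while leaving the properly configured operator console alone.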
In addition, although modern control systems are based on standard operating systems, they are typically customized to support control system applications. Consequently, vendor-provided software patches may be either incompatible with the customized version of the operating system or difficult to implement without compromising service by shutting down “always-on” systems or affecting interdependent operations. Another constraint on deploying patches is that support agreements with control system vendors often require the vendor’s approval before the user can install patches. If a patch is installed in violation of the support agreement, the vendor will not take responsibility for potential impacts on the operations of the system. Moreover, because a control system vendor often requires that it be the sole provider of patches, if the vendor delays in providing patches, systems remain vulnerable without recourse. Information security organizations have noted that a gap exists between currently available security technologies and the need for additional research and development to secure control systems. Research and development in a wide range of areas could lead to more effective technologies. For example, although technologies such as robust firewalls and strong authentication can be employed to better segment control systems from external networks, research and development could help to address the application of security technologies to the control systems themselves. Other areas that have been noted for possible research and development include identifying the types of security technologies needed for different control system applications, determining acceptable performance trade-offs, and recognizing attack patterns for use in intrusion detection systems. Industry experts have identified challenges in migrating system components to newer technologies while maintaining uninterrupted operations. 
Upgrading all the components of a control system can be a lengthy process, and the enhanced security features of newly installed technologies—such as their ability to interpret encrypted messages—cannot be fully utilized until all devices in the system have been replaced and the upgrade is complete. Experts and industry representatives have indicated that organizations may be reluctant to spend more money to secure control systems. Hardening the security of control systems would require industries to expend more resources, including acquiring more personnel, providing training for personnel, and potentially prematurely replacing current systems, which typically have a lifespan of about 20 years. Several vendors suggested that since there have been no reports of significant disruptions caused by cyber attacks on U.S. control systems, industry representatives believe the threat of such an attack is low. While incidents have occurred, to date there is no formalized process for collecting and analyzing information about control systems incidents, thus further contributing to the skepticism of control systems vendors. We have previously recommended that the government work with the private sector to improve the quality and quantity of information being shared among industries and government about attacks on the nation’s critical infrastructures. Until industry users of control systems have a business case to justify why additional security is needed, there may be little market incentive for the private sector to develop and implement more secure control systems. We have previously reported that consideration of further federal government efforts is needed to provide appropriate incentives for nonfederal entities to enhance their efforts to implement CIP—including protection of control systems.
Without appropriate consideration of public policy tools, such as regulation, grants, and tax incentives, private-sector participation in sector-related CIP efforts may not reach its full potential. Finally, several experts and industry representatives indicated that the responsibility for securing control systems typically includes two separate groups: (1) IT security personnel and (2) control system engineers and operators. IT security personnel tend to focus on securing enterprise systems, while control system engineers and operators tend to be more concerned with the reliable performance of their control systems. These experts indicate that, as a result, those two groups do not always fully understand each other’s requirements and so may not effectively collaborate to implement secure control systems. These conflicting priorities may perpetuate a lack of awareness of IT security strategies that could be deployed to mitigate the vulnerabilities of control systems without affecting their performance. Although research and development will be necessary to develop technologies to secure individual control system devices, existing IT security technologies and approaches could be implemented as part of a secure enterprise architecture to protect the perimeters of, and access to, control system networks. Existing IT security technologies include firewalls, intrusion-detection systems, encryption, authentication, and authorization. Approaches to IT security include segmenting control system networks and testing continuity plans to ensure safe and continued operation. To reduce the vulnerabilities of its control system, officials from one company formed a team composed of IT staff, process control engineers, and manufacturing employees. This team worked collaboratively to research vulnerabilities and to test fixes and workarounds.
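One way to picture the perimeter-protection and segmentation approach described above is as a default-deny policy between network zones, where the enterprise network can reach the control network only through a hardened gateway. The sketch below is a minimal illustration; the zone names and permitted flows are assumptions for the example, not a recommended ruleset.

```python
# Minimal sketch of zone-based segmentation: traffic between network
# zones is denied unless the flow is explicitly allowed. Zone names
# and the permitted flows are illustrative assumptions.

ALLOWED_FLOWS = {
    ("enterprise", "dmz_gateway"),   # enterprise hosts reach the gateway only
    ("dmz_gateway", "control"),      # the gateway proxies requests inward
    ("control", "dmz_gateway"),      # control data is published outward
}

def is_permitted(src_zone, dst_zone):
    """Default-deny: a flow passes only if it is explicitly allowed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# Direct paths from the enterprise network or the Internet to the
# control network are denied; only the mediated path is allowed.
assert is_permitted("enterprise", "dmz_gateway")
assert not is_permitted("enterprise", "control")
assert not is_permitted("internet", "control")
```

The design point is the default: anything not on the allow list is blocked, so a breach of the enterprise network does not automatically become a breach of the control network.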
Government, academia, and private industry have independently initiated multiple efforts and programs focused on some of the key areas that should be addressed to strengthen the cybersecurity of control systems. Our March 2004 report includes a detailed discussion of many initiatives. The key areas—and illustrative examples of ongoing efforts in these areas—include the following: Research and development of new security technologies to protect control systems. Both federal and nonfederal entities have initiated efforts to develop encryption methods for securing communications on control system networks and field devices. Moreover, DOE is planning to establish a National SCADA Test Bed to test control system vulnerabilities. However, funding constraints have delayed the implementation of the initial phases of these plans. Development of requirements and standards for control system security. Several entities are working to develop standards that increase the security of control systems. The North American Electric Reliability Council (NERC) is preparing to draft a standard that will include security requirements for control systems. In addition, the Process Controls Security Requirements Forum (PCSRF), established by NIST and NSA, is working to define a common set of information security requirements for control systems. However, according to NIST officials, reductions to fiscal year 2004 appropriations will delay these efforts. Increased awareness of security and sharing of information about the implementation of more secure architectures and existing security technologies. To promote awareness of control system vulnerabilities, DOE has created security programs, trained teams to conduct security reviews, and developed cybersecurity courses. 
The Instrumentation Systems and Automation Society has reported on the known state of the art of cybersecurity technologies as they are applied to the control systems environment, to clearly define what technologies can currently be deployed. Implementation of effective security management programs, including policies and guidance that consider control system security. Both federal and nonfederal entities have developed guidance to mitigate the security vulnerabilities of control systems. DOE’s 21 Steps to Improve Cyber Security of SCADA Networks provides guidance for improving the security of control systems and establishing underlying management processes and policies to help organizations improve the security of control system networks. In previous reports, we have recommended the development of a comprehensive and coordinated national plan to facilitate the federal government’s CIP efforts. This plan should clearly delineate the roles and responsibilities of federal and nonfederal CIP entities, define interim objectives and milestones, set time frames for achieving objectives, and establish performance measures. The President in his homeland security strategies and Congress in enacting the Homeland Security Act designated DHS as responsible for developing a comprehensive national infrastructure plan. The plan is expected to inform DHS on budgeting and planning for CIP activities and on how to use policy instruments to coordinate among government and private entities to raise the security of our national infrastructures to appropriate levels. According to Homeland Security Presidential Directive 7 (HSPD 7), issued December 17, 2003, DHS is to develop this formalized plan by December 2004. In February 2003, the President’s National Strategy to Secure Cyberspace established a role for DHS to coordinate with other government agencies and the private sector to improve the cybersecurity of control systems. 
DHS’s assigned role includes: ensuring that there is broad awareness of the vulnerabilities in control systems and the consequences of exploiting these vulnerabilities, developing best practices and new technologies to strengthen the security of control systems, and identifying the nation’s most critical control system sites and developing a prioritized plan for ensuring cyber security at those sites. In addition, the President’s strategy recommends that DHS work with the private sector to promote voluntary standards efforts and the creation of security policy for control systems. DHS recently began to focus on the range of activities that are under way among the numerous entities that are working to address these areas. In October 2003, DHS’s Science and Technology Directorate initiated a study to determine the current state of security of control systems. In December 2003, DHS established the Control Systems Section within the Protective Security Division of its Information Analysis and Infrastructure Protection (IAIP) Directorate. The objectives of this section are to identify computer- controlled systems that are vital to infrastructure functions, evaluate the potential threats to these systems, and develop strategies that mitigate the consequences of attacks. In addition, IAIP’s National Cyber Security Division (NCSD) is planning to develop a methodology for conducting cyber assessments across all critical infrastructures, including control systems. The objectives of this effort include defining specific goals for the assessments and, based on their results, developing sector-specific recommendations to mitigate vulnerabilities. NCSD also plans to examine processes, technology, and available policy, procedures, and guidance. Because these efforts have only recently been initiated, DHS acknowledges that it has not yet developed a strategy for implementing the functions mentioned above. 
As I previously mentioned, many government and nongovernment entities are spearheading various initiatives to address the challenge of implementing cybersecurity for the vital systems that operate our nation’s critical infrastructures. While some coordination is occurring, both federal and nonfederal control systems experts have expressed their concern that these efforts are not being adequately coordinated among government agencies, the private sector, and standards-setting bodies. DHS’s coordination of these efforts could accelerate the development and implementation of more secure systems to manage our critical infrastructures. In contrast, insufficient coordination could contribute to delays in the general acceptance of security requirements and the adoption of successful practices for control systems, failure to address gaps in the research and development of technologies to better secure control systems, impediments to standards-creating efforts across industries that could lead to less expensive technological solutions, and reduced opportunities for efficiency that could be gained by leveraging ongoing work. In summary, it is clear that the systems that monitor and control the sensitive processes and physical functions of the nation’s critical infrastructures are at increasing risk from threats of cyber attacks. Securing these systems poses significant challenges. Numerous federal agencies, critical infrastructure sectors, and standards-creating bodies are leading various initiatives to address these challenges. DHS’s implementation of our recommendation—with which the department concurred—to develop and implement a strategy for better coordinating the cybersecurity of our critical infrastructures’ control systems among government and private sector entities can accelerate progress in securing these critical systems. Additionally, implementing existing IT technologies and security approaches can strengthen the security of control systems. 
These approaches include establishing an effective security management program, building successive layers of defense mechanisms at strategic access points to the control system network, and developing and testing continuity plans to ensure safe operation in the event of a power outage or cyber attack. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Subcommittee may have at this time. If you should have any questions about this statement, please contact me at (202) 512-3317 or Elizabeth Johnston, Assistant Director, at (202) 512-6345. We can also be reached by e-mail at daceyr@gao.gov and johnstone@gao.gov, respectively. Other individuals who made key contributions to this testimony include Shannin Addison, Joanne Fiorino, Alison Jacobs, Anjalique Lawrence, and Tracy Pierson.

Computerized control systems perform vital functions across many of our nation's critical infrastructures. For example, in natural gas distribution, they can monitor and control the pressure and flow of gas through pipelines. In October 1997, the President's Commission on Critical Infrastructure Protection emphasized the increasing vulnerability of control systems to cyber attacks. At the request of the House Committee on Government Reform, Subcommittee on Technology, Information Policy, Intergovernmental Relations and the Census, this testimony will discuss GAO's March 2004 report on potential cyber vulnerabilities, focusing on (1) significant cybersecurity risks associated with control systems, (2) potential and reported cyber attacks against these systems, (3) key challenges to securing control systems, and (4) efforts to strengthen the cybersecurity of control systems. In addition to general cyber threats, which have been steadily increasing, several factors have contributed to the escalation of the risks of cyber attacks against control systems.
These include the adoption of standardized technologies with known vulnerabilities and the increased connectivity of control systems to other systems. Control systems can be vulnerable to a variety of attacks, examples of which have already occurred. Successful attacks on control systems could have devastating consequences, such as endangering public health and safety. Securing control systems poses significant challenges, including limited specialized security technologies and lack of economic justification. The government, academia, and private industry have initiated efforts to strengthen the cybersecurity of control systems. The President's National Strategy to Secure Cyberspace establishes a role for DHS to coordinate with these entities to improve the cybersecurity of control systems. While some coordination is occurring, DHS's coordination of these efforts could accelerate the development and implementation of more secure systems. Without effective coordination of these efforts, there is a risk of delaying the development and implementation of more secure systems to manage our critical infrastructures.
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to discuss the U.S. Census Bureau’s preparations and operational plans for its dress rehearsal for the 2000 Census, which is currently under way at three sites: Sacramento, CA; 11 counties in the Columbia, SC, area; and Menominee County in Wisconsin, including the Menominee American Indian Reservation. To the extent that the dress rehearsal mirrors the actual census, the dress rehearsal could foreshadow how well key census-taking activities might work in the decennial, and thus indicate where additional congressional and Bureau attention is needed now to ensure successful results in 2000. My overall point today is that the dress rehearsal, originally contemplated as a concerted demonstration of a well-defined census design for 2000, instead will leave a number of design and operational questions unanswered. These unresolved issues led us in 1997 to raise concerns about the high risk of a failed census in 2000. At your request, my statement focuses on the progress, if any, that the Bureau has made since July 1997, when we reported that the risk of a failed census in 2000 had increased since we originally designated the 2000 Census as a high-risk area in February 1997. Specifically, we pay special attention to the challenges the Bureau faces in implementing such key census-taking activities as (1) creating a complete and accurate address list, (2) increasing the mail response rate through outreach and promotion, (3) staffing census-taking operations with an adequate workforce, and (4) carrying out its sampling and statistical estimation procedures. These subjects are covered extensively in our report prepared at the request of the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs, which we are making available today. I also provide my preliminary observations on the status of the Bureau’s dress rehearsal evaluation program. 
To do our work, we (1) visited dress rehearsal sites and contacted Menominee officials by telephone; (2) conducted in-person and telephone interviews with local officials on their experiences in reviewing address lists, promoting the census, and recruiting and hiring census workers; and (3) where applicable, reviewed relevant documents on these activities. Information on the Bureau’s dress rehearsal evaluation program was obtained by conducting a content analysis of the Bureau’s evaluation proposals and by interviewing cognizant Bureau officials. Since the Bureau has yet to finalize its evaluation plans, our observations should be considered preliminary. The 1990 Census was the most costly in history, and it produced data that were less accurate than those from the 1980 Census. About 6 million persons were counted twice in the 1990 Census, while 10 million persons were missed—for a total of 16 million gross errors in the count. Of particular concern was the fact that the 1990 Census was more likely to miss minority groups and renters, particularly those living in rural areas. To address the problems that occurred in 1990, the Bureau redesigned key components of the census, such as procedures for developing a complete and accurate address list, increasing the mail response rate through outreach and promotion, staffing census-taking operations with a capable workforce, and reducing costs and improving accuracy through sampling and statistical estimation. However, Congress has not endorsed the Bureau’s overall design because of its concerns over the validity, legality, and operational feasibility of the Bureau’s statistical sampling and estimation procedures.
Because of the significant and long-standing operational and technical challenges that the Bureau faces in taking the census, and the continuing disagreement between Congress and the administration over the use of sampling, in February 1997, we designated the 2000 Decennial Census as being at high risk for wasted expenditures and unsatisfactory results. In July 1997, with still no agreement and uncertainties surrounding the feasibility of some key census operations, we reported that risks of a failed census in 2000 had increased. The dress rehearsal centers on Census Day—April 18, 1998. However, as is the case with the actual census, the Bureau’s dress rehearsal activities span a much wider period of time than this single day. Following the selection of the dress rehearsal sites in July 1996, the Bureau developed preliminary mailing lists and materials for these locations, contacted local governments at the three sites, and conducted staffing activities to hire temporary census employees in those locations. Similarly, after April 18, the Bureau is to develop its census count by conducting the necessary follow-up activities at nonresponding households and completing other fieldwork. The key to a successful dress rehearsal is making it as much like the decennial census as possible. Thus, according to the Bureau, the dress rehearsal for the 2000 Census should test nearly all of the various operations, procedures, and questions that are planned for the decennial under as census-like conditions as feasible. However, as an indication of increasing congressional concern over the Bureau’s plans for the 2000 Census, the administration and Congress agreed, as part of a compromise over the Bureau’s fiscal year 1998 appropriation, that the Bureau would use sampling and statistical estimation methods only in the Sacramento site, rather than at all three dress rehearsal sites as it plans to do nationally in 2000.
In the Columbia site, the Bureau is to follow up on all nonresponding households just as it did nationwide in the 1990 Census. At the Menominee dress rehearsal site, the Bureau is also to follow up on all nonresponding households, but is to use sampling and statistical estimation to improve the accuracy of the population count. Although use of the different methods at the dress rehearsal sites invites a comparison of the results, the dress rehearsal is not a test of competing census designs. Geographic, demographic, and possibly other differences among the dress rehearsal locations preclude such a comparison. These unresolved issues heighten the risk of a failed census—one on which the nation would have spent billions of dollars and still have demonstrably inaccurate results. Complete and accurate address lists, along with precise maps, are the foundation of a successful census. Accurate addresses are essential for delivering questionnaires, avoiding unnecessary and expensive follow-up efforts at vacant or nonexistent residences, and establishing a universe of households for sampling and statistical estimation. Accurate maps are critical for assigning correct portions of the population to their proper locations—an operation that is the foundation of congressional redistricting. To build its address list, which is known as the Master Address File (MAF), the Bureau initially planned, in part, to (1) use addresses provided by the Postal Service, (2) merge these addresses with the address file the Bureau created during the 1990 Census, (3) conduct limited checks of the accuracy of selected addresses, and (4) send the addresses to local governments for verification as part of a process called Local Update of Census Addresses (LUCA). However, the Bureau’s analyses of the completeness of the Postal Service’s addresses, when combined with the Bureau’s 1990 addresses for a selected number of locations, have shown that the resulting address list was not always complete.
For example, address lists created in 1995 for two test locations did not include from 3.6 to 6.4 percent of the addresses identified through other Bureau operations, such as field verification. Following these and similar analyses for lists created in 1996, the Bureau concluded in September 1997 that primary reliance on the Postal Service’s and the Bureau’s 1990 address files was not sufficient, and that it needed to redesign its procedures in order to build a MAF for the 2000 Census that, as a whole, is 99 percent complete. Under the new procedures, which are estimated to cost an additional $108.7 million, the Bureau now plans to canvass neighborhoods across the nation to physically verify the completeness and accuracy of the address file for the 2000 Census prior to local address review. While the components of the new approach have been used and tested in prior censuses, the Bureau has not used or tested them either in concert with each other or in the sequence as presently designed for use in the 2000 Census, and does not plan to do so in the dress rehearsal. Consequently, it will not be known until the 2000 Census whether the Bureau’s redesigned procedures will allow it to meet its goal. Further, the dress rehearsal results to date suggest that LUCA may be too inconsistent and face too many obstacles to systematically verify or increase the accuracy of the MAF. For example, despite the Bureau’s efforts to encourage all local jurisdictions to participate, just 34 of the 60 local jurisdictions involved with the dress rehearsal participated in LUCA. Reasons for the low participation rate included the lack of resources and/or information to review address lists or maps at the local level. Jurisdictions that participated in LUCA said that problems with the level of Bureau assistance, as well as with the accuracy and completeness of the address lists and maps, impeded their review efforts. 
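The completeness checks described above amount to merging candidate address files and then measuring how many independently verified addresses the merged list misses. A minimal sketch in Python (the addresses and figures below are invented for illustration; this is not the Bureau's actual data or software):

```python
def merge_address_lists(*lists):
    """Merge several address lists, dropping exact duplicates."""
    merged = set()
    for lst in lists:
        merged.update(lst)
    return merged

def percent_missing(master_list, field_verified):
    """Share of field-verified addresses absent from the master list."""
    missing = field_verified - master_list
    return 100.0 * len(missing) / len(field_verified)

# Illustrative inputs standing in for the Postal Service file,
# the Bureau's 1990 file, and addresses found by field verification.
postal = {"101 Oak St", "102 Oak St", "1 Elm Ave"}
census_1990 = {"101 Oak St", "2 Elm Ave"}
field_verified = {"101 Oak St", "102 Oak St", "1 Elm Ave", "2 Elm Ave", "3 Elm Ave"}

maf = merge_address_lists(postal, census_1990)
print(round(percent_missing(maf, field_verified), 1))  # 20.0
```

With these toy inputs, one of the five field-verified addresses is absent from the merged list, a 20-percent gap; the Bureau's analyses computed the same kind of shortfall, finding 3.6 to 6.4 percent of verified addresses missing.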
Although the Bureau’s reengineered address development procedures call for obtaining earlier assistance from local governments to review addresses and maps, this does not address other problems encountered by local officials in reviewing address lists during the dress rehearsal, such as the unavailability of Bureau assistance and the inconsistent quality of the address list and maps. To help increase the mail response rate and thus reduce its costly nonresponse follow-up workload, the Bureau plans to partner with local governments and other organizations to raise public awareness of the census. The Bureau expects that its outreach and promotion efforts, combined with other initiatives, such as simplified census questionnaires, should produce a mail response rate of 66.9 percent for the 2000 Census. This is 12 percentage points higher than the 55-percent response rate that the Bureau expects it would achieve without these activities and slightly higher than the 65-percent response rate achieved in the 1990 Census. Nevertheless, the Bureau’s experience thus far during the dress rehearsal suggests that, in 2000, this goal might be difficult to achieve. According to the Bureau, the success of its outreach and promotion efforts will depend heavily on the effectiveness of the partnerships it hopes to build with state, local, and tribal governments; the private sector; various media; and other organizations. Citing agency policy, the Bureau has said that it is unable to fund local outreach and promotion efforts. It is therefore placing a priority on working with partners because they can help publicize the census, foster participation, and dispel myths, among other activities. Composed of community, social service, religious, and other local leaders, complete count committees (CCCs) are to help mobilize grassroots promotion efforts. However, not all of the dress rehearsal jurisdictions where the Bureau hoped to establish committees had done so at the time of our review.
For example, in South Carolina, of the 11 counties and the City of Columbia participating in the dress rehearsal, just 3 counties and Columbia had active committees at the time of our review. The eight remaining counties either had not started committees or had formed committees that subsequently became inactive. We found that the operational problems the CCCs were encountering had several sources. Among these were communication difficulties between the CCCs and the Bureau. Four of the six active CCCs we contacted at the three dress rehearsal sites indicated that the Bureau did not set clear expectations for their CCCs, especially when they were first initiated, and/or Bureau guidance and literature had been minimal. Another element of the Bureau’s outreach and promotion strategy is a paid advertising campaign. In the 1990 Census, the Bureau relied on pro bono public service advertising to get its message across. In October 1997, the Bureau announced that it had awarded its 2000 Census paid-advertising contract to Young & Rubicam, which is a private advertising agency. The Bureau has budgeted about $100 million for this effort, of which about 80 percent has been earmarked for buying advertising in print and broadcast media. Nevertheless, the advertising agency faces not only the familiar task of developing public awareness of the census, but also the greater challenge of motivating people to return their questionnaires in spite of a long-term decline in the mail response rate. While the Bureau found that 93 percent of the public was aware of the census in 1990, the mail response rate was only 65 percent, 10 percentage points lower than it was in 1980. For the 2000 Census, the Bureau estimates that it will need to recruit over 2.6 million applicants to fill about 295,000 positions.
Aside from the large numbers of people needed, hiring census workers could be difficult because most census jobs are part-time and temporary and do not come with such benefits as health insurance. Consequently, potential applicants may not find census jobs as attractive as alternative work opportunities. To compete for workers, the Bureau plans to pay employees a wage that is based on local rates and to offer productivity incentives. However, if current employment trends continue, the Bureau could find itself recruiting workers in a tighter labor market than prevailed in 1990. Furthermore, the Bureau’s decision to focus its recruitment efforts on moonlighters and retirees is based on informal discussions with census workers during the 1995 Census Test, the hiring practices of private survey research firms, and census workforce studies that may not necessarily be comparable to the 2000 Census situation. Declining response rates have posed problems for the Bureau since it began its mail-out/mail-back procedure in 1970. Hundreds of thousands of additional enumerators must be hired to collect census information from an increasing number of nonresponding households. To reduce its nonresponse follow-up workload, the Bureau plans to sample nonresponding households for the 2000 Census. The Bureau has also designed a procedure called Integrated Coverage Measurement (ICM) by which it is to take a separate sample after the nonresponse follow-up is completed to make adjustments to the census counts. However, these activities face several challenges. For example, it is uncertain whether the Bureau can complete its nonresponse follow-up and ICM operations in the time allotted, considering that in 1990 similar processes took longer even though the amount of work was less. In 1990, the Bureau allowed 4 weeks from Census Day for mail response before beginning nonresponse follow-up. In 2000, the Bureau also plans to allow 4 weeks from Census Day for mail response.
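The idea behind sampling nonresponding households—visiting only a fraction of them and weighting the results up to the full nonresponse universe—can be sketched as a simple expansion estimator. This is an illustrative simplification, not the Bureau's actual estimation methodology, and all function names and figures are invented:

```python
import random

def estimate_total(mail_counts, nonresponder_counts, sample_rate, seed=1):
    """Persons counted by mail, plus a weighted-up estimate from a
    random sample of nonresponding households (expansion estimator)."""
    rng = random.Random(seed)
    n = len(nonresponder_counts)
    sample_size = max(1, round(n * sample_rate))
    # In practice only the sampled households are visited; here the
    # list stands in for what enumerators would find at each door.
    sampled = rng.sample(nonresponder_counts, sample_size)
    weight = n / sample_size
    return sum(mail_counts) + weight * sum(sampled)

mail = [2, 3, 1, 4]                  # persons per mail-responding household
nonresp = [2, 2, 3, 1, 5, 2, 4, 1]  # persons per nonresponding household
print(estimate_total(mail, nonresp, sample_rate=0.5))
```

Sampling half the nonresponders halves the follow-up workload; the tradeoff is that the final count carries sampling error, which is why the design drew methodological scrutiny from Congress.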
In 1990, nonresponse follow-up was scheduled to last 6 weeks, but in some locations lasted 14 weeks. For the 2000 Census, the Bureau will again allow 6 weeks for completion of nonresponse follow-up. In 1990, an operation similar to the ICM was not completed until January 4, 1991, while for the 2000 Census, the Bureau plans to perform the same tasks for five times the number of households by the end of September 2000. Running short of time could also force enumerators to obtain census information from other individuals outside of the nonresponding households, a method that in the past has been shown to be less accurate. A properly designed evaluation program that provides information on the cost, performance, required resources, timing of various census operations, and the quality and completeness of census data is essential for the Bureau to assess the feasibility of its operational plans. We believe that, to be most effective, the evaluation effort needs to begin with a determination of what information will be required to support critical decisions and when that information needs to be available to Bureau and other decisionmakers. However, we are concerned that, with Dress Rehearsal Census Day a little over 3 weeks away, the Bureau’s evaluation program plan is still a work in progress, and that uncertainties surround the Bureau’s approaches and methodologies for carrying out specific studies. According to the Bureau, its dress rehearsal evaluation program or “report card” is designed to validate plans for the 2000 Census, measure coverage of persons and housing units, and evaluate the completeness and quality of census data.
Specifically, it is to consist of a status report to track the performance of key census operations at different points in time during the rehearsal, a quality assurance checkpoint system to monitor key dress rehearsal processes and signal where additional assistance is necessary to ensure operations remain on track, and a series of evaluations to determine how good the census data are in terms of statistical and other quality measures. The Bureau plans to establish a set of performance standards for measuring success at each site. Such performance standards are to measure, for example, the completeness of the MAF and the effectiveness of the paid advertising campaign. The standards, however, will not be used to measure the operational performance of one site against another. In addition, the Bureau has yet to finalize its evaluation plans, and the methodologies for some of these evaluations are not sufficiently defined to provide assurances that needed evaluation data will be available on a timely basis. The Bureau continues to complete the methodological details of its evaluations, and plans to issue those details as they are finished. In summary, although the Bureau has made progress in addressing some of the problems that occurred during the 1990 Census, key activities continue to face operational challenges at a disturbingly late stage in the census cycle when the Bureau should be fine tuning rather than revising its basic operational plans. Moreover, the ongoing lack of an agreement between Congress and the administration over the final design of the 2000 Census has only added to the challenges facing the Bureau. So long as this condition persists, the risk of a failed census in 2000 will continue to increase. We look forward to supporting congressional oversight of the planning and conduct of the decennial census, and we will continue monitoring the dress rehearsal and the census evaluation program, as well as the Bureau’s preparations for the decennial census. Mr.
Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or other members of the Subcommittee may have. | GAO discussed the Census Bureau's preparations and operational plans for its dress rehearsal for the 2000 Census, focusing on the progress that the Bureau has made since July 1997.
GAO noted that: (1) the dress rehearsal for the 2000 Census is currently under way at three sites: (a) Sacramento, California; (b) 11 counties in the Columbia, South Carolina, area; and (c) Menominee County in Wisconsin, including the Menominee American Indian Reservation; (2) although it was originally intended to demonstrate the Census Bureau's plans for the 2000 Census, the dress rehearsal will instead leave a number of design and operational issues unresolved; (3) these unresolved issues led GAO in 1997 to raise concerns about the high risk of a failed census in 2000; (4) accurate address lists and associated maps are the building blocks for a successful census; (5) however, the Bureau has concluded that its original procedures for building the 2000 Census address list might not meet its goal of being 99-percent complete; (6) although the Bureau has since revised its address list development procedures, they will not be tested during the dress rehearsal, thus it will not be known until the 2000 Census whether they will meet the Bureau's goal; (7) the Bureau's outreach and promotion initiatives are designed to boost mail response rates and thus avoid costly followups to nonresponding households; (8) while the Bureau is to rely on partnerships with local governments and organizations to raise public awareness of the census, the level of participation in these efforts has been inconsistent during the dress rehearsal, suggesting their impact on response in 2000 may be limited; (9) uncertainties surround the Bureau's ability to staff the 295,000 mostly temporary office and field positions necessary to conduct the census; (10) census jobs may not be as attractive as other positions, and, if current trends continue, the Bureau could find itself competing for workers in a tight labor market; (11) the Bureau's sampling and statistical estimation procedures, while they could reduce costs and improve accuracy if properly implemented, face methodological, technological,
and quality control challenges; (12) in addition to these operational challenges, the Bureau has not finalized its plans for evaluating the dress rehearsal, thus it is not known whether the evaluations will provide needed data to assess the feasibility of the Bureau's plans for the 2000 Census; (13) further, Congress has not endorsed the Bureau's overall design of the 2000 Census because of its concerns over the Bureau's plans to use statistical sampling and estimation procedures; and (14) the longer this impasse continues, the greater the likelihood of a failed census. |
Microelectronics focuses on the study and manufacture of micro devices, such as silicon integrated circuits, which are fabricated in submicron dimensions and form the basis of all electronic products. In DOD research, microelectronics extends beyond silicon integrated circuits and cuts across scientific disciplines such as biological sciences, materials sciences, quantum physics, and photonics. DOD research also covers many different types of materials, devices, and processes. For example, DOD service laboratories conduct research in materials other than silicon, such as gallium nitride, indium arsenide, and silicon carbide—materials that could provide higher performing or more reliable devices to meet DOD needs. DOD’s overall budget authority for fiscal year 2005 was approximately $400 billion. About $69 billion, or 17 percent of the overall budget, was directed toward research and development activities. The vast majority of this funding goes to development programs for major systems such as the Joint Strike Fighter and the Space Based Infrared System High. About $5.2 billion, or about 1.3 percent of the overall budget, was directed toward research (see fig. 1). Because DOD tracks funding by funding category, not by specific technology area, the microelectronics portion of this funding category cannot be broken out. DOD research and technology development is conducted by universities, DOD laboratories, industry, and other organizations. Universities and DOD laboratories are primarily involved in research. Once a new device is proven and has potential application for DOD, the technology is transferred to industry to further develop and ultimately produce and integrate into defense systems. These organizations may collaborate on microelectronics projects through various arrangements, such as cooperative research and development agreements and collaborative technology alliances. 
Figure 2 shows the distribution of DOD research and advanced technology development funding by performing organizations. Microelectronics production and research prototyping require specialized equipment and facilities. To prevent flaws in production, microelectronic devices are produced in clean rooms where the air is constantly filtered, and temperature, humidity, and pressure may be regulated. Clean rooms are rated according to a federal standard. For example, a class 1000 clean room has no more than 1000 particles larger than 0.5 microns in a cubic foot of air, while a class 100 clean room has no more than 100 particles. The people who work in clean rooms wear special protective clothing that prevents workers from contaminating the room (see fig. 3). The equipment found at research facilities and at production facilities are similar but are used for different purposes. Because research facilities focus on developing new device concepts, materials, and processes, the equipment is set up for flexibility because it is used for different experiments to prove concepts and validate theories. Once a technology is sufficiently developed, a small quantity is prototyped in a production environment to prove the design. Production facilities are set up to produce higher volumes of microelectronics and have more automation and multiple sets of equipment to increase productivity. At the time of our review, eight DOD and FFRDC facilities that received funding from DOD were involved in microelectronics research prototyping or production. Three military facilities focused solely on research; three primarily focused on research but had limited production capabilities; and two focused solely on production (see fig. 4). The three military facilities provide basic and applied research covering a wide spectrum of microelectronic devices and materials. 
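The clean room ratings described above can be expressed as a simple threshold test. A minimal sketch (the function name is illustrative; the federal standard defines limits at several particle sizes, and only the 0.5-micron count is shown here):

```python
def meets_class(particles_per_ft3, room_class):
    """True if a measured count of particles larger than 0.5 micron
    per cubic foot of air is within the given class limit."""
    return particles_per_ft3 <= room_class

# A class 100 room allows at most 100 such particles per cubic foot,
# a class 1000 room at most 1000.
assert meets_class(85, 100)
assert not meets_class(450, 100)   # too dirty for class 100...
assert meets_class(450, 1000)      # ...but within class 1000
```

The lower the class number, the cleaner the room, which is why production of sensitive devices such as cryptographic circuits is done at class 10 while much research can be done at class 100.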
For example, the Naval Research Laboratory facility is conducting basic research on the potential application of nonsilicon materials in microelectronic devices. Through its applied research, the Air Force Research Laboratory facility developed a process to improve the performance and reliability of microwave devices needed for military radar and communications systems. This technology was ultimately transferred from the Air Force to various contractors and used in a number of systems, including the Joint Strike Fighter. The Army Research Laboratory facility conducts both basic and applied research, primarily on multifunction radiofrequency, optoelectronics, and power conversion. Three other facilities also conduct research but can produce prototypes or limited numbers of devices if commercial sources are not available. For example, the Lincoln Laboratory’s facility—which primarily focuses on applied research in sensing and signal processing technologies—has developed components for the space-based visible sensor because no commercial source was available to meet this DOD need. Sandia’s facility primarily focuses on research and design of radiation hardened microelectronics. However, because the number of commercial producers able to meet the radiation requirements of the Department of Energy and DOD has dwindled to two suppliers, Sandia maintains limited in-house production capability to fill near-term critical needs. According to Sandia officials, they have not been called upon to produce microelectronics for DOD in recent years. The SPAWAR facility, which recently closed, primarily conducted research on radiation-hardened microelectronics, but at one time produced these devices for the Navy’s Trident missile system. When production of these devices was transferred to a commercial supplier, the facility maintained capability to produce microelectronics as a back-up to the commercial supplier. 
Two facilities focused only on production—one on leading edge technology and one on lagging edge technology. NSA’s microelectronics facility focuses on producing cryptographic microelectronics—devices not readily obtainable on the commercial market because of their unique and highly classified requirements. DMEA fills a unique role within DOD by providing solutions for microelectronic devices that are no longer commercially available. DMEA acquires process lines that commercial firms are abandoning and, through reverse-engineering and prototyping, provides DOD with these abandoned devices. In some cases, DMEA may produce the device. The type and complexity of research conducted or device produced largely determines a facility’s clean room class and size and its equipment replacement costs. For example, to produce cryptographic electronics, NSA has a 20,000 square foot class 10 clean room facility. In contrast, the Naval Research Laboratory conducts research in a 5,000 square foot class 100 clean room facility, with some class 10 modules where greater cleanliness is required. In general, research does not require state-of-the-art equipment to prove concepts, and tools can be purchased one at a time and are often second-hand or donated. Table 1 summarizes the eight facilities’ microelectronics focus, clean room class and size, and equipment replacement costs. Since we began our review, the SPAWAR facility closed on October 31, 2004, making Sandia the only backup to the two remaining commercial radiation-hardened suppliers to DOD. Officials from the facility told us that without funds from the Trident program, operating the facility became cost prohibitive. Further, NSA’s microelectronics facility is slated for closure in 2006. NSA estimated that it would cost $1.7 billion to upgrade its equipment and facility to produce the next generation of integrated circuits. NSA is contracting with IBM to take over production of the microelectronic devices produced at its facility.
Part of the contract costs includes security requirements for IBM to produce classified circuits. There may be changes to other facilities pending the review of the Base Realignment and Closure Commission for 2005. As a result of prior commission recommendations, the Army constructed a new facility to consolidate Army specialized electronics research into one location. DOD has several mechanisms in place aimed at coordinating and planning research conducted by the Air Force, Army, Navy, and defense agencies. In electronics and microelectronics research, DOD works with industry to review special technology areas and make recommendations about future research. DOD’s Defense Reliance process provides the Department with a framework to look across science and technology (S&T) efforts of the Defense Advanced Research Projects Agency, Defense Threat Reduction Agency, and the Missile Defense Agency as well as the Army, Navy, and Air Force. Each service and defense agency updates its own S&T plans with the needs of each organization in mind. The Defense Reliance process is intended to improve coordination and determine if the overall DOD S&T vision and strategy are being met. The Defense Science and Technology Strategy document is updated periodically to provide a high-level description of what the science and technology programs aim to accomplish. The Defense Reliance process includes the development of three planning documents, which taken together provide a near-, mid-, and long-term look at DOD specific research needs (see table 2). The planning documents present the DOD S&T vision, strategy, plan, and objectives for the planners, programmers, and performers of defense S&T and guide the annual preparation of the defense program and budget. Figure 5 illustrates the relationship between the planning documents and overall reliance process. Science and technology efforts are planned and funded through service and defense agency plans. 
To obtain a perspective across DOD, a portion of the service and agency efforts are represented in the various Defense Reliance planning documents. DOD’s goal is to have about half of the investment in service and agency efforts represented in defense technology objectives. According to DOD officials, this goal is aimed at balancing flexibility—which services and defense agencies need to pursue research that is important to their organizations—with oversight and coordination. DOD officials stated that looking at a portion of the efforts provides an adequate perspective of the S&T research across the services and defense agencies to help ensure the goals of DOD’s S&T strategy are being met. These projects are generally considered high priority, joint efforts, or both. Two key components in the Defense Reliance process are the defense technology objectives and technology area review and assessments. Defense technology objectives are intended to guide the focus of DOD’s science and technology investments by identifying the following: objectives, the specific technology advancements that will be developed; payoffs, the specific benefits to the warfighter resulting from the technology advancements; challenges, the technical barriers to be overcome; milestones, planned dates for technical accomplishments, including the anticipated date of technology availability; metrics, a measurement of anticipated results; customers sponsoring the research; and funding that DOD estimates is needed to achieve the technology advancements. Both the Joint Warfighting and Defense Technology Area plans are composed of defense technology objectives that are updated annually. In its 2004 update, DOD identified 392 defense technology objectives—130 in the Joint Warfighting Science and Technology Plan across five joint capabilities, and 262 in the Defense Technology Area Plan across 12 technology areas. Microelectronics falls within the sensors, electronics, and electronic warfare area.
There are 40 defense technology objectives in this area; five were identified as microelectronics (see fig. 6). However, according to DOD officials, research relating to microelectronics is not limited to these five defense technology objectives because microelectronics is an enabling technology found in many other research areas. For example, research in electronic warfare is highly dependent on microelectronics. To provide an independent assessment of the planned research, DOD uses Technology Area Review and Assessment panels. DOD strives to have a majority of the Technology Area Review and Assessment team members from outside DOD, including other government agencies, FFRDCs, universities, and industry. Most team members are recognized experts in their respective research fields. The Technology Area Review and Assessment panels assess DOD programs against S&T planning guidance, defense technology objectives, affordability, service-unique needs, and technology opportunities, and provide their assessments and recommendations to the Defense Science and Technology Advisory Group. For the electronics research area, additional industry and university insight is obtained through the Advisory Group on Electron Devices. DOD established this advisory group to help formulate a research investment strategy by providing ongoing reviews and assessments of government-sponsored programs in electronics, including microelectronics. The advisory group comprises experts representing the government, industry, and universities, who provide DOD with current knowledge on the content and objectives of various programs under way at industry, university, and government laboratories. Periodically, the advisory group conducts special technology area reviews to evaluate the status of an electronics technology for defense applications.
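The elements that make up a defense technology objective read like the fields of a structured record. A hypothetical Python sketch—the field names and the sample entry are illustrative, not drawn from an actual DOD planning document:

```python
from dataclasses import dataclass

@dataclass
class DefenseTechnologyObjective:
    """One entry in a DOD S&T plan, per the elements described above."""
    objective: str           # technology advancement to be developed
    payoff: str              # specific benefit to the warfighter
    challenges: list         # technical barriers to be overcome
    milestones: dict         # planned dates for technical accomplishments
    metrics: str             # measurement of anticipated results
    customers: list          # organizations sponsoring the research
    funding_estimate: float  # estimated funding needed ($ millions)

dto = DefenseTechnologyObjective(
    objective="Higher-reliability microwave devices",
    payoff="Improved radar range and availability",
    challenges=["thermal management", "device yield"],
    milestones={"prototype": 2003, "technology available": 2005},
    metrics="Mean time between failures",
    customers=["Air Force"],
    funding_estimate=12.5,
)
```

Treating each objective as a record with these fixed fields is what lets the annual plan updates track milestones and funding consistently across 392 objectives.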
The advisory group also serves as a bridge between electronic system and component developers within DOD by establishing regular, periodic interactions with system program offices, industry system developers, and government and industry components developers. We provided a draft of this report to DOD for review. In its response, DOD did not provide specific written or technical comments (see app. II). We are sending copies of this report to interested congressional committees; the Secretary of Defense; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you or your staff has any questions concerning this report. Major contributors to this report are listed in appendix III. To identify and describe DOD and FFRDC facilities that receive funding from DOD for microelectronics production or research prototyping, we visited all eight facilities identified by DOD as having capability to produce or prototype microelectronics. Using a set of structured questions, we interviewed officials at each facility to determine their microelectronics focus, clean-room and equipment characteristics, and types of research, production and/or research prototyping the facility provides. We also obtained and analyzed supporting documents and toured the facilities. We did not include in our scope universities or commercial firms that also conduct DOD research and have microelectronics facilities. Because microelectronics is a part of a much broader area of research, we looked at DOD’s overall research coordination in addition to microelectronics-specific areas. 
To determine how DOD coordinates its research investments, we interviewed officials from the Executive Staff of the Defense Science and Technology Reliance process; the Office of the Deputy Under Secretary of Defense for Science and Technology (Space and Sensor Technology); and the Advisory Group on Electron Devices. We also obtained and reviewed DOD's defense research planning documents—including the Basic Research Plan, the Defense Technology Area Plan, the Joint Warfighting Science and Technology Plan, and the Defense Technology Objectives document. We also met with Defense Advanced Research Projects Agency officials to discuss their role in sponsoring DOD research and development activities. In addition, at the DOD service laboratories that we visited, we obtained information on microelectronics-related research projects. We performed our review from November 2003 to January 2005 in accordance with generally accepted government auditing standards. In addition to the individual named above, Bradley Terry, Lisa Gardner, Karen Sloan, Hai Tran, Brian Eddington, and Steven Pedigo made key contributions to this report.

The Department of Defense's (DOD) ability to provide superior capabilities to the warfighter is dependent on its ability to incorporate rapidly evolving, cutting-edge microelectronic devices into its defense systems. While many commercial microelectronics advances apply to defense systems, DOD has some unique microelectronics needs not met by industry. Therefore, to maintain military superiority, DOD has the challenge of exploiting state-of-the-art commercial microelectronics technology and focusing its research investments in areas with the highest potential return for defense systems.
Given the importance of advanced microelectronics to defense systems and the rapid changes in these technologies, Congress asked GAO to (1) identify and describe DOD and federally funded research and development center (FFRDC) facilities that receive funding from DOD for microelectronics production or research prototyping and (2) describe how DOD coordinates investments in microelectronics research. At the time of our review, eight DOD and FFRDC facilities that received funding from DOD were involved in microelectronics research prototyping or production. Three of these facilities focused solely on research; three primarily focused on research but had limited production capabilities; and two focused solely on production. The research conducted ranged from exploring potential applications of new materials in microelectronic devices to developing a process to improve the performance and reliability of microwave devices. Production efforts generally focus on devices that are used in defense systems but are not readily obtainable on the commercial market, either because DOD's requirements are unique and highly classified or because the devices are no longer commercially produced. For example, one of the two facilities that focus solely on production acquires process lines that commercial firms are abandoning and, through reverse-engineering and prototyping, provides DOD with devices that are no longer commercially available. During the course of GAO's review, one facility, which produced microelectronic circuits for DOD's Trident program, closed. Officials from the facility told us that without Trident program funds, operating the facility became cost prohibitive. These circuits are now provided by a commercial supplier. Another facility is slated for closure in 2006 because of the prohibitive cost of producing the next generation of circuits. The classified integrated circuits produced by this facility will also be supplied by a commercial supplier.
DOD has several mechanisms in place aimed at coordinating and planning research conducted by the military services and defense agencies. One key mechanism is identifying defense technology objectives--the specific technology advancements that will be developed or demonstrated across multiple joint capabilities and technology areas. As of February 2004, there were almost 400 defense technology objectives; five of these were identified as microelectronics. DOD also collaborates with industry to review and assess special technology areas and make recommendations about future electronics and microelectronics research.
Wildland fires triggered by lightning are both natural and inevitable and play an important ecological role on the nation's landscapes. These fires shape the composition of forests and grasslands, periodically reduce vegetation densities, and stimulate seedling regeneration and growth in some species. Over the past century, however, various land use and management practices—including fire suppression, grazing, and timber harvesting—have reduced the normal frequency of fires in many forest and rangeland ecosystems and contributed to abnormally dense, continuous accumulations of vegetation. Such accumulations not only can fuel uncharacteristically large or severe wildland fires but also—with more homes and communities built in or near areas at risk from wildland fire—threaten human lives, health, property, and infrastructure. Moreover, the introduction and spread of invasive nonnative species (such as cheatgrass), along with the expanded range of certain flammable native species (such as western juniper), have also altered natural fire regimes, contributing to wildland fires' burning in some areas with uncharacteristic frequency or intensity. The Forest Service, Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and National Park Service are responsible for wildland fire management. These five agencies manage about 700 million acres of land in the United States, including national forests, national grasslands, Indian reservations, national parks, and national wildlife refuges. The Forest Service and the Bureau of Land Management manage the majority of these lands. The Forest Service manages about 190 million acres; the Bureau of Land Management manages about 260 million acres; and the Bureau of Indian Affairs, Fish and Wildlife Service, and National Park Service each manage less than 100 million acres. Figure 1 shows the distribution of land among the five agencies.
Each agency has from 7 to 12 regional or state offices that oversee field units. The federal wildland fire management program has three major components: preparedness, suppression, and fuel reduction. To prepare for a wildland fire season, the agencies acquire firefighting assets—including firefighters, fire engines, aircraft, and other equipment—and station them either at individual federal land management units (such as national forests or national parks) or at centralized dispatch locations. The primary purpose of these assets is to respond to fires before they become large—a response referred to as initial attack—thus forestalling threats to communities and natural and cultural resources. The agencies fund the assets used for initial attack primarily from their wildland fire preparedness accounts. When a fire starts, current federal policy directs the agencies to consider land management objectives—identified in land and fire management plans developed by each land management unit—and the structures and resources at risk when determining whether or how to suppress the fire. A wide spectrum of strategies is available to choose from, and the land manager at the affected local unit is responsible for determining which strategy to use. In the relatively rare instances when fires escape initial attack and grow large, the agencies respond using an interagency system that mobilizes additional firefighting assets from federal, state, and local agencies, as well as private contractors, regardless of which agency or agencies have jurisdiction over the burning lands. Federal agencies typically fund the costs of these activities from their wildland fire suppression accounts.
In addition to preparing for and suppressing fires, the agencies attempt to reduce the potential for severe wildland fires, lessen the damage caused by fires, limit the spread of flammable invasive species, and restore and maintain healthy ecosystems by reducing potentially hazardous vegetation that can fuel fires. Approaches used for managing vegetation include setting fires under controlled conditions (prescribed burns), mechanical thinning, herbicides, certain grazing methods, or combinations of these and other approaches. The agencies fund these activities from their fuel reduction accounts. Congress, the Office of Management and Budget, federal agency officials, and others have expressed concern about mounting federal wildland fire expenditures. Federal appropriations to the Forest Service and the Interior agencies to prepare for and respond to wildland fires, including appropriations for reducing fuels, have more than doubled, from an average of $1.2 billion from fiscal years 1996 through 2000 to an average of $2.9 billion from fiscal years 2001 through 2007. Adjusting for inflation, the average annual appropriations to the agencies for these periods increased from $1.5 billion to $3.1 billion (in 2007 dollars). The Forest Service received about 70 percent, and Interior about 30 percent, of the appropriated funds; table 1 shows the agencies’ combined fire appropriations for fiscal years 1996 through 2007. Analysis of structures damaged during past fires and experimental research have identified a number of relatively simple steps that can reduce the risk of damage to structures from wildland fire. Minimizing or preventing damage requires understanding the different types of wildland fire and how they can ignite structures. 
Surface fires—which burn vegetation or other fuels, such as shrubs, fallen leaves, small branches, and roots, near the surface of the ground—can ignite a home or other building by burning nearby vegetation and eventually igniting flammable portions of the building, including exterior walls or siding; attached structures, such as a fence or deck; or other nearby flammable materials, such as firewood or patio furniture. Crown fires—which burn the tops, or crowns, of trees—place homes at risk because they create intense heat, which can ignite portions of structures even without direct contact from flames. Embers, or "firebrands"—which can be carried on the wind a mile or more from a fire—can ignite a structure by landing on the roof or by entering a vent or other opening. Figure 2 illustrates how each type of fire can take advantage of a structure's vulnerabilities and those of its immediate surroundings (panel A, surface fires; panel B, crown fires; panel C, firebrands). As the Forest Service and the Interior agencies have improved their understanding of wildland fire's role on the landscape, their approach to managing fire has evolved. Under the new approach, the agencies seek to make the landscape less susceptible to damage by wildland fire and to respond to fires in ways that protect communities and important resources while also considering the cost and long-term effects of that response. Historically, the Forest Service and the Interior agencies generally viewed fire as a damaging force that they attempted to suppress quickly—as exemplified by the "10 a.m. policy," in which the goal was to contain every fire by 10:00 the morning after it was reported.
For decades, the agencies were often successful with this approach. This emphasis on suppression led to a substantial decline in the average number of acres burned annually from the 1930s through the 1970s. A number of damaging fires in the 1990s, however, led the agencies to fundamentally reassess their understanding of wildland fire's role on the landscape. Their view of fire's ecological role began to expand, from seeing benefits in a few ecosystems, like certain grasslands and forest types, to realizing that in many locations—particularly the arid West—fire was inevitable. In addition, they recognized that by allowing brush, small trees, and other vegetation to accumulate, their past success in suppressing fires was in part responsible for making recent fires more severe. The agencies' increased awareness of fire's benefits, as well as of the unintended negative consequences of suppression, led them in 1995 to develop the Federal Wildland Fire Management Policy, a policy they reaffirmed and updated in 2001. Under the policy, the agencies abandoned their attempt to put out every wildland fire, seeking instead to (1) make communities and resources less susceptible to being damaged by wildland fire and (2) respond to fires so as to protect communities and important resources at risk while also considering both the cost and long-term effects of that response. As a result, the agencies have increasingly emphasized firefighting strategies that focus on land management objectives, which may lead them to use less aggressive firefighting strategies that, according to agency fire officials, can not only reduce costs in some cases but also improve firefighter safety by reducing exposure to unnecessary risks. In recent years, the Forest Service and the Interior agencies have taken steps to help them better achieve the Federal Wildland Fire Management Policy's vision.
In an effort to make the landscape less susceptible to damage from wildland fire, for example, they have reduced hazardous fuels and fostered fire-resistant communities. They have also improved their ability to respond efficiently and effectively to wildland fires that occur, including taking steps to (1) implement the federal wildland fire management policy, (2) improve decisions regarding fire management strategies, and (3) improve how they acquire and use firefighting assets. In an effort to reduce damage to communities and resources from wildland fire, the agencies have continued to reduce fuels and foster fire-resistant communities, but the extent to which these efforts have reduced risk is unknown. Reducing hazardous fuels. Reducing hazardous fuels—to keep wildland fires from spreading into the wildland-urban interface and to help protect important resources by lessening a fire's intensity—is one of the primary objectives of the National Fire Plan. The agencies reported reducing fuels on more than 29 million acres from 2001 through 2008. The agencies have also improved the data they use to help identify lands where fuels need to be reduced and have taken steps to improve their processes for allocating fuel reduction funds and setting priorities for fuels projects. The agencies have nearly completed their geospatial data and modeling system, LANDFIRE, as we recommended in 2003. LANDFIRE is intended to produce consistent and comprehensive maps and data describing vegetation, wildland fuels, and fire regimes across the United States. Such data are critical to helping the agencies (1) identify the extent, severity, and location of wildland fire threats to the nation's communities and resources; (2) predict fire intensity and rate of spread under particular weather conditions; and (3) evaluate the effect that reducing fuels may have on future fire behavior.
LANDFIRE data are already complete for the contiguous United States, and the agencies have reported they expect to complete the data for Alaska and Hawaii in 2009. Because vegetative conditions change over time for a variety of reasons, including landscape-altering events such as hurricanes, disease, or wildland fires themselves, the agencies also plan several processes for updating the data, including selective updates every 2 years to reflect changes due to fires, fuel treatments, and other disturbances and a comprehensive update every 10 years. In addition, the agencies have begun to improve their processes for allocating fuel reduction funds to different areas of the country and for selecting fuel reduction projects, as we recommended in 2007. The agencies have started moving away from "allocation by tradition" toward a more consistent, systematic allocation process. That is, rather than relying on historical funding patterns and professional judgment, the agencies are developing a process that also considers risk, the effectiveness of fuel reduction treatments, and other factors. The Forest Service uses this process to allocate funds to its nine regions and has directed the regions to use it to allocate funds to their respective national forests. Interior uses a similar process to allocate funds to its four agencies, and the agencies in turn use it to allocate funds to their respective state or regional offices. The agencies have been increasing their use of this process; in fiscal year 2009, they applied the results to help determine how to allocate their fuel reduction dollars. Interior, however, kept its agencies and regions from gaining or losing more than 10 percent in funds relative to fiscal year 2008. Agency officials told us they expect to continue to improve their processes and to increasingly rely on them to allocate fuel reduction funds.
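The general logic of such a risk-based allocation with a year-over-year cap can be sketched in simplified form. In the example below, the region names, composite scores, prior-year amounts, and weighting scheme are all hypothetical illustrations of the general approach, not the agencies' actual data or formula; only the 10 percent cap echoes the limit Interior applied relative to fiscal year 2008.

```python
# Simplified sketch of risk-based fund allocation with a year-over-year
# cap. All figures are hypothetical; this is not the agencies' model.

def allocate(budget, scores, prior, cap=0.10):
    """Split 'budget' across regions in proportion to composite risk
    scores, then bound each region's change relative to prior-year
    funding to +/- 'cap' (mirroring Interior's 10 percent limit)."""
    total_score = sum(scores.values())
    raw = {region: budget * score / total_score
           for region, score in scores.items()}
    return {region: min(max(amount, prior[region] * (1 - cap)),
                        prior[region] * (1 + cap))
            for region, amount in raw.items()}

# Hypothetical regions: composite risk scores and prior-year funding ($M)
scores = {"A": 5, "B": 3, "C": 2}
prior = {"A": 30.0, "B": 40.0, "C": 30.0}
alloc = allocate(100.0, scores, prior)
# Region A's purely proportional share would be 50, but the cap limits
# it to 33 (a 10 percent increase over 30); B falls only to 36, not 30.
```

Note that capping can leave the allocated total below the available budget (96 rather than 100 in this toy case), a gap that a real process would have to reconcile; this is one reason a phased transition away from historical funding patterns limits how quickly dollars can follow risk.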
Despite these improvements, further action is needed to ensure that the agencies' efforts to reduce hazardous fuels are directed to areas at highest risk. The agencies, for example, still lack a measure of the effectiveness of fuel reduction treatments and therefore lack information needed to ensure that fuel reduction funds are directed to the areas where they can best minimize risk to communities and natural and cultural resources. Forest Service and Interior officials told us that they recognize this shortcoming and that efforts are under way to address it. The Joint Fire Science Program, for example, has funded almost 50 studies examining the effectiveness of fuel reduction treatments in different locations and has begun a comprehensive effort to evaluate the effectiveness of different types of fuel treatments, as well as the longevity of those treatments and their effects on ecosystems and natural resources. Efforts like these are likely to be long term, involving considerable research investment, and have the potential to improve the agencies' ability to assess and compare the cost-effectiveness of potential treatments in deciding how to optimally allocate scarce funds. Fostering fire-resistant communities. Protecting the nation's communities is both one of the key goals of wildland fire management and one of the leading factors contributing to rising fire costs. Increasing the use of protective measures to mitigate the risk to structures from wildland fire is a key goal of the National Fire Plan—a goal that may also help contain the cost of managing fires in the long term. This plan, developed by federal wildland fire agencies and state governors, encourages, but does not mandate, state and local governments to adopt laws requiring homeowners and homebuilders to take measures to help protect structures from wildland fires.
Because these measures rely on the actions of individual homeowners and homebuilders or on laws and land-use planning affecting private lands, achieving this goal is primarily a state and local government responsibility. Nonetheless, the Forest Service and the Interior agencies have helped sponsor the Firewise Communities program, which works with community leaders and homeowners to increase the use of fire-resistant landscaping and building materials in areas of high risk. Federal and state agencies also provide grants to help homeowners pay for creating defensible space around private homes. A few relatively simple steps can reduce the risk of damage to structures from wildland fire. Experts from a symposium convened for us in 2004 by the National Academy of Sciences emphasized that the two most critical measures for protecting structures from wildland fires are (1) reducing vegetation and flammable objects within an area of 30 to 100 feet around a structure, often called creating defensible space (see fig. 3), and (2) using fire-resistant roofing materials and covering attic vents with mesh screens. Analysis of structures damaged during past fires and experimental research have shown these two steps to be key determinants of whether or not a structure is damaged by wildland fire. Nevertheless, use of protective measures is inconsistent. We reported in 2005 that many homeowners in the wildland-urban interface have not used such measures to mitigate fire risk because of the time or expense involved. State and local fire officials estimated that the price of creating defensible space, for example, can range from a negligible amount, if homeowners can do the work themselves, to $2,000 or more. Competing concerns also influence the use of protective measures. 
For example, although modifying landscaping to create defensible space has proved to be key in protecting structures from wildland fire, officials and researchers have reported that homeowners are more concerned about the effect of landscaping on their property’s appearance, privacy, or habitat for wildlife. Defensible space can, however, be created so as to alleviate many of these concerns. Leaving thicker vegetation away from a structure and pruning plants close by it, for instance, can help protect the structure and still be attractive, private, and wildlife-friendly. Misconceptions about fire behavior and the effectiveness of protective measures can also influence what people do to protect structures from wildland fires. For example, homeowners may not know that homes can be more flammable than the surrounding trees, shrubs, or other vegetation and therefore do not recognize the need to reduce the flammability of the structure itself (see fig. 4). Fire officials told us that few people realize that reducing tree density close to a structure can return a wildland fire to the ground, where it is much easier to keep away from structures, or that fire- resistant roofs and screened attic vents can reduce the risk of ignition from firebrands. Finally, homeowners may not use protective measures because they believe that firefighters are responsible for protecting their homes and do not recognize that they share this responsibility. Implementing the Federal Wildland Fire Management Policy. The Federal Wildland Fire Management Policy directs each agency to develop a fire management plan for all areas they manage with burnable vegetation. Without such plans, agency policy does not allow use of the whole spectrum of wildland fire response strategies, including less aggressive strategies—meaning that, for areas without such plans, the agencies must attempt to suppress fires regardless of any benefits that might come from allowing them to burn. 
We reported in 2006 that about 95 percent of the agencies' 1,460 individual land management units had completed the required plans. We also reported, however, that the agencies may not always find it easy to update these plans; a Forest Service official told us, for example, that if the introduction of new data into a fire management plan results in new fire management objectives, the agency might need to conduct a new environmental analysis of that plan, requiring additional time and resources. Moreover, in examining 17 fire management plans, a May 2007 independent review of large wildland fires managed by the Forest Service in 2006 identified several shortcomings in the plans, including that most of the plans examined did not contain current information on fuel conditions and that many did not provide sufficient guidance on selecting firefighting strategies. If fire management plans are not updated to reflect the most current information on the extent and distribution of fire risks and the most promising methods for dealing with these risks, the plans will be of limited use to the agencies in managing wildland fire. The Federal Wildland Fire Management Policy also states that the agencies' responses to wildland fires are to be based on the circumstances of each fire and the likely consequences to human safety and to natural and cultural resources. Interagency guidance on implementing the policy, adopted in 2009, clarifies that the full range of fire management strategies and tactics are to be considered when responding to every wildland fire and that a single fire may be simultaneously managed for different objectives. Previous guidance required each fire to be managed either for suppression objectives—that is, to put out the fire as quickly as possible—or to achieve resource benefits—that is, to allow the fire to burn to gain certain benefits, such as fuel reduction or seed regeneration.
Both the Department of Agriculture's Inspector General and we criticized the previous guidance, in part because it did not allow the agencies the flexibility to switch between these strategies as fire conditions changed or to manage parts of a single fire differently. By allowing this flexibility, the new guidance should help the agencies achieve management objectives on more acres and help contain the long-term costs of fire management. Improving decisions about fire management strategies. The agencies have recently undertaken several efforts to improve decisions about firefighting strategies. In 2007 we reported that previous studies had found that officials may not always consider the full range of available strategies and may not select the most appropriate strategy, that is, one that accounts for the cost of suppression; the value of structures and other resources threatened by the fire; and, where appropriate, any potential benefits to natural resources. Managers of the agencies' individual land management units—typically known as line officers and including national forest supervisors, Bureau of Land Management district managers, and others—are responsible for making strategic decisions about how to manage a fire. A 2000 review by the National Association of State Foresters, however, concluded that many line officers have little wildland fire experience and may select fire management strategies that lead to unnecessarily high suppression costs. Fire officials told us such strategies may also have a low likelihood of success, unnecessarily exposing firefighters to risk of injury. The Forest Service initiated a program in 2007 designed to add to line officers' knowledge and experience through a series of certifications at three competency levels, certifying the officer to manage a fire of low, medium, or high complexity.
If a fire exceeds the line officer’s certification level, a more experienced officer is assigned to coach the less experienced officer; final decisions on strategies, however, remain the responsibility of the line officer of the unit where the fire is burning. To help line officers and fire managers in making on-the-ground decisions about how to manage a particular fire, the agencies in 2009 began to use a new analytical tool, known as the wildland fire decision support system. This new tool helps line officers and fire managers analyze various factors—such as the fire’s current location, adjacent fuel conditions, nearby structures and other highly valued resources, and weather forecasts—in determining the strategies and tactics to adopt. For example, the tool generates a map illustrating (1) the probability that a particular wildland fire, barring any suppression actions, will burn a certain area within a specified time and (2) the structures or other resources that may therefore be threatened. Having such information can help line officers and fire managers understand the resources at risk and identify the most appropriate response—for example, whether to devote substantial resources to attempt full and immediate suppression or instead to take a less intensive approach, which may reduce risks to firefighters and be less costly. The agencies have also established four teams, known as national incident management organization teams, staffed with some of the most experienced fire managers. These teams have several purposes, including managing some of the most complex and costly fires; identifying and disseminating best management practices throughout the agencies; and, during periods of low fire activity, working with staff at the national forests where large fires are particularly likely to occur, to better prepare staff to respond. 
Fire officials said that over time they expect these teams will improve decisions about firefighting strategies, both for fires the teams manage directly and for fires where they have worked with staff ahead of time. In addition, the agencies, following congressional committee direction, require an independent review of all fires whose costs exceed $10 million, including an examination of the strategic decisions affecting suppression costs. Although these reviews may identify instances where the agencies could have used more cost-efficient firefighting strategies, and may provide long-term benefits by helping the agencies identify and disseminate best practices, the reviews are typically conducted after the fires have been suppressed and therefore are not intended to help fire managers change strategies while fires are still burning, before managers have taken ineffective or unnecessary suppression actions. To influence strategic firefighting decisions while fires are still burning, the Forest Service (which, among the responsible federal agencies, most often manages the most expensive fires) has experimented in recent years with several approaches for identifying ongoing fires where suppression actions are unlikely to be effective and for influencing strategic decisions made during those fires in ways that might help contain costs and reduce risk to firefighters. A senior Forest Service official told us that these efforts have helped raise awareness of the importance of basing strategic decisions on (1) the resources at risk of damage and (2) the likelihood that suppression actions will be effective, but that the agency was still working to improve its ability to quickly identify strategic firefighting decisions likely to be ineffective.
This official told us that the concept of reviewing strategic decisions while a fire is still burning is new, as is the concept of considering the probability of success, and not just the resources at risk, in making those decisions; he added that he believed the agency was making strides in implementing this approach but that it would be a long process dependent on managers in the field gaining a better understanding of the benefits of the new approach. According to this official, the agency’s approach in 2009 is to identify ongoing fires for which the cost of suppression—estimated using information from the wildland fire decision support system—is expected to be higher than the cost predicted by a measure known as the stratified cost index, based on the costs of previous fires with similar characteristics. On those fires for which suppression costs are expected to be substantially higher than costs for similar fires in the past, Forest Service officials will consult with local line officers and fire management officials to ensure that the most appropriate firefighting strategies are being implemented. The basis for this comparison—the stratified cost index of previous fires—is not entirely reliable, however; our 2007 report identified several shortcomings with it, including the lack of data from many fires where less costly firefighting strategies were used (because the agencies have only recently emphasized the importance of using less aggressive suppression strategies). As a result, using the index as the basis for comparison may not allow the Forest Service to accurately identify fires where more, or more-expensive, resources than needed are being used. Although these efforts are new and we have not fully evaluated them, we believe they have the potential to help the agencies strengthen how firefighting strategies are selected. The efforts, however, do not address certain critical shortcomings. 
We reported in 2007, for example, that officials in the field have few incentives to consider cost containment in making critical decisions affecting suppression costs, and that previous studies had found that the lack of a clear measure to evaluate the benefits and costs of alternative firefighting strategies fundamentally hindered the agencies’ abilities to provide effective oversight. Although the agencies have made progress in other areas, they still lack such a measure. Acquiring and using firefighting assets effectively. In 2007 we reported that (1) federal agencies lacked a shared or integrated system for effectively determining the appropriate type and quantity of firefighting assets needed for a fire season; (2) the agencies’ processes and systems for acquiring firefighting assets lacked controls to ensure that the agencies were procuring assets cost-effectively; and (3) the agencies sometimes used firefighting assets ineffectively or inefficiently, often in response to political or social pressures. Despite continued improvement, further action to address these shortcomings is needed. First, to address congressional committee direction that they improve their system for determining needed firefighting assets, the agencies in 2009 began deploying an interagency budget-planning system known as fire program analysis (FPA). 
FPA was intended to help the agencies develop their wildland fire budget requests and allocate funds by, among other objectives, (1) providing a common budget framework to analyze firefighting assets without regard for agency jurisdictions; (2) examining the full scope of fire management activities, including preparing for fires by acquiring and positioning firefighting assets for the fire season, mobilizing assets to suppress fires, and reducing potentially hazardous fuels; (3) modeling the effects over time of differing strategies for responding to wildland fires and treating lands to reduce hazardous fuels; and (4) using this information to identify the most cost-effective mix and location of federal wildland fire management assets. In 2008, we reported that FPA showed promise in achieving some of the key objectives originally established for it but that the approach the agencies have taken hampers FPA in meeting other key objectives, including the ability to project the effects of different levels of fuel reduction and firefighting strategies over time. We therefore concluded that agency officials lack information that would help them analyze the extent to which increasing or decreasing funding for fuel reduction and responding more or less aggressively to fires in the short term could affect the expected cost of responding to wildland fires over the long term. Senior agency officials told us in 2008 that they were considering making changes to FPA that may improve its ability to examine the effects over time of different funding strategies. The exact nature of these changes, or how to fund them, has yet to be determined. Officials also told us the agencies are currently working to evaluate the model’s performance, identify and implement needed corrections, and improve data quality and consistency. 
The agencies intend to consider the early results of FPA in developing their budget requests for fiscal year 2011, although officials told us they will not rely substantially on FPA’s results until needed improvements are made. As we noted in 2008, the approach the agencies took in developing FPA provides considerable discretion to agency decision makers and, although providing the flexibility to consider various options is important, doing so makes it essential that the agencies ensure their processes are fully transparent. Second, we also reported in 2007 that the agencies were planning improvements to their acquisition processes to ensure they were procuring assets cost-effectively. The agencies rely on private contractors to provide many firefighting assets and have begun implementing a new system for determining which contractors to use. This system considers the capabilities of the equipment or personnel, as well as the cost, and is intended to help the agencies identify the “best value” and not just the lowest cost or closest asset. A Forest Service official said that the agencies are also evaluating how the equipment and personnel from each contractor perform in the field and, once they have gathered enough data, plan to apply that information in selecting contractors. The agencies are already using this system to select contractors for many kinds of frequently used equipment, including firefighting crews, fire engines, aircraft, and water trucks, and plan to expand the system to include other equipment in future years. Third, the agencies have taken several steps to improve their efficiency in using firefighting assets. As we reported in 2007, for example, the agencies implemented a computer-based dispatching system called the resource ordering and status system. 
The agencies had been using a manual, paper-based system for requesting and assigning firefighting assets, and the new system was meant to allow them to more effectively and efficiently monitor firefighting assets during a fire or other incident. We reported that although the system’s benefits had not been quantified, it had likely reduced suppression costs by making it easier to use local firefighting assets—which could hasten response and thus perhaps reduce fire size—and by reducing the personnel needed to dispatch resources. The agencies can also use the system to identify individuals qualified and available to serve in various firefighting positions, which may help increase the agencies’ use of local incident commanders and reduce the need to mobilize more-costly incident management teams. We also reported in 2007 that the agencies required that an “incident business advisor” be assigned to fires expected to cost more than $5 million and recommended that an advisor be assigned to fires expected to cost more than $1 million. An incident business advisor represents the line officer’s interest in containing costs by observing firefighting operations and working with the incident management team to identify ways those operations could be more cost-effective. For example, an incident business advisor may observe the types and quantity of firefighting personnel and equipment assigned to a fire and how they are used; observe how equipment and supplies are procured; and, as a fire comes under control, ensure that the most expensive personnel and equipment are released first. In 2008, the agencies also changed how they determined where to send certain firefighting assets to ensure that assets perform the highest-priority work. Agency officials told us they instituted a new practice to increase the likelihood that firefighting assets perform the highest-priority actions. 
Under this practice, certain assets that are often in high demand (including some of the most experienced firefighting crews) are assigned to perform only the highest-priority actions on a particular fire or set of adjacent fires and are then reassigned to perform high-priority actions on other fires, rather than being assigned for several weeks to a single fire, as has been typical. This practice should help the agencies address a shortcoming previous studies have identified—that firefighting assets may sit idle at a fire rather than be released for use elsewhere because managers are concerned that they will be unable to recall an asset if they need it later, a practice that unnecessarily increases a fire’s cost and prevents those assets from helping to protect communities and resources from fires burning elsewhere. In addition, the officials said it can be important to ensure that assets are sufficiently flexible to respond to new fires, even if many fires are already burning. Responding quickly can substantially increase the likelihood that firefighters will be able to contain fires before they become large, which is particularly important when fires start in weather and fuel conditions that can cause them to burn intensely and spread rapidly. Agency officials also said they have improved their ability to predict when an unusually high number of fires might start and have emphasized the need to keep some firefighting assets in reserve to respond to new fires quickly. Previous studies also found that agencies sometimes use more, or more-costly, firefighting assets than necessary, often in response to political or social pressures to demonstrate they are taking all possible action to protect communities and resources. Consistent with these findings, fire officials told us they were pressured in 2008 to assign more firefighting assets than could be effectively used to fight fires in California. 
More generally, previous studies have found that air tankers may drop flame retardants when on-the-ground conditions may not warrant such drops. Aviation activities are expensive, accounting for about one-third of all firefighting costs on a large fire. Providing clarity about when different types of firefighting assets can be used effectively could help the agencies resist political and social pressure to call up more assets than they need. Despite the important steps the agencies have taken, much work remains. We have previously recommended several key actions—including development of an overarching investment strategy for addressing the wildland fire problem—that, if completed, would improve the agencies’ management of wildland fire. In addition to completing the overarching strategy (which we have termed a cohesive strategy), we have recommended that the agencies clarify the importance of containing costs relative to other, often-competing objectives and clarify financial responsibilities for fires that cross federal, state, and local jurisdictions. Finally, we have identified several steps the agencies should take, and Congress could consider, that could mitigate the effects on the agencies’ other programs of rising fire management costs. If the agencies and Congress are to make informed decisions about an effective and affordable long-term approach for addressing wildland fire problems that have been decades in the making, the agencies need a cohesive strategy that identifies the options and associated funding for reducing excess vegetation and responding to fires. By laying out various potential approaches for addressing the growing wildland fire threat, the estimated costs associated with each approach, and the trade-offs involved, a cohesive strategy would help Congress and the agencies make informed decisions about how to invest scarce funds. We first recommended a cohesive strategy for addressing excess vegetation in 1999. 
Subsequently, after evaluating a number of related wildland fire management issues, we reiterated the need for a cohesive strategy in 2005 and 2006 and broadened our recommendation to better address the interrelated nature of fuel reduction efforts and wildland fire response. The agencies have concurred with our recommendations to develop a cohesive strategy but have yet to develop a strategy that clearly formulates different approaches and associated costs. In July 2009, agency officials told us they had begun planning how to develop a cohesive strategy but were not far enough along to provide further information. We were therefore unable to determine the extent to which it might provide the information the agencies and Congress need to make fundamental decisions about the best way for the nation to respond to the growing wildland fire problem. Because of the critical importance of this step in improving the agencies’ overall management of wildland fire, we continue to believe that the agencies should complete a strategy, and begin implementing it, as quickly as possible. The Federal Land Assistance, Management, and Enhancement Act, introduced in 2009, would require the agencies to produce, within 1 year of its enactment, a cohesive strategy consistent with our previous recommendations. A document that included the critical elements of a cohesive strategy was created in 2002: an analysis by a team of Forest Service and Interior experts estimated the funds needed to implement each of eight different fuel reduction options for protecting communities and ecosystems across the nation over the next century. The team determined that reducing the risks to communities and ecosystems across the nation could require an approximate tripling of funding for fuel reduction, to about $1.4 billion annually, for an initial period of several years. 
These initially higher costs for fuel reduction would decline after fuels had been sufficiently reduced to make it possible to use less expensive prescribed burning methods in many areas. More important, the team estimated that the reduction in fuels would allow the agencies to suppress more fires at lower cost and would reduce total wildland fire management costs and risk after 15 years. Alternatively, the team concluded, maintaining the then-current level of investment in fuel reduction would increase costs, as well as risks to communities and ecosystems in the long term. The Office of Management and Budget raised concerns about the accuracy of the long-term funding estimates used by the strategy document, however, and Office of Management and Budget officials told us in 2006 that the agencies needed to have sufficiently reliable data before they could publish a strategy with long-term funding estimates. Since that time, the agencies have continued to improve their data by nearly completing their LANDFIRE and FPA projects, laying important groundwork for future progress. Our 2008 review of FPA, however, recognized that although FPA represented a significant step forward and showed promise in meeting several of its objectives, it had limited ability to examine the long-term effects of differing funding allocation strategies—a shortcoming that could limit FPA’s ability to contribute to the agencies’ development of a cohesive strategy. In addition, the agencies’ abilities to develop effective long-term options for reducing fuels will improve if they succeed in their current efforts to measure the effectiveness and durability of different fuel reduction treatments. 
We reported in 2007 that although the Forest Service and the Interior agencies had taken several steps intended to help contain wildland fire costs, they had not clearly defined their cost-containment goals or developed a strategy for achieving those goals—steps that are fundamental to sound program management. Since our 2007 review, the agencies have continued to implement individual cost-containment steps, including the wildland fire decision support system and updated guidance for implementing the federal wildland fire management policy, but they have yet to develop clear cost-containment goals or a strategy for achieving them, as we recommended in our 2007 report. Without such goals and a strategy, we believe the agencies will have difficulty determining whether they are taking the most important steps first, as well as the extent to which the steps they are taking will help contain costs. The Forest Service and Interior generally disagreed with the characterization of many of our 2007 findings. In particular, they identified several agency documents—including the 2001 Review and Update of the 1995 Federal Wildland Fire Management Policy and their 10-year strategy to implement the National Fire Plan—that they argued clearly define goals and objectives and that make up their strategy to contain costs. Although the documents cited by the agencies provide overarching goals and objectives, they lack the clarity and specificity needed by land management and firefighting officials in the field to help manage and contain wildland fire costs. 
Interagency policy, for example, established an overarching goal of suppressing wildland fires at minimum cost, considering firefighter and public safety and the importance of resources being protected, but the agencies have established neither clear criteria by which to weigh the relative importance of the often-competing elements of this broad goal, nor measurable objectives by which to determine if the agencies are meeting the goal. As a result, despite improvements the agencies continue to make to policy, decision-support tools, and oversight, we believe that managers in the field lack a clear understanding of the relative importance that the agencies’ leadership places on containing costs and—as we concluded in our 2007 report—are therefore likely to continue to select firefighting strategies without due consideration of the costs of suppression. Forest Service officials told us in July 2009 that although they are concerned about fire management costs, they are emphasizing the need to select firefighting strategies on the basis of land management objectives and reducing unnecessary risks to firefighters, an emphasis that, in the long run, may also help contain costs. Nonetheless, we continue to believe that our recommendations, if effectively implemented, would help the agencies better manage their cost-containment efforts and improve their ability to contain wildland fire costs. In 2006, we reported that federal and nonfederal officials had concerns about how costs for suppressing fires that burn across federal, state, and local jurisdictions were shared among federal and nonfederal entities and that these concerns may reflect a more fundamental issue—that those entities had not clearly defined their financial responsibilities for wildland fire suppression, particularly those for the often costly efforts to protect the wildland-urban interface. 
Nonfederal entities—including state forestry entities and tribal, county, city, and rural fire departments—play an important role in protecting communities and resources and responding to fires. We reported in 2006, however, that federal officials were concerned that the existing framework for sharing suppression costs among federal and nonfederal entities, coupled with the availability of federal emergency assistance, insulated state and local governments from the cost of providing wildland fire protection in the wildland-urban interface. As a result, state and local governments had less incentive to adopt laws (such as building codes requiring fire-resistant building materials in areas at high risk of wildland fire) that, in the long run, could help reduce the cost of suppressing wildland fires. We therefore recommended that the federal agencies work with relevant state entities to develop more-specific guidance as to when particular cost-sharing methods should be used and to clarify their respective financial responsibility for fires that burn, or threaten to burn, across multiple jurisdictions. The agencies have updated their guidance on when particular cost-sharing methods should be used, although we have not evaluated the effect of this guidance. Still, the agencies have yet to clarify the financial responsibility for fires that threaten multiple jurisdictions. Our 2006 report identified two primary ambiguities regarding financial responsibilities for fire suppression. Federal wildland fire management policy states that protecting structures is the responsibility of state, tribal, and local entities, but it also states that, under a formal fire protection agreement specifying the financial responsibilities of each entity, federal agencies can assist nonfederal entities to protect the exterior of structures threatened by wildland fire. 
Forest Service guidance defines actions to protect the exterior of structures to include removing fuels in the vicinity of structures and spraying water or retardant on structures or surrounding vegetation. Federal and nonfederal officials agreed that federal agencies can assist with such actions, but they did not agree on which entities are responsible for bearing the costs. Federal officials told us that the purpose of this policy is to allow federal agencies to use their personnel and equipment to help protect homes but not to bear the financial responsibility of providing that protection. Nonfederal officials, however, said that these actions are intended to keep a wildland fire from reaching structures, and financial responsibility should therefore be shared between both federal and nonfederal entities. The Forest Service developed new “structure protection principles” in 2009, but these principles do not clarify the financial responsibilities for suppression actions intended to protect structures. The presence of structures adjacent to federal lands can substantially alter fire suppression strategies and raise costs. A previous report and agency officials have questioned which entities are financially responsible for suppression actions taken on federal lands but intended primarily or exclusively to protect adjacent wildland-urban interface areas. Fire managers typically use existing roads and geographic features, such as rivers and ridgelines, as firebreaks to help contain wildland fires. If, however, homes and other structures are located between a fire and such natural firebreaks, firefighters may have to construct other firebreaks and rely more than they would otherwise on aircraft to drop fire retardant to protect the structures, thereby increasing suppression costs. Nonfederal officials in several states questioned the appropriateness of assigning to nonfederal entities the costs for suppression actions taken on federal lands. 
They said that an accumulation of fuels on federal lands is resulting in more-severe wildland fires and contributing to the increased cost of fire suppression. They also said that federal agencies are responsible for keeping wildland fires from burning off federal land and should therefore bear the costs of doing so. Federal officials recognized this responsibility, but some also said that with the growing awareness that wildland fires are inevitable in many parts of the country, policy should recognize that wildland fires will occur and are likely to burn across jurisdictional boundaries. In their view, those who own property in areas at risk of wildland fires share a portion of the financial responsibility for protecting it. Previous federal agency reports have also recognized this issue and called for clarifying financial responsibility for such actions. Agency officials, in conjunction with officials from nonfederal entities, including the National Association of State Foresters, have initiated an effort intended to help clarify federal and nonfederal financial responsibilities, although it is too early to determine if the effort will succeed. Yet the continuing expansion of the wildland-urban interface and the rising costs for protecting these areas make resolving these issues ever more urgent. Unless the financial responsibilities for multijurisdictional fires are clarified, the concerns that the existing framework insulates nonfederal entities from the cost of protecting the wildland-urban interface from fire—and that the federal government therefore continues to bear more than its share of that cost—are unlikely to be addressed. Rising wildland fire costs have led the Forest Service and the Interior agencies to transfer funds from other programs to help pay for fire suppression. 
The year-to-year variability in the number, location, and severity of fires makes it difficult to estimate needed suppression funds accurately, and when appropriated funds are insufficient to cover actual suppression expenditures, the agencies are authorized to use funds from their other programs to pay for emergency firefighting activities. We reported in 2004 that from 1999 through 2003, the agencies transferred more than $2.7 billion from these other programs. The Forest Service transferred funds from numerous programs, including construction and maintenance; the national forest system; and state and private forestry programs that provide grants to states, tribes, communities, and private landowners for fire and insect management, among other purposes. Interior transferred funds primarily from its construction and land acquisition programs. We have not examined this issue in detail since our 2004 report, but in 2009 we testified that funding transfers continued, with the agencies transferring funds in fiscal years 2006, 2007, and 2008. Although the agencies received additional appropriations to cover, on average, about 80 percent of the funds transferred, we found that the transfers caused the agencies to cancel or delay some projects and fail to fulfill certain commitments to their nonfederal partners. We reported, for example, that funding transfers delayed planned construction and land acquisition projects, which in some cases led to higher project costs due to revised budget and construction plans or higher supply and land acquisition costs. In one instance, the Forest Service delayed purchasing a 65-acre property in Arizona that it had planned to acquire for approximately $3.2 million in 2002; it was able to purchase the property about a year later, but the cost of the property had increased by $195,000. 
Also, although funds were transferred to help the agencies suppress wildland fires, among the delayed projects were ones to reduce fuels to lower the fire risk to communities, construct new firefighting facilities, and provide firefighting training courses. To help the agencies address the impacts of funding transfers on other agency programs, we recommended in 2004 that they take a number of steps, including improving their methods for estimating annual suppression costs. Typically, the agencies base their estimates of needed suppression funding on the 10-year rolling average of fire costs—a method with known problems. Although we noted in our 2004 report that better estimates of the costs of suppressing fires in a given year would reduce the likelihood that the agencies would need to transfer funds from other accounts, the agencies continue to use the 10-year rolling average as the foundation of their budget requests. Interior stated at the time that it believed that the 10-year average was a “reasonable and durable basis for suppression budgeting.” The Forest Service, however, concurred with our recommendation. Nevertheless, a Forest Service official told us in 2008 that the agency had analyzed alternative methods for estimating needed suppression funds but determined that no better method was available. While we recognize that the accuracy of the 10-year average is likely to improve as recent years with higher suppression costs are included in that average, the need to transfer funds in each of the last 3 years suggests that the agencies should continue to seek a more accurate method for estimating needed suppression costs. In addition to the actions we believe the agencies need to take, we have also suggested that Congress consider legislating alternative approaches to funding wildland fire suppression that could help reduce the need for the agencies to transfer funds. 
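The shortcoming of the 10-year rolling average described above can be illustrated with a small numerical sketch: when annual suppression costs trend upward, an average of the previous 10 years necessarily trails the current year. All figures below are invented for illustration and are not agency data:

```python
# Hypothetical illustration (not agency data): why a 10-year rolling
# average tends to underestimate suppression needs when costs trend upward.

def rolling_average(costs, window=10):
    """Average of the most recent `window` annual costs."""
    recent = costs[-window:]
    return sum(recent) / len(recent)

# Invented annual suppression costs (in $ millions) rising ~8% per year.
costs = [round(600 * 1.08 ** i) for i in range(12)]

budget_estimate = rolling_average(costs[:-1])  # estimate from the prior 10 years
actual = costs[-1]                             # cost actually incurred

print(f"10-year average estimate: ${budget_estimate:,.0f}M")
print(f"Actual-year cost:         ${actual:,.0f}M")
print(f"Shortfall to transfer:    ${actual - budget_estimate:,.0f}M")
```

Because an average of past years always lags a rising trend, the estimate would fall short in most years under these assumed conditions, which is consistent with the repeated need to transfer funds the report describes.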
As we reported in 2004, for example, Congress could consider alternative funding approaches for wildland fire suppression, including establishing a reserve account that the agencies could access when their suppression accounts are depleted. Such an account could provide either a specified amount (a definite appropriation) or as much funding as the agencies need to fund emergency suppression (an indefinite appropriation). Each approach has advantages and disadvantages. Establishing a reserve account with a definite appropriation would provide the agencies with incentives to contain suppression costs within the amount in the reserve account, but depending on the size of the appropriation and the severity of a fire season, suppression costs could still exceed the funds reserved, and the agencies might still need to transfer funds from other programs. An account with an indefinite appropriation, in contrast, would eliminate the need for transferring funds from other programs but would offer no inherent incentives for the agencies to contain suppression costs. Furthermore, both definite and indefinite appropriations could raise the overall federal budget deficit, depending on whether funding levels for other agency or government programs are reduced. The Federal Land Assistance, Management, and Enhancement bill would establish a wildland fire reserve account; the administration’s budget for fiscal year 2010 also proposes a $282 million reserve account for the Forest Service and a $75 million reserve account for Interior to provide funding for firefighting when the appropriated 10-year average is exhausted. The agencies responsible for managing wildland fires on federal lands have unquestionably improved their understanding of the nation’s wildland fire problem and have positioned themselves to respond to fire more effectively. Noteworthy advances include more flexible firefighting strategies, better information and decision-making tools, and a more coordinated approach. 
Yet it is not clear how much ground the agencies have gained through these improvements, because at the same time the agencies have been working to improve, the conditions contributing to the nation’s fire problem have worsened—with increasing development in the wildland-urban interface, a continued excess of accumulated fuels, and growing evidence of the effects of climate change. The agencies have recognized that additional, strategic action is needed if they are to get ahead of the fire problem rather than simply react to it, but they have yet to take the bold steps we believe are necessary to implement such a strategic approach. Such steps—including implementation of a cohesive strategy and efforts to better predict, contain, and share firefighting costs—could, over time, allow the agencies to better prepare for, and respond to, the severe wildland fire seasons to come. Without such steps, the agencies risk failing to capitalize on the important, but incomplete, improvements they have made—and risk losing ground in their fight to manage the wildland fire problem. We are making no new recommendations at this time. As noted, however, we believe that our previous recommendations—which the agencies have generally agreed with—could, if implemented, substantially assist the agencies in capitalizing on the important progress they have made to date. We provided a draft of this report to the Departments of Agriculture and the Interior for comment. Both the Forest Service and the Department of the Interior agreed with the findings in the report. The Forest Service’s and Interior’s written comments are reproduced in appendixes II and III, respectively. We are sending copies of this report to interested congressional committees; the Secretaries of Agriculture and the Interior; the Chief of the Forest Service; the Directors of the Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and National Park Service; and other interested parties. 
The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To determine progress the Forest Service and Department of the Interior agencies have made in managing wildland fire, we reviewed pertinent agency documents and our previous reports and testimonies. To identify the agencies’ overall approach to managing wildland fires and any changes they have made to that approach, we reviewed key agency documents, including the 1995 and 2001 federal wildland fire management policies and the key documents making up the National Fire Plan. To identify key steps the agencies have taken to address the growing wildland fire problem and any improvement resulting from these steps, we reviewed agency documents and previous GAO reports and testimonies related to wildland fire. To further our understanding of the changes in the agencies’ approach to managing wildland fire and the steps they have taken to address the problem—and to identify any additional agency efforts to improve their wildland fire programs—we interviewed various agency officials, including officials in Washington, D.C., and at the National Interagency Fire Center in Boise, Idaho. To determine the key actions we previously recommended and believe are still needed to improve the agencies’ management of wildland fire, we reviewed our previous reports and testimonies and identified steps we had previously recommended the agencies take to improve their wildland fire programs. 
In many cases, our earlier recommendations were based on our review of agency documents and of independent analysis of the agencies’ programs (including reviews by the National Academy of Public Administration and the National Association of State Foresters). To determine the status of the agencies’ implementation of our recommendations, we reviewed relevant agency documents and interviewed agency officials. We conducted this performance audit from January 2009 to September 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Steve Gaty, Assistant Director; David P. Bixler; Ellen W. Chu; Jonathan Dent; Richard P. Johnson; and Kim Raheb made key contributions to this report. Wildland Fire Management: Actions by Federal Agencies and Congress Could Mitigate Rising Fire Costs and Their Effects on Other Agency Programs. GAO-09-444T. Washington, D.C.: April 1, 2009. Forest Service: Emerging Issues Highlight the Need to Address Persistent Management Challenges. GAO-09-443T. Washington, D.C.: March 11, 2009. Wildland Fire Management: Interagency Budget Tool Needs Further Development to Fully Meet Key Objectives. GAO-09-68. Washington, D.C.: November 24, 2008. Wildland Fire Management: Federal Agencies Lack Key Long- and Short- Term Management Strategies for Using Program Funds Effectively. GAO-08-433T. Washington, D.C.: February 12, 2008. Forest Service: Better Planning, Guidance, and Data Are Needed to Improve Management of the Competitive Sourcing Program. GAO-08-195. Washington, D.C.: January 22, 2008. 
Wildland Fire Management: Better Information and a Systematic Process Could Improve Agencies’ Approach to Allocating Fuel Reduction Funds and Selecting Projects. GAO-07-1168. Washington, D.C.: September 28, 2007. Natural Hazard Mitigation: Various Mitigation Efforts Exist, but Federal Efforts Do Not Provide a Comprehensive Strategic Framework. GAO-07-403. Washington, D.C.: August 22, 2007. Wildland Fire: Management Improvements Could Enhance Federal Agencies’ Efforts to Contain the Costs of Fighting Fires. GAO-07-922T. Washington, D.C.: June 26, 2007. Wildland Fire Management: A Cohesive Strategy and Clear Cost- Containment Goals Are Needed for Federal Agencies to Manage Wildland Fire Activities Effectively. GAO-07-1017T. Washington, D.C.: June 19, 2007. Wildland Fire Management: Lack of Clear Goals or a Strategy Hinders Federal Agencies’ Efforts to Contain the Costs of Fighting Fires. GAO-07-655. Washington, D.C.: June 1, 2007. Department of the Interior: Major Management Challenges. GAO-07-502T. Washington, D.C.: February 16, 2007. Wildland Fire Management: Lack of a Cohesive Strategy Hinders Agencies’ Cost-Containment Efforts. GAO-07-427T. Washington, D.C.: January 30, 2007. Biscuit Fire Recovery Project: Analysis of Project Development, Salvage Sales, and Other Activities. GAO-06-967. Washington, D.C.: September 18, 2006. Wildland Fire Rehabilitation and Restoration: Forest Service and BLM Could Benefit from Improved Information on Status of Needed Work. GAO-06-670. Washington, D.C.: June 30, 2006. Wildland Fire Suppression: Better Guidance Needed to Clarify Sharing of Costs between Federal and Nonfederal Entities. GAO-06-896T. Washington, D.C.: June 21, 2006. Wildland Fire Suppression: Lack of Clear Guidance Raises Concerns about Cost Sharing between Federal and Nonfederal Entities. GAO-06-570. Washington, D.C.: May 30, 2006. Wildland Fire Management: Update on Federal Agency Efforts to Develop a Cohesive Strategy to Address Wildland Fire Threats. GAO-06-671R. 
Washington, D.C.: May 1, 2006. Natural Resources: Woody Biomass Users’ Experiences Provide Insights for Ongoing Government Efforts to Promote Its Use. GAO-06-694T. Washington, D.C.: April 27, 2006. Natural Resources: Woody Biomass Users’ Experiences Offer Insights for Government Efforts Aimed at Promoting Its Use. GAO-06-336. Washington, D.C.: March 22, 2006. Wildland Fire Management: Timely Identification of Long-Term Options and Funding Needs Is Critical. GAO-05-923T. Washington, D.C.: July 14, 2005. Natural Resources: Federal Agencies Are Engaged in Numerous Woody Biomass Utilization Activities, but Significant Obstacles May Impede Their Efforts. GAO-05-741T. Washington, D.C.: May 24, 2005. Natural Resources: Federal Agencies Are Engaged in Various Efforts to Promote the Utilization of Woody Biomass, but Significant Obstacles to Its Use Remain. GAO-05-373. Washington, D.C.: May 13, 2005. Technology Assessment: Protecting Structures and Improving Communications during Wildland Fires. GAO-05-380. Washington, D.C.: April 26, 2005. Wildland Fire Management: Progress and Future Challenges, Protecting Structures, and Improving Communications. GAO-05-627T. Washington, D.C.: April 26, 2005. Wildland Fire Management: Forest Service and Interior Need to Specify Steps and a Schedule for Identifying Long-Term Options and Their Costs. GAO-05-353T. Washington, D.C.: February 17, 2005. Wildland Fire Management: Important Progress Has Been Made, but Challenges Remain to Completing a Cohesive Strategy. GAO-05-147. Washington, D.C.: January 14, 2005. Wildland Fires: Forest Service and BLM Need Better Information and a Systematic Approach for Assessing the Risks of Environmental Effects. GAO-04-705. Washington, D.C.: June 24, 2004. Federal Land Management: Additional Guidance on Community Involvement Could Enhance Effectiveness of Stewardship Contracting. GAO-04-652. Washington, D.C.: June 14, 2004. 
Wildfire Suppression: Funding Transfers Cause Project Cancellations and Delays, Strained Relationships, and Management Disruptions. GAO-04-612. Washington, D.C.: June 2, 2004. Biscuit Fire: Analysis of Fire Response, Resource Availability, and Personnel Certification Standards. GAO-04-426. Washington, D.C.: April 12, 2004. Forest Service: Information on Appeals and Litigation Involving Fuel Reduction Activities. GAO-04-52. Washington, D.C.: October 24, 2003. Geospatial Information: Technologies Hold Promise for Wildland Fire Management, but Challenges Remain. GAO-03-1047. Washington, D.C.: September 23, 2003. Geospatial Information: Technologies Hold Promise for Wildland Fire Management, but Challenges Remain. GAO-03-1114T. Washington, D.C.: August 28, 2003. Wildland Fire Management: Additional Actions Required to Better Identify and Prioritize Lands Needing Fuels Reduction. GAO-03-805. Washington, D.C.: August 15, 2003. Wildland Fires: Forest Service’s Removal of Timber Burned by Wildland Fires. GAO-03-808R. Washington, D.C.: July 10, 2003. Forest Service: Information on Decisions Involving Fuels Reduction Activities. GAO-03-689R. Washington, D.C.: May 14, 2003. Wildland Fires: Better Information Needed on Effectiveness of Emergency Stabilization and Rehabilitation Treatments. GAO-03-430. Washington, D.C.: April 4, 2003. Major Management Challenges and Program Risks: Department of the Interior. GAO-03-104. Washington, D.C.: January 1, 2003. Results-Oriented Management: Agency Crosscutting Actions and Plans in Border Control, Flood Mitigation and Insurance, Wetlands, and Wildland Fire Management. GAO-03-321. Washington, D.C.: December 20, 2002. Wildland Fire Management: Reducing the Threat of Wildland Fires Requires Sustained and Coordinated Effort. GAO-02-843T. Washington, D.C.: June 13, 2002. Wildland Fire Management: Improved Planning Will Help Agencies Better Identify Fire-Fighting Preparedness Needs. GAO-02-158. Washington, D.C.: March 29, 2002.
Severe Wildland Fires: Leadership and Accountability Needed to Reduce Risks to Communities and Resources. GAO-02-259. Washington, D.C.: January 31, 2002. Forest Service: Appeals and Litigation of Fuel Reduction Projects. GAO-01-1114R. Washington, D.C.: August 31, 2001. The National Fire Plan: Federal Agencies Are Not Organized to Effectively and Efficiently Implement the Plan. GAO-01-1022T. Washington, D.C.: July 31, 2001. Reducing Wildfire Threats: Funds Should Be Targeted to the Highest Risk Areas. GAO/T-RCED-00-296. Washington, D.C.: September 13, 2000. Fire Management: Lessons Learned From the Cerro Grande (Los Alamos) Fire. GAO/T-RCED-00-257. Washington, D.C.: August 14, 2000. Fire Management: Lessons Learned From the Cerro Grande (Los Alamos) Fire and Actions Needed to Reduce Fire Risks. GAO/T-RCED-00-273. Washington, D.C.: August 14, 2000. Federal Wildfire Activities: Issues Needing Future Attention. GAO/T-RCED-99-282. Washington, D.C.: September 14, 1999. Federal Wildfire Activities: Current Strategy and Issues Needing Attention. GAO/RCED-99-233. Washington, D.C.: August 13, 1999. Western National Forests: Status of Forest Service’s Efforts to Reduce Catastrophic Wildfire Threats. GAO/T-RCED-99-241. Washington, D.C.: June 29, 1999. Western National Forests: A Cohesive Strategy Is Needed to Address Catastrophic Wildfire Threats. GAO/RCED-99-65. Washington, D.C.: April 2, 1999. Western National Forests: Nearby Communities Are Increasingly Threatened By Catastrophic Wildfires. GAO/T-RCED-99-79. Washington, D.C.: February 9, 1999.

The nation's wildland fire problems have worsened dramatically over the past decade: both the average annual acreage burned and federal appropriations for wildland fire management have more than doubled.
The deteriorating fire situation has led the agencies responsible for managing wildland fires on federal lands--the Forest Service in the Department of Agriculture and four agencies in the Department of the Interior--to reassess how they respond to wildland fire and to take steps to improve their fire management programs. GAO reviewed (1) progress the agencies have made in managing wildland fire and (2) key actions GAO previously recommended and believes are still necessary to improve wildland fire management. GAO reviewed previous GAO reports and agency documents and interviewed agency officials. GAO prepared this report under the Comptroller General's authority to conduct evaluations on his own initiative. The Forest Service and the Interior agencies have improved their understanding of wildland fire's ecological role on the landscape and have taken important steps toward enhancing their ability to cost-effectively protect communities and resources by seeking to (1) make communities and resources less susceptible to being damaged by wildland fire and (2) respond to fire so as to protect communities and important resources at risk while also considering both the cost and long-term effects of that response. To help them do so, the agencies in recent years have reduced hazardous fuels, in an effort to keep wildland fires from spreading into the wildland-urban interface and to help protect important resources by lessening a fire's intensity; sponsored efforts to educate homeowners about steps they can take to protect their homes from wildland fire; and provided grants to help homeowners carry out these steps. The agencies have also made improvements that lay important groundwork for enhancing their response to wildland fire, including adopting new guidance on how managers in the field are to select firefighting strategies, improving the analytical tools that assist managers in selecting a strategy, and improving how they acquire and use expensive firefighting assets.
Despite the agencies' efforts, much work remains. GAO has recommended several key actions--including development of an overarching fire management strategy--that, if completed, would substantially improve the agencies' management of wildland fire. Nonetheless, the agencies have yet to: (1) Develop a cohesive strategy laying out various potential approaches for addressing the growing wildland fire threat, estimated costs associated with each approach, and the trade-offs involved. Such information would help the agencies and Congress make fundamental decisions about an effective and affordable approach to responding to fires. (2) Establish a cost-containment strategy that clarifies the importance of containing costs relative to other, often-competing objectives. Without such clarification, GAO believes managers in the field lack a clear understanding of the relative importance that the agencies' leadership places on containing costs and are therefore likely to continue to select firefighting strategies without duly considering the costs of suppression. (3) Clarify financial responsibilities for fires that cross federal, state, and local jurisdictions. Unless the financial responsibilities for multijurisdictional fires are clarified, concerns that the existing framework insulates nonfederal entities from the cost of protecting the wildland-urban interface--and that the federal government would therefore continue to bear more than its share of the cost--are unlikely to be addressed. (4) Take action to mitigate the effects of rising fire costs on other agency programs. The sharply rising costs of managing wildland fires have led the agencies to transfer funds from other programs to help pay for fire suppression, disrupting or delaying activities in these other programs. Better methods of predicting needed suppression funding could reduce the need to transfer funds from other programs.
The Department of Defense (DOD) spends about $8 billion annually to provide housing for families of active-duty military personnel. Seeking to provide military families with access to adequate, affordable housing, DOD either pays a cash allowance for families to live in private-sector housing or provides housing by assigning families to government-owned or government-leased units. The housing benefit is a major component of the military’s compensation package. DOD policy manual 4165.63M states that private-sector housing in the communities near military installations will be relied on as the primary source of family housing. The policy states that government housing may be programmed when the local communities cannot meet the military’s need for acceptable and affordable housing. Government housing is also provided for a small number of personnel, often fewer than 15, who reside on an installation for reasons of military necessity. DOD policy further states that installation commanders are responsible for the housing program at their installations and have broad authority to plan, program, and determine the best use of resources. About 605,000, or two-thirds, of the military families in the United States live in private housing. These families receive cash housing allowances to help defray the cost of renting or purchasing housing in the communities near their installations. Housing allowances, which totaled about $4.4 billion in fiscal year 1995, cover about 80 percent of the typical family’s total housing costs, including utilities. The families pay the remaining portion of their housing costs out of pocket using other sources of income. Military families receive assistance in locating private housing from housing referral offices operated at each major installation. Housing allowances consist of the basic allowance for quarters and the variable housing allowance.
The basic allowance amount varies by military paygrade and is paid to all service members in the United States who do not occupy government quarters. The variable allowance varies by paygrade and by geographic location and is paid to members who receive the basic allowance and live in high-cost areas of the United States. The variable housing allowance was designed to equalize members’ out-of-pocket housing costs across locations in the United States. In 1985, the Congress adjusted housing allowances so that the out-of-pocket costs would be 15 percent of average housing costs. However, the typical out-of-pocket amount today is about 20 percent because increases in housing allowances have not kept pace with housing costs. The remaining 293,000, or one-third, of the military families in the United States live in government-owned or -leased housing. These families forfeit their housing allowances but pay no out-of-pocket costs for housing or utilities. In fiscal year 1995, DOD spent about $2.8 billion to operate and maintain government-owned and -leased family quarters. In addition, about $724 million was authorized to construct and renovate government family housing units in fiscal year 1995. In fiscal year 1996, the authorization amount increased to $939 million. According to DOD, the majority of the existing inventory of government-owned family housing is old; lacks modern amenities; and needs major repair, renovation, or replacement. DOD estimates that over 200,000 of the existing houses do not meet current suitability standards and need to be fixed up or closed. DOD estimates that the cost to modernize the existing family housing inventory is about $20 billion. Table 1.1 summarizes by service the number of families living in private-sector and government housing. Separate DOD organizations manage the two key components of the family housing program—allowances and government housing.
Housing allowances are the responsibility of the Under Secretary of Defense for Personnel and Readiness and primarily are managed centrally at DOD headquarters by the organization responsible for all compensation issues, including basic pay and other types of allowances. This organization is the focal point for policy matters and initiatives related to housing allowances. Government-owned and -leased housing is the responsibility of the Under Secretary of Defense for Acquisition and Technology. Although DOD headquarters establishes overall management policy for government housing, primary management responsibility is delegated to the individual services, their major commands, and individual installations. As the nation’s largest landlord, DOD maintains a very large infrastructure to oversee, operate, maintain, and construct government family housing, involving many thousands of government and contract employees. DOD recently placed increased emphasis on improving the quality of life of military members and their families. Because housing is viewed as a key factor affecting quality of life, DOD initiated plans to improve the family housing program. For example, with the support of the Congress, housing allowances were increased for fiscal year 1996 to reduce the average out-of-pocket amounts paid for housing in the civilian community. Increased funding also was approved for construction and modernization of government housing units. The Congress is considering similar actions for fiscal year 1997. Recognizing that the majority of existing government housing units are old and need major improvements, the Congress also approved DOD’s request for new authorities in fiscal year 1996 to test ways to encourage private developers to build and improve housing that would be available to the military.
By promoting use of private capital, the goal is to improve military housing more quickly than could be achieved through normal military construction funding. DOD established the Housing Revitalization Support Office to oversee implementation of the new program, and the first projects are expected to be approved by the end of fiscal year 1996. In an October 1995 report, the Defense Science Board Task Force on Quality of Life reported that the military’s housing delivery system was intrinsically flawed. Among many recommendations to improve the housing program, the task force recommended establishment of a military housing authority to manage DOD housing using private housing industry management principles and practices. DOD has evaluated the report and has begun planning to implement some of the recommendations such as improved housing referral services. DOD did not have immediate plans to request authority to implement a military housing authority. We reviewed DOD’s military family housing program in the United States to determine whether (1) DOD’s policy of relying primarily on private housing to meet military family housing requirements is cost-effective, (2) the military services are complying with this policy, and (3) DOD’s family housing policies result in equitable treatment for all military families. We performed our review at the DOD, Air Force, Army, Navy, and Marine Corps headquarters offices responsible for overseeing housing allowances and for managing military family housing. We also performed work at the Housing Revitalization Support Office; the Per Diem, Travel and Transportation Allowance Committee; the DOD offices responsible for quality of life issues, retention, and recruitment; the Air Force’s Air Combat Command; the Army’s Training and Doctrine Command; the Commander-in-Chief, Atlantic Fleet; and the Naval Facilities Engineering Command, Atlantic and Southwest Divisions. 
At each location, we interviewed responsible agency personnel and reviewed applicable policies, procedures, and documents. We obtained and analyzed detailed housing requirements and availability information from 21 Air Force, Army, Navy, and Marine Corps installations that manage government housing. The 21 installations were selected judgmentally to obtain a cross section of installations by service, by geographic location, and by reported family housing availability. We also included 6 of the 10 installations identified by DOD in May 1995 as areas with the greatest shortfall of affordable housing. We visited 8 of the 21 installations, 2 from each service, to review in greater depth the family housing conditions at each installation, including family housing requirements, available private-sector and government housing, housing referral services, and government housing construction and renovation plans. We also toured government-owned housing at these installations. Appendix I identifies the installations included in our review. To determine the cost-effectiveness of DOD’s policy of relying primarily on private-sector housing, we compared the government’s reported costs for families to live in private housing with the government’s reported costs for families to live in government quarters. To do this, we reviewed prior DOD and other agency reports on military housing costs and analyzed reported military costs for each housing alternative in fiscal year 1995. To determine the military services’ compliance with the DOD housing policy of relying on private housing, we analyzed Army and Air Force summary data on installation family housing requirements, civilian housing availability, and government-owned housing inventories. The Navy and the Marine Corps did not accumulate comparable data. 
We also reviewed the family housing situation in detail at the 21 selected installations to determine whether the installations were relying first on private-sector housing to meet military housing requirements. For installations that did not appear to be in compliance with the DOD policy, we explored the reasons for noncompliance. To determine whether DOD’s family housing policies result in equitable treatment for all military families, we compared housing costs for military families that live in private housing and in government housing. We also reviewed the government’s reported cost of the housing benefit provided to service members in the same paygrade when living in private and government housing. Our review was performed between May 1995 and June 1996 in accordance with generally accepted government auditing standards. Studies by the Congressional Budget Office (CBO) and DOD show that the cost to the government is significantly less when military families are paid a housing allowance and live in private housing. These studies and our analysis estimate that the cost difference to the government for each family that lives in private housing, instead of government housing, ranges from about $3,200 to $5,500 annually. The difference is primarily due to three reasons. First, the government pays about 80 percent of the housing costs for a family that lives in private housing compared to paying 100 percent for a family in government housing. Second, the government pays significantly less federal school impact aid for military dependents when they live in private housing, which is subject to local property taxes. Third, the private sector generally can build, operate, and maintain a family housing unit at less cost than the government. Prior to the 1960s, military personnel normally lived in government housing. Most enlisted personnel were single and lived in government barracks, and married officers of sufficient rank usually lived in government family housing. 
However, the advent of the Cold War with a large peacetime military force, decisions to make government family housing available to most married personnel, and a significant increase in the percentage of married enlisted personnel resulted in a tremendous increase in the demand for military family housing. The majority of DOD’s current family housing inventory was built between the late 1950s and the late 1960s to help meet this increased housing demand. But DOD recognized that it could not afford to construct enough housing for all personnel with dependents. Thus, in the mid-1960s, DOD adopted the policy of relying primarily on private-sector housing in areas where affordable housing was available and paying service members allowances to help defray their housing costs. In 1993, CBO issued a report on military family housing in the United States that addressed many issues related to reducing the cost of the family housing program. The report included an analysis comparing the average annual cost of a military housing unit with the cost of a private-sector housing unit obtained by a military family. The comparison showed that in the long run the government spent $5,500 more annually when military housing was provided instead of paying an allowance for a family to live in private-sector housing. In response to the report, DOD performed an analysis comparing the same costs. Because of some differences in the assumptions and data used for some cost elements such as long-term capital investment and school impact aid costs, the DOD estimates differed somewhat from the CBO study. However, DOD also concluded that the reduced costs to the government by using private-sector housing was significant—about $3,200 per family annually. Key details of the two analyses are shown in table 2.1. A primary reason for the cost difference is that military families pay a portion of the housing costs out of pocket when they live in private housing. 
Families living in private housing typically pay about 20 percent of their housing costs out of pocket because housing allowances cover about 80 percent of average housing costs. CBO estimated that a family’s out-of-pocket cost would be $1,700 annually and DOD estimated the cost would be $1,929. The out-of-pocket amount must be paid from other sources, such as other military compensation or spousal income. Families living in government quarters do not pay out-of-pocket costs because the military pays all housing and utility costs. The difference in federal impact aid paid by the Department of Education and DOD is another key factor explaining the difference in the costs to the government. Impact aid is paid to local governments to help cover the cost of educating dependents of military members. The impact aid for each dependent is significantly higher for students that live with their families in government quarters because government housing is not subject to local property taxes. When military families live in private housing, a much smaller amount is paid for each student because the housing unit is subject to local property taxes. The CBO analysis found a third factor contributing to the difference in government housing costs. For a variety of reasons, CBO and others have concluded that the private sector can build, operate, and maintain housing more economically than DOD. For example, table 2.1 shows that CBO estimated that the cost of a government housing unit, excluding school impact aid, was $11,100 annually. In comparison, the estimated annual cost of a private housing unit was $9,200. (This amount is the sum of the housing allowance and the out-of-pocket cost.) 
According to CBO, the difference of $1,900 represents the extra costs that the military incurs to deliver a housing unit and is caused by the government’s long planning and budgeting cycle, project oversight costs, higher labor costs, and detailed regulations and constraints on housing design and construction. We performed a similar analysis using fiscal year 1995 costs. The analysis showed that the government spent $4,957 less annually for each family that lived in private housing (see table 2.2). We based our estimate of the cost of a government housing unit on reported DOD operation and maintenance costs for fiscal year 1995 and DOD’s estimates of the costs for capital investment, school impact aid, and referral services. We based our estimate of the cost of a private housing unit on the reported housing allowances paid in that year and DOD’s estimates of the costs for school impact aid and referral services. Similar to the results of the CBO analysis, our analysis also shows that there are three primary causes for the cost difference to the government. First, military families pay a portion of their housing costs out of pocket when living in private-sector housing. We estimated that these costs would amount to $2,016 in fiscal year 1995. Second, the government pays more school impact aid when military dependents live in government housing. Third, the government’s cost is greater than the private sector’s cost to deliver a family housing unit. Although DOD’s policy of relying first on private housing to meet military family housing needs is stated in military housing instructions and is cited in congressional hearings on military housing, DOD and the services have not taken full advantage of the policy. Until the recent introduction of new initiatives to encourage private investment in military housing, DOD had placed little emphasis on increasing reliance on private housing. 
Instead, even at installations where surrounding communities can meet additional military housing needs, the services continue to operate old housing that does not meet suitability standards and, in some cases, improve or replace government housing. As a result, opportunities for reducing housing costs have been lost because DOD has not taken advantage of the significant savings available from use of private housing. DOD and the services have not maximized use of private housing for a variety of reasons, including a reliance on housing requirements analyses that often underestimate the private sector’s ability to meet family housing needs; a concern over quality of life, although there is little evidence that family quality of life is better served through use of government housing; a reluctance to designate a greater portion of existing government housing for use by junior personnel who are less able to afford private housing than senior personnel; and a housing allowance system that results in available private housing being unaffordable in some areas. Current initiatives to increase housing allowances and to encourage private investors to build housing for military families have the potential for reducing costs while meeting military family housing needs. However, additional steps are needed to ensure that maximum use of private housing is made before renovating or replacing government housing that has reached the end of its economic life. Information reported by the Army and the Air Force shows that many military installations in the United States have not maximized the use of private housing to meet military family housing needs. For example, the Army reports that over 34,000 government family units at 59 Army installations are occupied but are considered surplus—meaning that the communities near these installations have available and affordable housing that could meet these requirements. 
To illustrate, the Army reports that Forts Knox, Polk, and Eustis have 3,387, 2,422, and 824 surplus government housing units, respectively. Similarly, the Air Force reports that over 4,000 government units at 13 Air Force installations are surplus. For example, Andrews, Langley, and Seymour Johnson Air Force Bases have 985, 398, and 262 surplus housing units, respectively. The Navy and the Marine Corps do not accumulate comparable summary housing information for their installations. However, housing referral officials at some Navy and Marine Corps installations included in our review stated that affordable private housing was readily available in the local civilian communities. For example, housing referral officials at Cherry Point Marine Corps Air Station stated that the civilian communities near the installation had hundreds of available and affordable family housing units. Referral officials at the Norfolk naval complex stated that the local community could support additional housing needs for officers that currently live in government quarters. Although the services report some surplus government housing units, our analyses indicate that the private sector can meet significantly more of the military’s family housing needs. We found that systemic problems in the housing requirements determination process can understate the private sector’s ability to meet military needs and result in a self-perpetuating requirement for government housing, even at locations where affordable private housing is available. Our evaluation of the services’ housing requirements analyses for the 21 installations included in our review showed that methodology problems understated the private sector’s ability to meet military needs for 13 installations. As a result, the installations planned to continue operating and, in some cases, improve government housing instead of saving money by relying more on private-sector housing. 
The services periodically perform housing analyses for each major installation to forecast military housing requirements and the availability of government and private housing units to meet the requirements. Each analysis normally includes a detailed estimate of the installation’s housing requirements that considers individual paygrade and bedroom needs based on family size. The services’ analyses estimate and compare military family housing requirements with the inventory of government-owned and -leased housing and with the estimated number of available private housing units that meet the military’s criteria for suitability and affordability. The process predicts whether an installation will have a housing surplus or deficit in the near future. Predicted deficits can form the basis for justifying government housing construction and renovation projects. Predicted surpluses can indicate a need to close government units. The housing analysis process is complex because many variables are considered and because each analysis attempts to predict future housing needs and housing availability. Also, the services use different methods to compute housing requirements. The Army determines requirements centrally for all installations using a computer model. The Air Force, the Navy, and the Marine Corps use private contractors to perform housing analyses for each installation. We found two key methodology problems in the military’s housing requirements process that tend to perpetuate the need for government housing, including housing that does not meet DOD’s suitability standards, by understating the private sector’s ability to meet military needs. The first problem is that the housing analyses match military family housing requirements against government housing units before considering private housing units. 
Regardless of the private sector’s ability to meet military housing needs, private-sector housing is considered only if housing deficits remain after the government housing inventory is fully used. The second problem is that many housing analyses assume that only a small portion of a community’s vacant rental units will be available for military families to occupy. As a result, the analyses underestimate private housing availability because they exclude from consideration hundreds of suitable vacant units. From a requirements determination perspective, matching housing needs against government housing units, including those that DOD considers unsuitable, before available private housing units is inconsistent with DOD’s policy of relying first on the private sector. For example, if all family housing requirements can be met by government housing units, then private-sector housing is not considered, even though private housing may have been sufficient to meet some, or even all, of the military’s requirements. This situation can result in a self-perpetuating requirement for government housing. The following examples illustrate the problem. The Army’s housing requirements model estimated that 844 of Fort Eustis’ 1,330 family housing units were surplus. If the model had matched housing requirements against private-sector housing before matching them against government housing, the model would have estimated that 1,170 government units were surplus, an increase of 326 surplus units that could be closed rather than replaced when they reach the end of their economic life. Fort Eustis officials stated that most of the installation’s housing was old and did not meet current suitability standards. At the time of our visit in July 1995, Fort Eustis had ongoing projects to demolish 367 housing units constructed in the 1950s and to upgrade 8 senior officer quarters. The officials estimated that it would cost about $42 million to improve the remaining housing inventory. 
At Fort Knox, the Army’s model estimated that 3,273 of the 4,364 family housing units were surplus. However, if housing requirements had been matched against private-sector housing before matching them against government housing, the model would have estimated that 3,878 government units were surplus, an increase of 605 surplus units. During our visit in April 1996, Fort Knox officials stated that most of the installation’s housing was old and did not meet current suitability standards. They stated that they planned a net reduction of 812 government units by the year 2000 and would like to bring the remaining government housing inventory up to current standards at a cost of about $127 million. Army headquarters officials stated that they have decided to change their model so that military requirements are matched first against private-sector housing. The officials stated that the change will result in reporting a greater number of surplus government housing units and will help Army officials identify installations where old government housing units could be closed to save operating costs. Air Force, Navy, and Marine Corps officials did not indicate that they planned to change the requirements determination process to consider private-sector housing first. The second problem in the requirements determination process that can understate the private sector’s ability to meet military housing needs is the methods that are used to estimate how many vacant rental units will be available to military families. The Air Force and other housing experts consider that the natural rental vacancy rate in most markets is about 5 percent. This vacancy rate provides for vacancies caused by normal rental turnover and by rental units undergoing repairs or renovations. Vacant rental units above the 5-percent level often are called excess vacancies and normally are considered available for rent. 
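The excess-vacancy arithmetic described above is straightforward; a minimal sketch, using the 5-percent natural vacancy rate cited by the Air Force and a hypothetical community (the community figures are illustrative, not from the report):

```python
NATURAL_VACANCY_RATE = 0.05  # allowance for turnover and units under repair

def excess_vacancies(total_rental_units: int, vacant_units: int) -> int:
    """Vacant rental units above the natural rate, which the report
    notes are normally considered available for rent."""
    natural = total_rental_units * NATURAL_VACANCY_RATE
    return max(0, round(vacant_units - natural))

# A hypothetical community with 20,000 rental units and an 8% vacancy rate:
print(excess_vacancies(20_000, 1_600))  # 1,600 vacant - 1,000 natural = 600
```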
Air Force officials stated that all suitable excess vacancies should be considered as being available to the military. However, the market analyses for five of the six Air Force installations included in our review concluded that only a small portion of the suitable excess vacancies would be available to military families. Similarly, the market analyses for six of the nine Navy and Marine Corps installations included in our review concluded that only a small portion of the suitable excess vacancies would be available to military families. The Army’s model does not predict the existence of excess vacancies because the model assumes that in the long run, rental housing supply will match rental housing demand. The following examples illustrate the problem with Air Force, Navy, and Marine Corps analyses. The Langley Air Force Base market analysis reported that 398 of the installation’s 1,606 government housing units would be surplus in the year 2000. However, the analysis estimated that only 888 of 7,727 suitable excess vacancies in the private sector would be available and affordable to Langley families in that year. The analysis assumed that the remaining vacancies would be available to families from nearby Army and Navy installations. However, the requirements analyses from these installations estimated that they would only use 1,223 rental vacancies. If the Langley analysis had assumed that most of the remaining suitable and affordable excess vacancies were available to Langley families, the analysis would have predicted that the private sector could meet all of Langley’s family housing needs. Langley officials stated that most of the installation’s housing did not meet current suitability standards. In 1994, an Air Force contractor estimated that improving the housing to current standards would cost $99 million. At the time of our visit to Langley in August 1995, an $8.5-million project was underway to renovate 144 family units. 
Another project, which would cost about $16 million, had begun to replace 180 family units constructed in 1976 with 148 new units. Langley officials stated that, although the units being demolished were only about 20 years old, they had experienced maintenance problems and did not meet current suitability standards. The Cherry Point Marine Corps Air Station market analysis reported that the installation would have a deficit of 600 family units in the year 1999. The analysis estimated that only 140 of 1,944 suitable and affordable vacancies in the private sector would be available for Marine families. If the analysis had assumed that most suitable and affordable excess vacancies were available to military families, the analysis would have predicted that Cherry Point would have a surplus of government housing units instead of a deficit in the year 1999. Further, the analysis assumed that 2,352 additional families would be moving to Cherry Point because of Base Realignment and Closure Commission decisions in 1993. In 1995, the Commission changed this decision, and the additional personnel will not be moving to the installation. Cherry Point officials stated that most of the installation’s housing was old and did not meet current suitability standards. They estimated that about $187 million would be required to renovate the existing inventory of government housing units. At the time of our visit to Cherry Point in September 1995, 165 officer and enlisted housing units were being renovated at a cost of $8.3 million. The questionable quality of the services’ housing requirements determination process has been a long-standing problem that has been cited in several past audit reports. For example, a 1992 report by the DOD Inspector General found that the Navy and the Air Force overstated family housing requirements and understated the amount of private-sector housing available to satisfy requirements for several proposed housing projects. 
A 1994 report by the Naval Audit Service concluded that the Navy overstated housing requirements at eight installations because the requirements determination process was based on flawed procedures, poor implementation of those procedures, and inaccurate data. More recently, the House Committee on National Security’s report accompanying the National Defense Authorization Act for Fiscal Year 1996 directed the Secretary of Defense to study the different methods used by the services to determine housing requirements and develop a departmentwide standard methodology. In response, the DOD Inspector General is auditing the various methods used by the services to determine whether the methods identify housing requirements in an accurate and economical manner. DOD plans to assess the findings and recommendations of this audit and respond to the Congress by October 15, 1996. In addition to DOD’s underestimating the private sector’s ability to meet military needs, other factors have contributed to the continued use of government housing, even when private housing is available. These factors include concerns over quality of life, a reluctance to designate more government housing for junior personnel, and housing allowance amounts that make available private housing unaffordable. Over the past 2 years, DOD has placed increased emphasis on improving the quality of life of military members and their families. Citing a direct relationship between quality of life and readiness, DOD has pursued efforts to improve key quality of life elements such as compensation, family separation time, community and family services, and housing. In the housing area, many DOD housing officials stated that the quality of life of military families is better served through use of government housing. As a result, they stated that many installations have continued to operate government housing at locations where the private sector could meet additional military housing needs. 
DOD officials noted that by living in government housing, families have the nearby support of other military families, enjoy a sense of greater security and safety, can save transportation costs, and are closer to on-base amenities such as commissaries, child care, and recreation facilities. Further, on-base housing is always affordable since families do not pay out-of-pocket costs for their housing, utilities, or maintenance. Some officials stated that if additional families were forced to live in private housing, then more families would pay out-of-pocket housing costs, which could reduce overall quality of life and adversely affect morale, retention, and recruitment. Because quality of life is somewhat intangible depending largely on individual preferences and perceptions, it is difficult to identify, measure, and assess the factors that most affect a service member’s quality of life. For the most part, we found little quantifiable evidence that supports the view that quality of life is better served through military housing. Without such information, DOD does not know whether decreased reliance on government housing would result in adverse consequences. DOD officials agreed that there is little quantifiable data available to show that military families prefer to live in military housing. However, they stated that the high demand for government quarters is strong evidence that families prefer military housing and that continued operation and improvement of this housing will enhance quality of life. DOD officials noted that government housing has a very high occupancy rate and that most installations have a waiting list of families that desire to move from private housing to government quarters. Some evidence, however, indicates that the current demand for government housing may not be an accurate indication of member preferences for housing because military families have a significant financial incentive to seek government quarters. 
For example, some Army officials stated that the demand for government quarters in all likelihood would be far less if families paid the same out-of-pocket costs for government housing that they pay for private housing. Also, in its 1993 report on military family housing, CBO reported that “not all, or even most, families who value the on-base life-style would choose to live in DOD housing if they were faced with paying its full cost . . . It seems likely that without the implicit price subsidy for DOD housing, many more families would choose to live in the private sector.” Although little data is available on service member housing preferences, we identified some information that cast doubt on the view that members prefer government housing. For example, a May 1995 report on quality of life in the Air Force concluded that “more Air Force personnel live off-base and given the means, prefer to live off-base.” The report stated that the factors, ranked in order, that affected members’ decisions to live in government or private housing included where they were stationed, safety, cost, and housing quality. Similarly, a January 1995 report on quality of life in the Marine Corps reported that members living in private housing were more satisfied with their residence than those living in on-base family housing. During our visit to the Commander-in-Chief, Atlantic Fleet, in August 1995, housing officials and top enlisted personnel stated that the quality of life of military families was better served through use of private housing. They stated that most military families prefer to live in civilian communities, particularly when they can afford to purchase homes. Concerning the impact that housing conditions have had on military quality of life and readiness, we found little evidence that retention or recruitment has been affected adversely by existing housing conditions. 
For example, in DOD’s annual report on personnel readiness for fiscal year 1995, DOD reported that skillful management of the force drawdown since fiscal year 1992 has allowed DOD to improve the quality of the force and its readiness. Also, a DOD news release in November 1995 reported that the armed services had met their fiscal year 1995 recruiting goals “while maintaining the high quality necessary to maintain a capable, ready force.” Similarly, a Navy survey of enlisted personnel who left the service in fiscal year 1995 showed that the top three reasons for leaving were basic pay, promotion and advancement opportunities, and amount of family separation. A comparable fall 1995 Army survey of enlisted personnel reported that the top four reasons for leaving the Army were the amount of basic pay, quality of Army life, promotion opportunity, and separation from family. Less than 1 percent of the respondents in each survey cited housing quality or housing availability as a reason for leaving the service. Some military installations reserve government housing for senior personnel at locations where private-sector housing is available and affordable for senior personnel. When the private housing near these installations is too expensive for junior personnel, this practice can increase reported housing deficits because junior personnel may be living in private housing that is considered too expensive or otherwise unsuitable for their paygrades. For example, the Navy’s November 1993 housing analysis for the Norfolk naval complex forecasted a deficit of 2,171 housing units for 1998. The deficit primarily reflected the shortage of private housing that was affordable to junior enlisted personnel. However, the analysis also reported that enough private housing was available and affordable to meet the housing needs for all officers and most senior enlisted personnel. 
If senior personnel and their families lived in private housing and if the government housing currently reserved for their use were redesignated for use by junior personnel, the reported deficit would be 775 units, a decrease of 1,396 units. This problem with housing for junior enlisted personnel is caused by two factors. First, because housing allowances increase with rank, junior enlisted personnel are least able to afford private-sector housing. In the Washington, D.C., area, for example, the monthly housing allowance for the family of a junior enlisted member in paygrade E-3 is $659. The allowance for an E-6 family is $893, or 35 percent more than the E-3 family’s allowance. Second, junior enlisted families are more likely to live in private housing than higher graded personnel because proportionally less government housing is assigned to junior members and their families. In fiscal year 1995, about 22 percent of junior enlisted members’ families in paygrades E-1 through E-3 lived in government housing compared to 37 percent of the families in paygrades E-4 through E-6 and 30 percent of the families in paygrades E-7 through E-9. DOD officials agreed that the relatively low proportion of government housing assigned to junior personnel is a problem. They stated that one reason the situation exists is because, prior to the early 1970s, junior enlisted personnel with dependents normally were not authorized to live in government family housing. As a result, housing constructed before that time was designed for more senior personnel and designated for their exclusive use. Many members continue to view assignment to government quarters as a privilege traditionally available to more senior, career personnel. 
DOD officials also stated that, although more government housing has been made available for junior enlisted personnel over the past few years, some installations continue to reserve government housing for senior personnel in areas where senior personnel can afford to live in private housing. They stated that some installation commanders are reluctant to designate more government housing for junior personnel because of the perceived adverse impact on senior personnel. Specifically, some commanders are concerned that requiring senior personnel to live in private housing so that junior personnel can live on base will be viewed by senior members as a reduction in their benefits and quality of life. Although we understand the reasons supporting the reluctance to designate more quarters for junior personnel, the practice can reduce the savings available to the government by maximizing use of private housing. Also, to lessen the potential impact on senior personnel and their families, housing redesignations could be accomplished over a phased period of time as the families move from government housing when transferred to other duty stations. Another factor that contributes to the continued reliance on government housing is the high cost of private housing. DOD officials stated that many communities surrounding military installations have housing that is available to military families but often the housing is not affordable. DOD considers private housing to be unaffordable to a family if the housing costs, including utilities, exceed an amount equal to the sum of a member’s basic housing allowance, variable housing allowance, and an additional 50 percent of the basic allowance. Our analysis of the housing situation at 21 installations identified several cases where private housing was available but was considered to be unaffordable. 
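DOD's affordability test, as described above, can be expressed as a simple formula: housing costs, including utilities, must not exceed the basic allowance plus the variable allowance plus an additional 50 percent of the basic allowance. A sketch follows; the allowance amounts in the example are hypothetical, not figures from the report.

```python
def is_affordable(monthly_cost: float, basic_allowance: float,
                  variable_allowance: float) -> bool:
    """DOD affordability test as described in the report: housing costs,
    including utilities, must not exceed BAQ + VHA + 50% of BAQ."""
    ceiling = basic_allowance + variable_allowance + 0.5 * basic_allowance
    return monthly_cost <= ceiling

# Hypothetical member: a $500 basic allowance and $150 variable allowance
# give an affordability ceiling of 500 + 150 + 250 = $900 per month.
print(is_affordable(850, 500, 150))    # True
print(is_affordable(1_000, 500, 150))  # False
```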
For example, the Navy’s October 1994 housing analysis for the San Diego naval complex reported that affordability, rather than availability, of private housing was a key reason causing a military housing deficit in the area. The analysis reported that in 1994, 41 percent of all one- and two-bedroom rental units and 98 percent of all three-bedroom rental units were not affordable to service members in paygrades E-1 through E-3. For all enlisted paygrades, 20 percent of all one- and two-bedroom rental units and 75 percent of all three-bedroom units were not affordable. Partly because similar affordability problems were predicted for 1999, the analysis estimated that a housing deficit would continue. Affordability of private housing was also a key problem reported in the March 1994 housing analysis for the Marine Corps’ Twentynine Palms installation. The analysis predicted that in 1998, 45 percent of all one- and two-bedroom rental units and 100 percent of all three-bedroom rental units would be unaffordable to service members in paygrades E-1 through E-3. The analysis also reported that over 1,200 military families who lived in private rental housing in 1993 were unacceptably housed primarily because of high costs. However, service members in most paygrades at Twentynine Palms do not receive the variable portion of the housing allowance because the installation is not considered to be in a high-cost area. Assuming that the market analyses correctly reported the actual costs of private-sector housing, situations such as these raise questions about the adequacy of the housing allowance program. In addition to helping defray housing costs when living in the private sector, housing allowances are intended to equalize housing costs paid by service members in the same paygrade, regardless of where they live in the United States. 
In other words, housing allowances are designed so that members in the same paygrade should pay the same amount of their housing costs whether they live in a rural, low-cost area or in an urban, high-cost area. For example, for fiscal year 1996, housing allowances were established at levels so that an E-5 family living in private housing will pay about $153 a month out of pocket for housing costs, regardless of where the family lives in the United States. The housing allowance is supposed to cover the remaining portion of the typical total housing costs for each geographic area. If housing allowances functioned as intended, it appears unlikely that an installation’s housing analysis would find that most private-sector housing was unaffordable to military families. Yet, this is the case in some locations. According to DOD officials, the primary explanation for this is that a different measure of housing costs is used to determine housing allowances than is used to determine private housing affordability in housing analyses. To illustrate, the variable portion of the housing allowance is based on the amount service members actually spend for housing in a geographic area. In contrast, the housing analyses determine private housing affordability on the basis of average rents charged in an area for housing of the size and quality that members are entitled to. Use of different measures of housing costs can cause allowances to spiral downward relative to actual housing costs in an area, making private housing less affordable. For various reasons, such as keeping housing costs to a minimum, members in a geographic area may choose less housing than they are entitled to. For example, a family entitled to four bedrooms if living on base may rent a three-bedroom apartment or a house in a less desirable neighborhood. 
These members report lower housing expenditures than they would if they obtained the size and quality of housing they are entitled to if they lived in on-base housing. Because allowances are based on expenditures, the result is that allowances for some areas are established at levels lower than they would be if they were based on an area’s average housing costs. Lower housing allowance amounts in subsequent years can result in families obtaining even less quality or quantity of housing, causing a downward spiral to develop. A key issue is that this problem results in some installations reporting larger housing deficits due to the unaffordability of private housing. Under the current system, the deficits can be used to justify the need to build new or renovate existing government housing. Although we did not perform a detailed analysis, it appears plausible that at some installations increased allowances would make more private housing affordable and be a more cost-effective alternative than continued use of government housing. Our point is that DOD normally does not consider options involving changes to housing allowances when considering solutions to housing problems at specific locations. DOD officials stated that a working group has been formed to study the housing allowance program to determine whether changes are needed. Any proposals for change would be submitted for congressional consideration during the fiscal year 1998 budget process. Whether through cash allowances or government housing, the nontaxable housing benefit provided by DOD’s housing program is a significant part of the military compensation package. However, when viewed from a compensation perspective, the program contains inequities for military members and their families. Specifically, the program allows significant differences in the value of the housing benefit that is provided to members of the same paygrade depending on whether they live in private or government housing. 
The differences exist both in the out-of-pocket amounts paid by military families for housing and in the housing costs paid by the government to provide housing benefits. About two thirds of all military families in the United States own or rent housing in the private sector. These families receive a housing allowance that covers about 80 percent of typical private housing costs. The families pay the remaining 20 percent of their housing costs out of pocket from other sources of income. The other one third of military families live in government housing. These families forfeit their housing allowances but pay no out-of-pocket costs for housing and utilities. The difference in out-of-pocket costs creates an equity issue because service members of the same paygrade that perform the same job for the military can have different amounts of disposable income depending on whether they live in government or private housing. For example, the out-of-pocket housing cost for a typical E-6 family that lives in private housing is about $2,050 for fiscal year 1996. In other words, an E-6 family in private housing will pay about $171 more each month for housing than is covered by the housing allowance. In contrast, another E-6 family that lives in government housing will not have to pay any out-of-pocket costs for housing or utilities. As a result, in comparison to an E-6 family living in private housing, an E-6 family in government housing will have $2,050 more each year to use for other purposes. As another example, an O-4 family that lives in private housing in fiscal year 1996 typically will spend $2,760 out of pocket for housing because the housing allowance covers only 80 percent of housing costs. However, another O-4 family that lives in government housing will not pay any out-of-pocket housing costs and could use this $2,760, or $230 each month, for other purposes. 
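The out-of-pocket figures above follow directly from the 80-percent coverage rate; a minimal sketch, working backward from the report's fiscal year 1996 E-6 figure (the implied total housing cost is an inference from those two numbers, not a figure stated in the report):

```python
def out_of_pocket(total_annual_housing_cost: float,
                  allowance_coverage: float = 0.80) -> float:
    """Annual out-of-pocket cost when the allowance covers a fixed share
    (about 80 percent, per the report) of typical housing costs."""
    return total_annual_housing_cost * (1 - allowance_coverage)

# Working backward from the report's E-6 figure of $2,050 out of pocket,
# total typical costs would be about 2,050 / 0.20 = $10,250 a year:
print(round(out_of_pocket(10_250)))       # 2050 a year out of pocket
print(round(out_of_pocket(10_250) / 12))  # about $171 a month
```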
The average out-of-pocket costs paid by service members living in private housing are determined through the process that establishes housing allowances. To set housing allowances, DOD conducts an annual survey of all service members that live in private housing. The survey obtains information on the actual amount each member pays for housing—the actual monthly rental cost or equivalent mortgage payment. DOD estimates total housing costs by adding a standard amount for utility costs to the reported monthly housing cost. In each geographic area, DOD averages total housing costs and sets allowances so that the median out-of-pocket costs for members in the same paygrade are the same in both high- and low-cost areas of the country. The out-of-pocket housing costs paid by service members in selected paygrades in fiscal year 1996 are shown in table 4.1. In a November 1994 draft report on family housing, DOD noted that the difference in out-of-pocket housing costs between families living in government housing and in private housing created a basic inequity that was recognized by all service members. However, DOD officials stated that the inequity did not appear to be a significant factor affecting morale. The officials stated that most members accepted the difference as a fact of military life. Another way to consider the equity question in the housing program is to examine the amount that the government pays to provide military families with housing. As discussed in chapter 2, the government spends from $3,200 to $5,500 more annually for a family that lives in government housing than it spends on allowances for a family that lives in private housing. This difference can create an equity issue because the government’s housing expenditure is significantly different for service members of the same paygrade depending on whether they live in government or private housing. 
For example, an E-3 family could be assigned to a government housing unit that costs the military over $100,000 to construct and over $8,000 annually to operate and maintain. At the same installation, another E-3 family could live in private housing. For this family, the military typically would pay about $5,680 in housing allowances annually. As another example, the Navy leases 300 family units from a private company at 1 complex in the Norfolk naval installation area. For each leased unit, the Navy spends about $14,600 annually in rent, utilities, and other related costs. Some E-4 families are assigned to these leased units. Because the complex normally is fully occupied, other E-4 families live in private housing in the surrounding community and receive $6,258 annually in housing allowances. In this situation, the military’s housing expenditure is about $8,300 more for one service member than another member, although both are the same paygrade and may even perform the same job for the Navy. Similarly, the Marine Corps leases 600 family units at the Twentynine Palms installation. Each unit costs about $14,600 annually, including utilities and related costs. Some E-5 families are assigned to these government quarters. Other E-5 families live in private housing and receive $5,380 annually in housing allowances. Because of differences in where they live, the military’s expenditure for housing is about $9,200 more for one service member than another in the same paygrade. It is not easy to find a cost-effective solution to the equity problem created by DOD’s housing program. Past DOD studies related to family housing issues, such as the Defense Science Board’s Quality of Life report and the Seventh Quadrennial Review of Military Compensation, generally accepted continuation of the current program. However, the reports recommended that the difference in housing costs be reduced somewhat by increasing allowances so that families pay only 15 percent out of pocket. 
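The expenditure gaps cited in the Norfolk and Twentynine Palms examples follow from simple subtraction; a quick check of the arithmetic (the inputs come from the report, which rounds the gaps to the nearest hundred):

```python
def expenditure_gap(government_cost, allowance):
    """Difference in annual military spending per family: government housing
    cost versus the housing allowance paid for private housing."""
    return government_cost - allowance

# Norfolk leased units ($14,600) vs. E-4 allowance ($6,258): about $8,300
print(expenditure_gap(14600, 6258))  # -> 8342

# Twentynine Palms leased units ($14,600) vs. E-5 allowance ($5,380): about $9,200
print(expenditure_gap(14600, 5380))  # -> 9220
```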
The Quality of Life report also made a long-term recommendation that potentially could eliminate the housing cost inequity. The report recommended establishing a military housing authority to build, maintain, and operate all military housing. Details on how the concept would be implemented or estimates of its cost were not provided in the report. However, under this concept, housing could be provided for all military families who would forfeit their allowances but pay no out-of-pocket costs. This approach would result in greater equity because no families would pay out-of-pocket costs. DOD officials stated that eliminating the out-of-pocket costs for all families probably would be unaffordable. One of several proposals for reducing military housing costs presented in the 1993 CBO report also could eliminate the housing cost inequity. Under the proposal, all military families would receive a housing allowance and families that lived in government housing would pay rent and the cost of their utilities. In addition to eliminating the financial incentive to seek government housing, CBO noted that this alternative could result in greater equity because all families would pay a portion of their housing costs out of pocket. CBO stated that the intent of the proposal was to not increase the out-of-pocket costs of military personnel. Thus, CBO suggested that any difference between DOD’s total rent and utility receipts and the cost of providing housing allowances to families living in government housing would be used to finance an increase in allowance levels for all military families. DOD officials stated that the proposal would reduce the quality of life of military families. DOD’s policy of relying first on private-sector housing to meet military family housing needs is cost-effective. 
Although there is some variation among the estimates, CBO's, DOD's, and our analyses all show that the cost to the government is significantly less when military families are paid a housing allowance and live in private housing than when they live in government housing. In addition to being cost-effective, there are other advantages to relying on private housing. In the current environment of constrained defense budgets, the short-term flexibility offered by housing allowances appears preferable to the long-term commitments required by military construction. For example, family housing requirements fluctuate as changes occur in the missions and in the number of personnel assigned to each installation. Generally, such changes in housing requirements can be accommodated more easily through the use of housing allowances than through the construction, operation, and maintenance of fixed inventories of government housing units. Further, housing allowances can offer service members a greater selection of housing options to fit their needs instead of limiting them to what is available in government housing. Although DOD's policy of relying first on private-sector housing to meet family housing needs is cost-effective, the military services have not taken full advantage of the significant savings available through greater use of private housing. This is because DOD and the services have (1) relied on housing requirements analyses that often underestimate the private sector's ability to meet family housing needs, (2) believed that the quality of life of military families is better served through use of government housing, (3) been reluctant to designate a greater portion of existing government housing for use by junior personnel, who are less able to afford private housing than senior personnel, and (4) used a housing allowance system that results in available private housing being considered unaffordable in some areas.
Many military installations in the United States have a justifiable need for government quarters. But steps are needed to ensure that government quarters are provided only at installations where the local communities cannot meet the housing needs of military families. In addition to formulating detailed goals and plans to achieve maximum use of private housing, such steps include standardizing and improving the housing requirements determination process, measuring members’ preferences for family housing, designating more government housing for junior personnel, and evaluating potential changes to the housing allowance system to foster greater reliance on private housing. In locations where the local communities can meet additional military family housing requirements, existing government housing units can be closed, rather than renovated or replaced, when the units reach the end of their economic life. The military family housing program also does not result in equitable treatment for all families. Two thirds of all military families in the United States live in private-sector housing and pay about 20 percent of their housing costs out of pocket. The remaining one third live in government housing and do not pay any out-of-pocket costs for housing. As a result of the differences in out-of-pocket costs, the families that live in private housing typically have less disposable income than families of service members of the same paygrade that live in government housing. Further, because of the difference in the government’s cost to provide government housing and to pay housing allowances, the military spends significantly more to house families in government quarters than it spends to house families of service members of the same paygrade in private housing. Changes are needed in the housing program to address the inequities in the housing benefits provided to service members and their families. 
Because two thirds of military families already pay out-of-pocket costs and to eliminate the financial incentive for members to seek government housing, we believe that all families should pay the same portion of their housing costs whether they live in government or private housing. Under this approach, families living in government housing could (1) receive their housing allowances and begin to pay fair market rent for the housing or (2) continue to forfeit their housing allowances and begin to pay an amount equal to the average out-of-pocket costs paid by families living in private housing. In either case, this approach would reduce the housing benefit for service members living in government housing because they would begin to pay a portion of their housing costs. However, to avoid reducing the total military family housing benefit, the total out-of-pocket amount paid by families living in government housing could be used to fund an offsetting increase in the housing allowance for all service members. In other words, housing allowances could be increased by an amount equal to the total out-of-pocket amount paid by families in government housing. In addition to helping the two thirds of military families that currently receive allowances, this would benefit those in government housing because an increase in the housing allowance would reduce average out-of-pocket costs for all members. We recommend that the Secretary of Defense take the following actions. Establish a long-term goal to reduce the use of government family housing in the United States to the minimum possible level. The goal should limit government housing to families assigned to locations where no adequate private housing alternative exists and to the small number of families that reside on base for military necessity. 
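A rough sketch of the offsetting-allowance idea described above. The family counts and the average out-of-pocket amount are hypothetical, since the report does not give these inputs:

```python
def offsetting_increase(gov_families, avg_out_of_pocket, all_families):
    """Spread the total amount collected from families in government housing
    across the housing allowances of all families."""
    total_collected = gov_families * avg_out_of_pocket
    return total_collected / all_families

# If one third of 600,000 families paid an average of $2,100 a year:
increase = offsetting_increase(200_000, 2100, 600_000)
print(increase)  # -> 700.0
```

Under these assumed inputs, a family in government housing would pay $2,100 but receive a $700 allowance increase, for a net cost of $1,400, while every family in private housing would simply gain $700.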
Revise the housing requirements process by issuing guidance to ensure that the process (1) matches military housing requirements with available private housing before matching the requirements with government housing and (2) considers suitable, affordable rental vacancies in excess of normal market levels to be available to the military. The revised guidance also should take into account the results of the DOD Inspector General’s review of the housing requirements process. Develop information to better quantify the relationship between quality of life and family housing. The information should reflect service members’ desires and preferences for private versus government housing under various circumstances, such as if housing allowances were increased, if rent or utilities were charged for government housing, and if no changes were made to the current program. Direct installation commanders to redesignate, to the maximum practical level, government housing reserved for senior personnel for use by junior personnel in areas where private housing is available and affordable for senior personnel but not for junior personnel. To reduce the potential impact on the families of senior personnel, this action could be accomplished over a phased period of time. Ensure that the DOD working group on housing allowances considers housing allowance changes that could result in greater flexibility in addressing housing problems and cost savings through greater reliance on private housing. For example, the group should consider whether (1) housing allowances should be based on average housing costs in an area, rather than actual member housing expenditures and (2) housing allowances could be used in new, innovative ways to solve specific housing problems more economically than constructing or renovating government housing. 
Develop plans to reduce the difference in the average amounts paid for housing by families of service members in the same paygrade by requiring families that live in government housing to pay a portion of their housing costs. These plans should include milestones for implementation. DOD partially concurred with our findings and recommendations (see app. II). At locations where adequate private housing is available, DOD stated that it will not support construction of new government housing and will carefully review proposals to replace deteriorated government housing. DOD also stated that it is pursuing initiatives to promote greater private investment in housing for military families and is studying potential changes to the housing allowance program that could result in correcting housing inequities and other problems. Further, DOD stated that it plans to revisit procedures for determining housing requirements and has chartered a study that will address the relationship between quality of life and family housing. With regard to our recommendation that the Secretary of Defense direct installation commanders to redesignate, to the maximum practical level, government housing reserved for senior personnel for use by junior personnel, DOD stated that current policy allows installation commanders to give priority to lower grades and that DOD will not superimpose an overall policy that would obstruct local retention objectives and operational effectiveness. Although we understand DOD's concern, we believe that it would be beneficial for DOD to remind installation commanders of the junior member housing problem and to encourage the commanders to consider the current policy when evaluating use of existing on-base housing. DOD did not agree with our draft recommendation to equalize the average amounts paid by service members living in private and government housing.
DOD stated that requiring the one third of military members that live in government housing to pay a portion of their housing costs would reduce their benefits, and as a result, could have severe consequences for military retention and readiness. DOD also stated that it would cost too much to equalize the average amounts paid for housing by eliminating the out-of-pocket costs for the two thirds of military members that live in private housing. We agree that the recommendation would have an adverse impact on the military families that live in government housing since they would begin paying a small portion of their housing and utility costs. However, allowing the housing compensation inequity to continue also has adverse impacts, such as increasing demand for government housing even in areas where private housing is available. Further, because out-of-pocket costs paid by families living in government housing could be used to fund offsetting increases in housing allowances, housing benefits for the two thirds of military families that live in private housing actually could increase. To clarify the intent of the recommendation and to ease its impact by allowing time for phased implementation, we changed the recommendation to say that DOD should reduce, rather than equalize, the difference in the average amounts paid for housing by requiring families that live in government housing to pay a portion of their housing costs.

Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) military family housing program to determine whether the program: (1) is cost-effective; and (2) provides equal housing benefits to all military families.
GAO found that: (1) DOD reliance on private-sector housing to meet military family housing needs is cost-effective; (2) the government's cost is significantly less when military families receive housing allowances to live in private housing; (3) the cost difference for each military family living in private housing ranges from $3,200 to $5,000 annually; (4) families living in private housing pay a portion of their housing costs and have a greater selection of housing options to meet their needs; (5) DOD is not maximizing its use of private housing due to its reliance on inaccurate housing requirements, concerns with military quality of life standards, reluctance to designate more government housing for use by junior personnel, and inaccurate categorization of affordable private housing; (6) the housing benefits afforded to service members within the same pay grade differ depending on whether the members live in government or private housing; (7) members living in private family housing have less disposable income than members in the same pay grade living in government family housing; (8) DOD has taken initiatives to increase housing allowances and encourage private family housing to reduce service members' reliance on government housing; and (9) DOD needs to take additional steps to ensure the maximum use of private housing and the equitable distribution of benefits among military families. |
DOD inventory control points are responsible for managing insurance items. We performed our review at the Aviation Supply Office (ASO), one of two Navy inventory control points, and the Defense Industrial Supply Center (DISC), one of six Defense Logistics Agency (DLA) inventory control points. As of March 1994, ASO managed insurance inventories valued at $193 million and, as of April 1994, DISC managed insurance inventories valued at $3 million. Spare parts and other supplies normally are designated as insurance items during the initial provisioning process. Initial provisioning is designed to provide parts until there is a requisitioning history from which relatively accurate forecasts of future demands can be made. Typically, these parts support a weapon system during the first 2 years of operation. At ASO, contractors or manufacturers recommend which parts should be stocked for insurance purposes, ASO reviews these recommendations, and the Naval Air Systems Command approves the recommendations if it agrees with the contractor and ASO. DISC classifies items on the basis of submissions by the using military service during the initial provisioning process. We analyzed ASO and DISC records to identify insurance items and determine if they were properly classified. We found that most of the items were not mission essential and, therefore, should not have been classified as insurance items. Table 1 summarizes the results of our analysis. Because only a small percentage of the insurance items were fully justified in the inventory control point records, we asked item managers to verify the classification of the insurance items. We randomly sampled 329 ASO items and 110 DISC items and sent questionnaires to item managers asking them to validate the records. According to the ASO item managers surveyed, 51 percent of the items were not mission essential. Table 2 summarizes the sample results.
We did not make a similar analysis for DISC because none of the item managers had responded to the questionnaire at the time our fieldwork was completed. Significant numbers of nonessential parts and supplies continue to be stocked as insurance items because ASO and DISC do not have the internal controls to periodically review insurance items to identify those that are unneeded because they do not meet essentiality criteria. As noted in tables 1 and 2, only 10.5 percent of ASO’s insurance items were mission essential according to ASO records and only 22 percent were mission essential according to item manager responses to our questionnaire. At DISC, 42.3 percent were mission essential according to its records. ASO assigns mission essentiality codes on the basis of reports from end users on how the failure of a part affects mission capability. These codes range from one where item failure results in minor mission impact to one where item failure results in loss of a primary mission capability. DISC assigns essentiality codes, called weapon system indicator codes, on the basis of data provided by the using military service. Neither ASO nor DISC systematically reviews insurance items to validate the essentiality codes. ASO does require an annual review to ensure that the data elements used to prevent automatic purchases of insurance items are correctly loaded in the computer. However, this review does not verify that insurance items are mission essential. DISC does not require a review of insurance item essentiality. The absence of essentiality reviews contributed significantly to the low percentage of mission essential items identified in our review. In addition to the 22 percent ASO item managers said were mission essential, they indicated that 51 percent of the insurance items were not mission essential and that they either could not or did not determine essentiality for the remaining 27 percent. The following examples illustrate the error conditions identified. 
ASO stocks three skin assembly units used on the AV-8B aircraft as insurance items. These units, which are valued at a total of $158,927, have a nonessential coding in ASO's records. In responding to our questionnaire, the item manager agreed with the coding in the record and indicated that the units were not mission essential. These assemblies have been in the Navy supply system since the weapon system was provisioned in 1986. In another case, ASO stocks 12 manual control levers used on the F/A-18 aircraft as insurance items. These levers, which are valued at a total of $997,020, have been in the supply system since 1983. Again, the item manager indicated that the lever, although categorized as an insurance item in the records, was not mission essential. Although DOD Material Management Regulation 4140.1-R, dated January 1993, states that only one replacement unit of an item may be stocked for insurance purposes, we found that ASO and DISC stocked many of the insurance items in quantities greater than one unit. This condition was true for both mission essential items and nonessential items. At ASO, 4,997, or 50 percent, of the 9,937 insurance items, valued at $126 million, were stocked in quantities greater than one unit. Of the 1,042 mission essential items included in these totals, 510 items had excessive quantities valued at $49 million. At DISC, 1,602, or 48 percent, of the 3,335 insurance items were stocked in quantities greater than one unit, including 784 of 1,410 mission essential items. The reasons for the excessive quantities are similar to the reasons that nonessential items are stocked as insurance items. That is, much of the excessive buildup occurred during the initial provisioning process. DOD downsizing and weapon system obsolescence and retirement also contributed to the stock buildup.
However, neither ASO nor DISC has established the internal controls to periodically review insurance items to ensure that quantities are kept at the allowable stock level of one unit. An additional factor contributing to the excessive quantities is the inventory control points’ stock retention policies. ASO and DISC have computer programs to identify and recommend excess stock for disposal. ASO programs search for stocks in excess of retention levels and are run for all stocked items, not just insurance items. However, irrespective of retention levels, the programs will not recommend disposal action on quantities that fall below a floor of five units at ASO. The DISC programs identify disposal prospects on a selective basis and have not been run for insurance items. The computer programs have not been effective in reducing excess insurance stocks at ASO for two major reasons. First, contrary to DOD regulations, ASO has established retention levels for many insurance items that exceed the allowed stockage quantity of one unit. Second, the requirement that any disposal recommendation leave an on-hand quantity of five units precludes reducing the stockage level to one unit. As a result, only 330 of the 4,997 insurance items that we found to be overstocked were identified as such by ASO’s computer program. The following examples illustrate the overstockage conditions identified. ASO stocks 20 aircraft seat structures used on the A-7 aircraft as insurance items. These structures, which are valued at a total of $2,559,586, have been in the supply system since 1979. In responding to our questionnaire, the item manager indicated that 14 of these units were removed from aircraft as a result of design changes and were unserviceable. The remaining six units were serviceable but exceeded the allowed insurance stock level of one unit. In another case, ASO stocks two electrical equipment racks used on the E-2C aircraft as insurance items. 
These racks, which are valued at a total of $687,480, exceed the allowed insurance stock level of one unit but will not be reviewed for potential disposal because the quantity falls below ASO's on-hand stockage floor of five units. The item manager agreed that the racks were in an excess position but would not recommend this item for disposal because of the on-hand stockage floor. In addition to unneeded procurement costs, DOD incurs large costs to manage and maintain excess inventories, particularly items with low demand or years of supply on hand. DOD expresses these holding costs as a percentage of the value of on-hand inventory. Holding costs include investment cost, or the cost of having funds tied up in inventory; storage costs; and obsolescence costs. The holding cost rate varies by inventory control point and averages 22 percent at ASO and 18 percent at DISC. In commenting on our draft report, DOD stated that the holding cost rates we used may be correct before a purchase decision is made, but once material is in inventory the risk of obsolescence is represented as a sunk cost and the opportunity to spend the funds on an alternative investment has been foregone. DOD also stated that the holding cost rates that should have been applied for material in stock are at least an order of magnitude less than the rates used in the report. DOD did not give an alternative percentage or amount and DOD's accounting systems are not designed to capture actual holding costs. In commenting on another report (GAO/NSIAD-94-110, June 29, 1994), DOD agreed that unnecessarily large inventories increase holding costs and acknowledged that holding cost rates that only cover storage costs may not be appropriate. For example, reducing inventories by quantities sufficient to close warehouses would result in savings that exceed storage costs.
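The screening behavior described above can be sketched as follows. The function and variable names are ours, but the one-unit authorization and the five-unit floor come from the report:

```python
AUTHORIZED_INSURANCE_QTY = 1
ON_HAND_FLOOR = 5  # ASO's disposal program will not flag items below this

def disposal_recommendation(on_hand, retention_level):
    """Quantity ASO's program would flag for disposal, honoring the floor."""
    excess = on_hand - max(retention_level, ON_HAND_FLOOR)
    return max(excess, 0)

# Two E-2C equipment racks: overstocked against the one-unit rule, but below
# the five-unit floor, so the program flags nothing.
print(disposal_recommendation(on_hand=2, retention_level=1))  # -> 0

# With both the retention level and the floor set to one unit, as we
# recommend, the excess surfaces:
def corrected_recommendation(on_hand):
    return max(on_hand - AUTHORIZED_INSURANCE_QTY, 0)

print(corrected_recommendation(2))  # -> 1
```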
While it is difficult to precisely determine the costs to manage and maintain nonessential and excessive insurance stocks, our review and DOD’s comments indicate that these costs would be millions of dollars a year. We recommend that the Secretary of Defense direct the Secretary of the Navy and the Director, Defense Logistics Agency, to (1) periodically review insurance items to ensure that they are mission essential and stocked in allowable quantities and (2) dispose of existing nonessential and excess insurance stock. We further recommend that the Secretary of the Navy direct the Commanding Officer, ASO, to set the retention level for insurance items at one unit and change the disposal computer program so that the on-hand stockage floor for these items also is one unit. DOD generally agreed with the thrust of our recommendations but did not agree with most of our report findings (see app. I). We have evaluated DOD’s comments and continue to believe that our basic position is sound; that is, the insurance inventories contain nonessential and excessive stocks. Our comments on some of DOD’s specific statements are at the end of appendix I. With regard to our recommendations, DOD stated that it would issue a memorandum by June 30, 1995, (1) reemphasizing the need to review insurance requirements prior to stock replenishment and (2) directing the disposal of nonessential stocks. DOD also stated that the Navy will direct ASO to reduce insurance stocks where the stockage is not in compliance with DOD regulations. The promised actions will be helpful, but they do not go far enough. Because insurance items are not expected to fail, most will not be reviewed if DOD only reviews those in need of stock replenishment. We believe that DOD should review all insurance items periodically to identify nonessential and excessive stocks. 
Over one half of the ASO insurance items have been in the supply system more than 10 years, and 87 percent have been in the supply system more than 5 years. Since then, requirements may have changed due to DOD downsizing and weapon system modification, obsolescence, or retirement. Unneeded insurance stocks tie up warehouse space and increase managerial burdens. To determine the adequacy of internal controls in the management of insurance items, we reviewed DOD, Navy, and DLA procedures; interviewed agency officials; and analyzed ASO and DISC computer files that contained insurance item data as of March and April 1994. ASO files included the master data file and disposal file. DISC files included the combined file (similar to a master data file) and contract file. By reviewing the files, we identified all insurance items managed by ASO and DISC. We then analyzed these items to determine which were classified as mission essential and which were stocked in quantities greater than one unit. We did not assess the reliability of these files. However, to validate insurance item data, we randomly sampled items that were not essential or exceeded authorized stock levels. The sample included 329 items from ASO files and 110 items from DISC files. We sent a questionnaire to the ASO and DISC item managers responsible for the sampled items. We asked the managers to validate and update the file information, provide opinions on the essentiality of the items and causes of excess stock buildups, and define the extent that excess stock was disposable. Using this data from the ASO managers, we projected the results to the universe from which the sample items were drawn at a 95-percent confidence interval. None of the DISC item managers had responded to the questionnaire at the time our fieldwork was completed. We performed our review between February and September 1994 in accordance with generally accepted government auditing standards. 
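The projection described above can be illustrated with a standard normal-approximation (Wald) interval for a sample proportion; the report does not detail GAO's actual estimation method, which may differ:

```python
import math

def wald_interval(successes, n, z=1.96):
    """95-percent normal-approximation confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Item managers said about 51 percent (roughly 168) of the 329 sampled ASO
# items were not mission essential:
low, high = wald_interval(168, 329)
print(f"{low:.1%} to {high:.1%}")  # roughly 46% to 56%
```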
The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of the report. A written statement also must be sent to the Senate and House Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of the report. We are sending copies of this report to the appropriate congressional committees; the Director, Office of Management and Budget; the Secretary of the Navy; and the Director, Defense Logistics Agency. Please contact me at (202) 512-5140 if you have any questions. The major contributors to this report are listed in appendix II. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated November 28, 1994. 1. The item mission essentiality codes we used in our analysis are assigned to items to indicate their level of impact on the mission of applicable equipment in the event stocks are depleted. The military essentiality codes DOD said we should have used are assigned to indicate the military importance of a part in relation to a higher component, equipment, or weapon. Both sets of codes should provide the same indication of mission essentiality and be based on input from technical personnel. We analyzed the item mission essentiality codes because the Aviation Supply Office’s (ASO) records showed these codes for 92 percent of the insurance items. We could not analyze the military essentiality codes because these codes were not shown on the records ASO provided us for over 99 percent of the insurance items. After receiving our draft report, DOD asked the Navy to determine the distribution of military essentiality codes. 
This analysis showed that 58 percent of the ASO insurance items were assigned a mission essential code, less than 1 percent were assigned a not mission essential code, and the remaining 41 percent were blank and not assigned a code. The Navy agreed that insurance items that are not coded as mission essential must be validated. 2. Although engineers may make essentiality determinations, we opted to send the questionnaire to the managers that have overall responsibility for the items. In making this decision, we consulted with ASO officials and asked them to review the questionnaire. We made their suggested changes and pretested the questionnaire with item managers before it was finalized. At no time in the process did ASO officials indicate that the questionnaire should be sent to engineers rather than item managers. Furthermore, we did not ask the item managers to refrain from consulting with engineers, equipment specialists, end users, or others with greater technical knowledge in preparing the responses. In fact, the responses indicated that such consultations did take place in some cases. 3. We did receive responses to our questionnaire. In July 1994 we asked the Defense Industrial Supply Center (DISC) to complete the questionnaire for 110 insurance items, but DISC did not respond to the request by the time our fieldwork was completed. However, in October 1994, after receiving our draft report, the Defense Logistics Agency (DLA) provided responses for 64 of the 110 items managed by DISC. The responses indicated that 14 percent of the insurance items were mission essential, 43 percent were not mission essential, and the item managers did not know if the items were mission essential for the remaining 43 percent. Also, the responses indicated that 57 percent of the insurance items were stocked in quantities that exceeded the authorized level of one unit. 4. 
At least two sections of the cited regulation state that one unit of an item may be stocked for insurance purposes. For example, page 3-3 states that essential items with no forecast of failure may be stocked as insurance items in quantities not to exceed one replacement unit. 5. We have modified the report to address DOD’s comments on holding costs. 6. At the completion of our fieldwork, we furnished ASO and DISC with written summaries of our findings and potential recommendations. We held an exit conference with ASO officials and gave them the opportunity to comment on the summary. We gave DISC officials the same opportunity, but they did not provide any comments. All of these actions were taken before the draft report was submitted to DOD for formal review and comment. In addition, prior to the ASO exit conference and the DISC exit conference offer, we had numerous discussions with ASO and DISC officials during the course of the review.

Edward Rotz, Regional Management Representative
David Pasquarello, Evaluator-in-Charge
James Kurtz, Evaluator
Wayne Turowski, Computer Specialist
GAO reviewed the Navy’s and the Defense Logistics Agency’s (DLA) management of their spare parts and supplies inventories, focusing on whether their insurance stocks are limited to: (1) mission-essential parts; and (2) one replacement unit as required by Department of Defense (DOD) regulations. GAO found that: (1) the Navy and DLA stock millions of dollars of unnecessary insurance items that are not mission-essential; (2) the Navy and DLA frequently exceed their authorized maximum stock levels, contrary to DOD regulations; (3) the Navy and DLA do not periodically review insurance items to ensure that they are mission-essential and stocked in appropriate quantities because they lack the internal controls necessary to prevent excessive stock buildup; (4) DOD downsizing, weapon system obsolescence and retirement, and stock retention policies have also contributed to excessive inventories; and (5) the excessive inventories cost DOD millions of dollars to procure, manage, and maintain.
Despite the increasing importance of a high school education, only an estimated two thirds of students graduate from high schools nationwide. Students in certain subgroups, such as the economically disadvantaged and certain racial and ethnic groups, have historically graduated from high school at substantially lower rates than their peers. Students who do not graduate from high school are at a serious disadvantage compared to their peers who do. They are much less likely to obtain good jobs or attend college. The NCLBA includes several requirements for states to improve school and student performance, including measuring high school graduation rates. NCLBA expanded the requirements of the Improving America’s Schools Act of 1994 (IASA) for states, school districts, and schools to demonstrate that their students are making adequate progress toward their state’s academic goals. IASA required testing in each of three grade spans to determine whether a school made adequate yearly progress (AYP). NCLBA requires, by the 2005-06 school year, that annual tests in math and reading be administered to students in grades 3 through 8 and once in high school; by 2007-08, students must also be tested in science. In order to make AYP, schools are to show that increasing numbers of students reach the proficient level on state tests and that every student is proficient by 2014. NCLBA also designated specific groups of students for particular focus. These four groups are students who (1) are economically disadvantaged, (2) represent major racial and ethnic groups, (3) have disabilities, and (4) are limited in English proficiency. For a school to make AYP, its student body as a whole and each of the student groups must, at a minimum, meet the state targets for testing proficiency. Under NCLBA, schools must also use at least one other academic indicator, in addition to annual tests, to measure AYP. High schools must use graduation rate as one of their other academic indicators. 
The law defines graduation rate as the percentage of students who graduate from secondary school with a regular diploma in the standard number of years. Education officials told us that the standard number of years is determined by a state and is generally based on the structure of the school. For example, a high school with grades 9 through 12 would have 4 as its standard number of years, while a school with grades 10 through 12 would have 3 as its standard number of years. NCLBA regulations specifically require a high school, in order to make AYP, to meet or exceed its other academic indicators, including what the state has set as the graduation rate for public high schools. NCLBA does not specify a minimum graduation rate that states must set. States have used a variety of methods to measure AYP on their graduation rate indicator. For example, states have set graduation rate targets or goals or have allowed schools to show progress toward a target or goal as a way for schools to meet the graduation rate indicator requirement. The law does not require states to increase their graduation rate over time. The law requires states to demonstrate that their definitions produce graduation rates that are valid and reliable. A valid rate would be one that measures what it intends to measure. A reliable rate is one which, with repeated data collections and calculations, produces the same result each time such collections and calculations are performed. A key aspect of the reliability of graduation rates is the quality of the data used to calculate them. The National Center for Education Statistics (NCES), Education’s chief statistical agency, has funded a document that describes the following dimensions for ensuring that data are of high quality:

Accuracy. The information must be correct and complete. Data entry procedures must be reliable to ensure that a report will have the same information regardless of who fills it out.

Security. The confidentiality of student and staff records must be ensured and data must be safe.

Utility. The data must provide the right information to answer the question asked.

Timeliness. Deadlines are discussed, and data are entered in a timely manner.

This document suggests that school staff members are responsible for entering data accurately and completely and maintaining data security. It provides ideas for assisting staff to accomplish these tasks, such as sharing best practices with a peer and implementing school-district policies on data security, such as changing passwords frequently. If schools receiving funding under Title I, Part A of the act do not make AYP—including meeting the state’s requirements for graduation rates—for 2 consecutive years or more, they are “identified for improvement.” They must take certain actions, such as offering parents an opportunity to transfer students to a school that had made AYP (school choice). If these schools continue not to make AYP, they must take additional actions, such as providing supplemental services to students—including transportation, tutoring, and training. States and school districts are required to provide funding for such actions up to a maximum specified in law. However, according to Education officials, most high schools do not receive Title I funding, and therefore, if these schools do not make AYP, they are not required to take improvement actions, such as offering school choice or supplemental services. Nevertheless, NCLBA requires each school district receiving Title I funds to prepare a report card that must contain graduation rates for high school students and is available to the public. Education has responsibility for general oversight of Title I of NCLBA. As part of its oversight effort, Education has implemented the Student Achievement and School Accountability Program for monitoring each state’s administration of Title I programs.
This monitoring effort was designed to provide regular and systematic reviews and evaluations of how states provide assistance in terms of funding, resources, and guidance to school districts to ensure that they administer and implement programs in accordance with the law. Monitoring is conducted on a 3-year cycle and addresses high school graduation rates among other requirements. Teams of federal officials visit state offices, interview state officials, and review documentation on how states comply with federal law and regulations. NCLBA also requires the Secretary of Education to report to the Congress annually regarding state progress in implementing various requirements, including the number of schools identified for improvement. Education has required states to report their graduation rates for the state as a whole and for designated student groups. All states submitted plans to Education as required under NCLBA, which were to include their definitions of graduation rates. By June 2003, Education reviewed and approved all state plans, including their definitions of graduation rates and their statements regarding how such rates were valid and reliable. Education provided many states with approval to use a definition of their choosing until they are able to develop ones that better meet the law’s requirements for defining and measuring graduation rates. Education has also reviewed and approved many amendments to plans submitted by states, including those that make changes to the state’s definition of its graduation rate. Additionally, NCES commissioned a task force to review issues about definitions, data, and implementation. In its report, the Task Force discussed the data challenges faced by states in calculating their graduation rates. Regarding data used to measure student performance generally, GAO and Education’s Inspector General have commented on the importance of data accuracy. 
To attempt to improve graduation rates in high schools or keep students from dropping out of school, Education, state governments, school districts, schools, and foundations have funded or implemented various interventions to address the educational needs of students. Such interventions are based on the idea that many factors influence a student’s decision to drop out of school, such as low grades, socio-economic challenges, and disciplinary problems. These factors may be evident as early as elementary school, and therefore some interventions are designed for these students. During the late 1980s and through the mid-1990s, Education supported dropout prevention programs across the country. In an attempt to determine which programs effectively reduced the dropout rate, Education conducted several evaluations of these programs. The largest of these was the evaluation of the second phase (1991 to 1996) of the School Dropout Demonstration Assistance Program. This evaluation looked at more than 20 dropout prevention programs, including school-within-a-school models, alternative middle and high schools, restructuring initiatives, tutoring programs, and GED programs. While two of these programs showed promise in reducing dropout rates—alternative high schools and middle schools—the major finding was that most programs did not reduce dropping out. In our 2002 report, we identified three intervention approaches to prevent students from dropping out of school:

Restructuring schools. This approach modifies a school or all schools in a district through such initiatives as curriculum reform or dividing schools into smaller, more individualized learning communities.

Providing supplemental services. This approach provides additional services such as tutoring or mentoring in language and math; interventions attempt to raise student academic achievement and self-esteem.

Creating alternative learning environments.
These interventions target at-risk students and attempt to create personalized learning environments, such as career academies that focus the entire school around a specific career theme. However, our 2002 report found that additional research was needed to document which interventions were particularly successful for certain groups of students. Education agreed that additional rigorous evidence is needed and that it would consider commissioning a systematic review of the literature. A majority of states used or planned to use a graduation rate definition based on the group of students entering high school who graduate on time, referred to as the cohort definition. Education has assisted states, approved their graduation rate definitions, and given some states more time to develop planned definitions intended to produce more precise results. However, states faced challenges in resolving common data issues and in providing information on how to modify definitions to better account for certain students, such as for those with disabilities. According to state plans, 12 states used a definition that followed a group of students over time from when they entered high school until they left— referred to as the cohort definition. An additional 18 states using other definitions planned to adopt the cohort definition no later than the 2007-08 school year. The cohort definition compares the number of 12th grade graduates with a standard diploma, with the number of students enrolled as 9th graders 4 years earlier, while also taking into account those who left the cohort, such as those who transferred in and out. A study commissioned by NCES found that a cohort definition designed to track individual students over time—from when they enter high school until they leave—could result in a more precise high school graduation rate than one calculated with other definitions. 
The data in figure 1 show a hypothetical high school class from the time students enrolled in 9th grade until they graduated with a standard diploma, including those who dropped out, transferred, received alternative degrees, continued in school, or took 5 years to graduate. If the school was in a state that used the cohort definition and considered 4 years to be on-time graduation, its graduation rate would be 60 percent. The 60 percent figure comes from using the number of students who started (100), the net number of transfers over the 4 years, and the number who graduate in 4 years (60). Figure 2 shows the formula of the cohort definition. The year students in the cohort graduate is denoted by “y,” while “T” signifies the net number of students who transfer in and out in any given year. The cohort definitions actually used by states may vary somewhat from the basic definition. For example, Kansas used dropout and transfer data in its definition. Additionally, some states track individual students, while others track groups of students based on the entering 9th grade cohort. According to state plans, 32 states used a definition of high school graduation rate, referred to as the departure classification definition, based primarily on the number of dropouts over a 4-year period and graduates. Essentially, this definition looks back from a 12th grade class at those who (1) graduated (regardless of when they started high school), (2) dropped out in 9th, 10th, 11th, and 12th grades (including those who enrolled in GED programs) and (3) did not graduate, but received some form of alternative completion certificate. So, using this definition, the data from the high school shown in figure 1 would result in a graduation rate of 65 percent. The 65 percent figure comes from using the number of students who graduated (65), the number who received an alternative certificate (5), and the number who dropped out (30), as shown in Figure 3. 
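The two worked examples above can be sketched in code. This is an illustrative sketch, not any state’s actual implementation: the function names are ours, and the net transfer count for the figure 1 class is assumed to be zero so that the 60 percent and 65 percent figures fall out of the arithmetic.

```python
def cohort_rate(on_time_grads, entering_cohort, net_transfers=0):
    """Cohort definition: on-time graduates as a percentage of the
    entering 9th-grade class, adjusted for net transfers in and out."""
    return 100 * on_time_grads / (entering_cohort + net_transfers)

def departure_rate(grads, alt_completers, dropouts):
    """Departure classification definition: all graduates as a percentage
    of graduates plus alternative completers plus dropouts, regardless of
    when the graduates entered high school."""
    return 100 * grads / (grads + alt_completers + dropouts)

# Figure 1's hypothetical class: 100 entering 9th graders, 60 on-time
# graduates, 5 fifth-year graduates (65 graduates in all), 5 alternative
# completion certificates, and 30 dropouts; net transfers assumed zero.
print(cohort_rate(60, 100))        # 60.0 percent
print(departure_rate(65, 5, 30))   # 65.0 percent
```

The 5-point gap comes entirely from the denominators: the cohort rate is anchored to the entering class, while the departure rate counts only the students whose departures were recorded.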
Unlike the cohort definition, this definition does not take into consideration the number of students entering high school 4 years earlier. As noted earlier, some of these states (13) planned to adopt the cohort definition by school year 2007-08. The departure classification definition includes students who drop out. Each of the “D” designations refers to the number of dropouts during one year. For example, “D_{y-2, g10}” stands for the number of students who dropped out of the 10th grade in year y-2. Prior to NCLBA, many states had been using a similar version of this formula, which NCES developed in collaboration with several states. However, earlier definitions used by states may have also included as graduates those who received GED certificates. Under NCLBA, Education required states to modify the formula so that GED recipients were not counted as graduates. Different data systems accommodated the use of different definitions. The departure classification definition allowed many states to continue using existing data systems, according to Education officials. Such systems generally collect aggregate data, rather than data at the student level. The cohort definition generally requires states to implement a state-level student tracking system, often with a mechanism that can uniquely identify each student. Such a system identifies students in the 9th grade and tracks them throughout high school, indicating whether they graduate, transfer, or drop out. This system also allows for students who transfer into a school to be placed in the proper cohort. The more specific information required by the cohort definition may result in the calculation of more precise graduation rates than those produced by the departure classification definition. Since the cohort definition follows students entering high school, either by individual students or groups of students, it can better be used to include only on-time graduates.
However, how it is implemented may affect the level of precision of the rate calculated. Tracking individual students may result in a more precise rate than tracking groups of students. In our analysis of one state’s school year 2002-03 data, we found that the variations in data collection and calculations between the two types of definitions produced different graduation rates. Our analysis showed that the departure classification definition produced a graduation rate that was 12 percent greater than when we used the cohort definition. Because the departure classification definition does not track the entering cohort, it does not account for students who were held back, and therefore differences may result. Our findings are consistent with observations made by other researchers that show differences in graduation rates based on the definition used. In addition, NCES plans to complete a study this year that examines high school graduation rate definitions and how rates differ depending on the definition used. According to state plans, the remaining eight states that did not use either a departure classification or cohort definition used a variety of other definitions. Five of these states plan to adopt cohort definitions no later than 2007-08. Figure 4 shows the definitions each state used as of April 2005 and planned to use by school year 2007-08.

[Figure 4: two U.S. maps of state graduation rate definitions. As of April 2005: cohort definition (12 states), departure classification definition (32 states), other definition (8 states). Planned by school year 2007-08: cohort definition (30 states), departure classification definition (19 states), other definition (3 states).]

Most states set graduation rate targets, and many allowed schools to show progress toward these targets as a way for schools to make AYP. NCLBA requires that states set a graduation rate indicator. Most states have set such rates to help determine which schools make AYP. Additionally, many states allow schools to make AYP even if their graduation rates are not as high as the state’s required rate, so long as the school shows progress toward the required rate. States’ graduation rate targets ranged from 50 percent in Nevada to 100 percent in South Carolina, with about half at 80 percent or greater, as shown in figure 5. Valid comparisons of graduation rate targets across states cannot be made, in part, because of differences in rates used. For example, Alabama and North Carolina both had targets of 90 percent graduation rates. However, Alabama arrived at its target by using a departure classification definition that accounted for dropouts, while North Carolina used a definition that did not account for dropouts. According to state plans, 36 states considered their schools as meeting their graduation rate requirements if the schools increased their graduation rates from the previous year, known as “showing progress.” In addition, two states allowed their schools to meet such requirements if they maintained the previous year’s rates. A majority of states that allowed progress as a way for schools to demonstrate they met state graduation rate requirements had set no minimum rate of progress. We found instances in which very little progress, less than 1 percent, enabled a school to meet such requirements. Table 1 shows the number of states that allow schools to show progress toward the state goals as a means of meeting state graduation rate requirements, for all states as of the time we completed our review.
By showing progress toward state graduation rate targets, schools can still make AYP even though they do not meet target rates. For example, our analysis of one state’s data from the 2002-03 school year showed that 46 out of 444 high schools made AYP by increasing their graduation rates toward the state graduation rate target of 66 percent rather than by meeting or exceeding this target. Specifically, these schools met or exceeded the state’s requirement for 1 percentage point of progress in increasing the graduation rate, even though the schools were below the 66 percent target. Another 232 schools made AYP for the year by meeting or exceeding the target of 66 percent. In addition, allowing schools to use progress as the NCLBA graduation rate indicator could result in schools making AYP annually, while not meeting state graduation rate targets for decades, if at all. For example, a hypothetical school with a graduation rate of 56 percent can meet the state high school graduation indicator by increasing its graduation rate by 0.1 percentage point each year. At this rate, the school would not make the state graduation rate target of 66 percent for 100 years. Education provided states assistance with their graduation rate definitions; however, Education’s guidance did not specify modifications available to account for certain types of students. To help states with their definitions, Education developed some guidance and provided support such as on-site peer reviews, conferences, and information posted on its Web site. Education also commissioned a task force that published a report identifying the advantages and disadvantages of different definitions. In addition, Education officials told us they granted states time to develop definitions that better met the law’s requirements for defining and measuring graduation rates. Education has provided information on how to account for students in special programs and students with disabilities to states that have requested it.
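The arithmetic behind the hypothetical school above (a 56 percent rate, minimum annual progress of 0.1 percentage point, and a 66 percent target) can be checked with a short sketch. The function name is ours; exact fractions are used to avoid floating-point drift with a gain as small as 0.1.

```python
import math
from fractions import Fraction

def years_to_target(current_rate, target_rate, annual_gain):
    """Years of minimum required 'progress' before a school's graduation
    rate first meets the state target."""
    gap = Fraction(str(target_rate)) - Fraction(str(current_rate))
    return math.ceil(gap / Fraction(str(annual_gain)))

# The report's hypothetical school: 56 percent rate, 66 percent state
# target, 0.1 percentage point of required progress per year.
print(years_to_target(56, 66, 0.1))  # 100 years
```

A school meeting only the minimum can thus make AYP every year for a century without ever reaching the state target.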
Education’s approach has been to provide such information on a case-by-case basis rather than to all states. Education officials stated that they preferred to work with each state’s specific circumstances. However, we found that issues raised, such as students enrolled in 5-year programs, were common to many states. States varied in how they included students enrolled in these programs in their graduation rate definitions. For example, one state counted graduates of 5-year programs as dropouts until it received approval to count them as graduates. Another state planned to count such students as graduates without requesting approval to do so. Officials in that state said that since it was unclear what the actual requirements for counting graduates were, they were doing what they believed was allowable under the law. Without guidance on how to account for students in special programs and students with disabilities, there is less consistency among states in how students in these programs are included in graduation rates. Education also has not provided information to all states on how their definitions can be modified to better accommodate students with disabilities. State plans in 16 of the 52 states indicated that Education approved these states to allow students with disabilities more than the standard number of years to graduate based on the number of years in their Individualized Education Plans. In the 20 states we contacted, we found that they varied in whether they sought approval from Education on how to include students with disabilities in their graduation rate definitions. For example, six of the states we contacted had sought approval from Education to include students with disabilities who need more than the standard number of years to graduate in their graduation rate definitions. In contrast, officials in seven other states contacted told us they did not seek approval for the same issue.
Officials in the remaining seven states provided no information on this topic or said it did not apply to them. State, school district, and school officials and experts we interviewed reported several factors that affect the accuracy of data used to calculate graduation rates, especially student mobility. While Education has taken steps to assist states and districts in improving the quality of their data, the Department has not reviewed the accuracy of all states’ data, because, at the time of our review, many states were in the process of implementing new definitions, data collection strategies, or both. Officials in six schools, three school districts, and three states we visited and several experts we interviewed cited challenges in tracking student mobility, the key factor in calculating accurate high school graduation rates. Some inaccuracies may lead to the reporting of lower graduation rates, such as recording all students with “unknown” status as dropouts or counting students who drop out, return to school, and then drop out again as a dropout each time, as may happen in schools in states that use the departure classification definition. Other inaccuracies may lead to the reporting of higher graduation rates, such as schools’ recording students who drop out as transfers. This may occur when school staff record such students as transfers before they receive documentation that the student actually enrolled in a different school. Since the number of dropouts counts against a school in calculating its graduation rate in many states, schools that record such students as transfers—because they were unaware that the students had actually dropped out—may be reporting inflated graduation rates. A second factor that affects data accuracy is how staff members understand and follow policies and procedures for recording students as transfers to other schools. 
For example, staff members in schools in two states reported that they electronically record a student as having transferred to another school on the day that student withdraws from their schools. However, the policy in these states is that a student is to be recorded as having transferred only upon receiving a request for records from the school to which the student transfers. In one of these schools, staff assigned to record student data reported contradictory practices and beliefs about state policy regarding when to record a student as a transfer. One staff member stated that the policy and her practice was to record the student as a transfer upon receiving the records request, while another staff member said that no such policy existed and that she recorded the student as a transfer on the day of withdrawal. Therefore, how a student transferring out of the school was counted depended on which staff member recorded the student’s data. The accuracy of data may be further compromised when schools have large numbers of students who transfer in a given year because the more students come and go, the more difficult it is for schools to accurately account for them. Some schools are in areas where families tend to move more frequently. For example, officials in one school we visited near an Army base reported that their school had an enrollment of about 1,200 students and that 187 students had left the school by December of the academic year. The status of 19 of those 187 students was recorded as “unknown” because of difficulty in maintaining contact with these families. The policy in that state was for students whose status is “unknown” (because they could not be contacted) to be counted as dropouts, even if, in fact, the student had transferred to another school. Staff in another school reported the presence of several children from another country.
Their experience has been that these particular students report plans to return to their country of origin, but they often do not know the status of these students once they leave the school. The school’s procedure is to record such students as having an “unknown” status, and these are eventually counted as dropouts, unless another school requests their records. Research has shown higher mobility rates among certain subgroups of students compared to all other students, including those who are African-American, Hispanic, Native American, and those classified as having limited English proficiency and as children from migrant families. Consequently, schools with higher concentrations of these subgroups would likely report less accurate graduation rates. Another factor affecting the accuracy of graduation rate data is the absence of state audits or verification checks. For example, in our survey of state officials, over half (27) reported that their states did not audit the data received from local officials that the state used to calculate high school graduation rates. The lack of such auditing or verification implies that states were likely to be unaware of the extent of certain errors in data—such as students’ indicating they were transferring to another school but not actually doing so—and consequently were unable to ensure that data they received from schools and districts were accurate. Officials in only one of the six schools we visited reported that their data on student transfers had been audited or verified by an outside party, leaving the accuracy of transfer data in the other schools uncertain. A fourth factor that contributes to challenges in assuring accurate data is the lack of a unique identifier for each student. In our survey, officials in 22 states reported that their state did not have a unique identifier for each of their students. 
Concerns about using student identifiers include the cost of implementing data systems that support such identifiers and privacy issues. The lack of a unique identifier for students made it difficult to obtain accurate data. Officials in one state that did not use unique identifiers stated that they had to compute graduation rates based on aggregating student data and as a result, they could not track on-time graduates. Officials in another state estimated that they were only 90 percent accurate in identifying students, because, without a unique identifier for each student, they had to use other information. Using this information, such as the student’s name or birth date, can lead to identifying more than one student with the same characteristics, resulting in inaccurate data used in calculating graduation rates. A fifth factor we found that may affect data accuracy is variation in security and accountability practices. For example, we found that while some schools restricted the ability to change student enrollment information (such as transfers) to one or two people in the building (e.g., a registrar), others allowed many staff members to do so. Further, while some schools’ data systems kept a record of each person who accessed a student’s record and the changes made, other systems did not maintain such information. Without sufficient security and record monitoring, there is a greater risk of inaccurate data being entered and used to calculate graduation rates. We analyzed data from one state to estimate the effect of errors of various sizes in reporting dropouts on school graduation rates and found that such errors could raise or lower a school’s graduation rate substantially. This state used a high school graduation definition that incorporated the number of graduates and dropouts in calculating its graduation rate. For example, its median high school in school year 2002-03, with 924 students, reported 41 dropouts and had a graduation rate of 75 percent. 
We re-estimated its graduation rate after assuming that the school had more dropouts, up to twice as many as reported. In this case, if the school had 82 dropouts, its graduation rate fell to 64 percent. We also re-estimated its graduation rate after assuming that it had fewer dropouts, as few as half as many as reported. Thus, if it had 21 dropouts, its graduation rate rose to 88 percent. Figure 6 shows how the estimates of graduation rates were affected by assumed errors in counting dropouts for this school. Our analysis was performed for all high schools in the state. As expected, when we assumed the number of dropouts was higher than what schools reported, their estimated graduation rates decreased. Our analysis also found that the extent to which schools miscount their dropouts affects their likelihood of reaching the state’s graduation rate target. We estimated that an additional 70 of 444 high schools in the state in school year 2002-03 would not have reached the state target if they were in fact reporting only half of their dropouts. On the other hand, an additional 77 high schools would have reached the state target if their dropout counts were in fact overreported at twice the actual level. According to the NCLBA, high schools that do not meet the state’s requirements for its graduation rate are designated as not making AYP. Such designations, if made for 2 or more consecutive years, would result in the district providing technical assistance to schools that receive Title I funding. Thus, schools that undercount their dropouts may be precluded from receiving the attention and assistance from the state they need to improve students’ school retention and graduation, while those with overcounts may receive such services unnecessarily. Education has taken steps to help states address data collection issues. First, Education helped states prepare information to demonstrate that their graduation rate definitions were valid and reliable.
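The miscounting analysis above turns on how a graduation rate responds to errors in the dropout count. The sketch below assumes a simple leaver-style rate (graduates divided by graduates plus dropouts); the state’s actual formula is not specified in this report, and the graduate count of 123 is a hypothetical figure chosen only so that 41 reported dropouts yield a 75 percent rate, so the re-estimated figures are illustrative rather than a reproduction of the report’s numbers.

```python
def leaver_rate(graduates, dropouts):
    """One common leaver-style graduation rate: graduates as a share of
    all students who left school, either by graduating or dropping out.
    (Assumed definition; the state's actual formula is not given.)"""
    return graduates / (graduates + dropouts)

def rate_under_dropout_error(graduates, reported_dropouts, error_factor):
    """Re-estimate the rate assuming the true dropout count is
    error_factor times the reported count: 2.0 means half of the
    dropouts went unreported, 0.5 means dropouts were overreported
    at twice the actual level."""
    return leaver_rate(graduates, reported_dropouts * error_factor)

# Hypothetical school: 123 graduates and 41 reported dropouts,
# giving a reported rate of 75 percent.
reported = leaver_rate(123, 41)
undercount = rate_under_dropout_error(123, 41, 2.0)  # rate falls
overcount = rate_under_dropout_error(123, 41, 0.5)   # rate rises

# Whether the school clears a hypothetical state target of 70 percent
# flips with the assumed error -- the report's point about AYP
# designations resting on accurate dropout counts.
target = 0.70
print(f"reported: {reported:.0%}, meets target: {reported >= target}")
print(f"dropouts doubled: {undercount:.0%}, meets target: {undercount >= target}")
print(f"dropouts halved: {overcount:.0%}, meets target: {overcount >= target}")
```

Because the rate is a ratio in which dropouts appear only in the denominator, undercounting dropouts always inflates the rate and overcounting always deflates it, which is why the direction of each error determines whether a school wrongly appears to meet or miss the target.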
Education gave instructions in its regulations and in a template given to each state to help states prepare the accountability plans they were to submit to Education for approval in 2003. Education also worked with states on an as-needed basis when state officials had questions about what information the Department needed to review. Education officials indicated that they reviewed information in each state’s plan when they conducted site visits to states as part of the state plan approval process. According to Education, most states were in some stage of transition in calculating their graduation rates: some were implementing plans to transition from their current definition to a cohort indicator; others were improving their data systems; and some were collecting information on designated student groups for the first time. For these states, Education reported that it was unable to meaningfully examine the reliability of data used to calculate the graduation rate because the definitions of such rates had not been in place long enough to determine whether they would produce consistent results. Second, Education, as part of its state monitoring, introduced a data review component to examine data states used for graduation rates, among other aspects of their participation in the Title I program. As of August 2005, Education had monitored and reported on 29 states, and expected to monitor the remaining states by the end of fiscal year 2006 as part of its 3-year monitoring plan. This monitoring consisted of broad questions intended to collect information about how states corrected or addressed errors in student data received from districts and schools, including data used to calculate graduation rates.
The monitoring was also designed to identify written procedures states used to confirm the accuracy of their data, the extent to which these procedures were communicated to districts, and how data validity issues related to schools and districts have been addressed. According to Education officials, their reviews of the nine states identified no significant problems with data systems these states used to calculate high school graduation rates. Third, in response to recommendations from GAO and Education’s Inspector General, Education contracted with a firm to develop a guide to help states improve data collection processes. According to Education officials, this guide is to consist of three parts. One part is designed for state officials and is to focus on the design and implementation of data systems. A second part, which focuses on data management issues such as methods for verifying the accuracy of data, is designed for district and school officials. A third part summarizes the first two parts and is to be suitable for oral presentation to state, district, and school officials. According to department officials, this guide will be issued by the end of 2005. Although Education monitors states to determine if they have written procedures for ensuring data quality and have methods to address data quality issues, it does not evaluate other methods of ensuring data accuracy. For example, it does not assess whether states ensure that districts and schools have effective controls to accurately record student status, including transfers. Further, Education’s monitoring approach does not capture whether states ensure that schools have computer controls that allow only authorized staff to make changes to student data. Department officials said that the guide it is developing is planned to address these issues. However, departmental efforts have not resolved immediate data accuracy problems. 
In July 2005, Education announced that it planned to calculate and report interim graduation rate estimates for each state to provide a nationwide, comprehensive perspective. Education stated that the interim rate it developed, based on data NCES collects from states, will provide more accurate on-time graduation rates. Some states calculate their graduation rates from the same data they report to NCES, while other states rely on different data, though they too provide the data NCES requests. The quality of the data states provide to NCES varies across states depending, in part, on the extensiveness and rigor of their internal controls and other data verification checks. Because Education plans to rely on state-reported data to calculate interim graduation rates, the accuracy of such data is critical. While states and school districts have implemented numerous interventions designed to increase high school graduation rates, few of these programs have been rigorously evaluated, and Education has done little to evaluate and disseminate existing research. Several of the interventions that have been rigorously evaluated have shown potential to increase graduation rates. In addition to these interventions, schools are trying other approaches to enhance students’ chances of success, though the effectiveness of these approaches has not been demonstrated. About one-third of students who enter high school do not graduate; compared with those who do, they are likely to earn less money, to be unemployed more frequently, and to receive public assistance more often. In response, some schools and districts have implemented programs to address the factors that influence a student’s decision not to complete high school. Research has shown that a student’s decision to leave school may be affected by experiences that begin as early as elementary school.
For example, studies have shown that students who are not at least moderately skilled at reading by the end of 3rd grade are less likely to graduate from high school. Besides basic literacy skills, a variety of other academic and family-related factors contribute to whether a student graduates. For example, poor grades and attendance, school disciplinary problems, and failure to advance to the next grade can all gradually lead to disengagement from school and result in a student not finishing high school. In addition to these academic factors, students from low-income backgrounds, students with low self-esteem, and students with a learning or behavioral disability drop out at a much higher rate than other students. Schools and districts have implemented a range of interventions to address these factors, and the interventions vary in scope from redesigning the structure of an entire school to an individual school’s mentoring program. While there is variability among interventions, most generally fall into one of three categories that we identified in our 2002 report: (1) schoolwide restructuring efforts; (2) alternative forms of education for students who do not do well in a regular classroom; and (3) supplemental services, such as mentoring or tutoring, for at-risk students. While most of the schools we visited used interventions from only one of the three categories identified above, some schools combined aspects of these categories. (See table 2 for a complete list.) Several of the programs at schools we visited have conducted evaluations of how they affect high school completion, while others are reporting positive results on other outcomes, such as attendance or academic performance. We identified and reviewed five intervention evaluations that used a rigorous research design and have shown potential to increase graduation rates. We visited schools that had implemented three of these programs.
In addition, we visited other schools that were trying interventions that experts and Education officials noted were promising for improving high school graduation rates. While the effectiveness of these approaches in increasing graduation rates had not been demonstrated, research points toward the possibility that these interventions may help increase high school completion. The evaluations of the three programs we visited that displayed positive results all used rigorous research designs. However, these evaluations are not as strong as they need to be for results to be conclusive. For example, design limitations or data collection concerns were evident during our review of these evaluations. It is worth keeping in mind that research of this nature is limited in the education field due to a variety of factors, and these studies represent some of the most promising research on graduation rate interventions available. In our visits to 16 school programs in 6 states, we observed 3 interventions where research has indicated potential for improving high school graduation rates. These interventions addressed a variety of student risk factors and provided services to students in elementary through high school. One school we visited in Minneapolis, Minnesota, had implemented the Check and Connect program, which provides mentoring services in an alternative-learning environment. The program began in 1990 with a model developed for urban middle school students with learning and behavioral challenges. It has since been expanded to serve additional at-risk populations as well. This intervention is designed around a mentor who acts as both an advocate and service coordinator for students who have been referred into the program due to excessive absences combined with poor academic performance and behavioral problems.
Program officials noted that the mentors offer around-the-clock services, including monitoring school performance, regularly checking student data (attendance, grades, and suspensions), and identifying and addressing out-of-school issues. The mentor also regularly communicates with the student’s parents or relatives to ensure that the whole family is engaged in the student’s education. The mentoring is built into a program model that relies on several interrelated features, including relationship building, individualized and timely intervention, and long-term commitment. A complete listing of program features can be seen in table 3. The school we visited in Minneapolis had 220 students in the program during the 2004-05 school year. Program officials noted that students in the program were divided among four mentors and had two separate classrooms they could use to meet with their mentor or to study between classes. The program had no set schedule for the student; it was the responsibility of the mentor to follow up with the students, parents, teachers, courts, or counselors on a regular basis. A student in the program noted that Check and Connect helps because it “provides someone who cares how you do and keeps after you about coming to school and doing well academically.” A school official remarked that both attendance and retention rates had improved significantly since the program was implemented. An evaluation of program impacts on students with emotional and behavioral disabilities showed that students participating in Check and Connect were more likely than nonparticipating students to have either completed high school, including GED certification, or be enrolled in an educational program.
While graduation rates are not yet available for the first Check and Connect cohort at the school we visited, a teacher at the school commented that the staff knows the program is working “because the students are coming to class every day.” School officials noted that the program is funded through a renewable grant from a private foundation. Another program we visited, Project GRAD (Graduation Really Achieves Dreams), seeks to ensure a quality public education for students in economically disadvantaged communities through school restructuring, curriculum reform, and social services. The goal of the program is to increase high school graduation rates in Project GRAD schools to at least 80 percent, with 50 percent of those students entering and completing college. Originally established in 1989 as a scholarship program, it has since developed into a replicable and comprehensive K-12 school reform model. The reform design relies on two components: a structural model and an instructional model. Structural components include an independent local organization to provide implementation oversight, and community involvement such as mentoring, tutoring, and financial support. Figure 7 shows Project GRAD’s structural components. Local Project GRAD sites, such as one located in Atlanta, also used the instructional component of the model, which emphasizes specific reading and math programs for students in kindergarten through 8th grade. Program officials commented that this component also incorporates campus-based social services (which focus on dropout prevention as well as family case management), classroom management techniques, and college scholarships for all high school students who qualify. In 2004, the local Atlanta site served 29 schools and approximately 17,000 students in the inner city.
Officials at one of Atlanta’s schools noted that the program provided additional outreach staff to advocate on behalf of students and address other issues that may interfere with a student’s ability to attend school and learn. Students at the school, commenting on the program’s effect on their lives, noted that the program should be expanded to all of the schools in the district because of the opportunities it offers students. Project GRAD-Atlanta officials noted that the effectiveness of the program has been demonstrated through higher test scores and increased college attendance since implementing Project GRAD in these schools. Additionally, the results of an independent evaluation of Project GRAD also suggest an increase in students’ test scores and graduation rates. However, aspects of the study’s design may limit the strength of its findings. The Project GRAD-Atlanta model relies on a mix of public funding and private local fundraising. As of school year 2003-04, Project GRAD had also been replicated in feeder systems in Akron, Ohio; Brownsville, Tex.; Cincinnati, Ohio; Columbus, Ohio; Houston, Tex.; Kenai Peninsula, Alaska; Knoxville, Tenn.; Lorain, Ohio; Los Angeles, Calif.; Newark, N.J.; and Roosevelt, N.Y. We also visited a school that had implemented the language arts component of the HOSTS program, an intervention focused on literacy, an area that research has linked to students’ graduating. This program is a structured tutoring program in reading and language arts that targets low-performing elementary students whose reading skills are below grade level. School officials at the elementary school we visited noted that they had been using the program for 7 years to increase at-risk students’ reading scores as well as raise their self-esteem. The 90 students in the program worked individually with a tutor 4 days a week for 30 minutes each day.
School officials considered the program a success because of the number of students who successfully transitioned into grade-level reading in the regular classroom. The program, which has been replicated in schools or districts in 12 states, was cited in the report language of the NCLBA as a scientifically based intervention that has assisted schools in improving student achievement. A recent study of the program in nine Michigan elementary schools suggests reading improvement for students at schools participating in HOSTS programs. While this study displayed some promising results for elementary literacy, students were not tracked over time to determine the program’s effect on high school graduation rates. Two recently completed rigorous program evaluations also displayed promising results for increasing graduation rates. These two programs, the Talent Development Model and First Things First, are both comprehensive school reform initiatives with numerous components. The Talent Development program in Philadelphia, Pennsylvania, is designed to improve large urban high schools that face serious problems with attendance, discipline, achievement scores, and graduation rates. The program has been implemented in 20 districts nationwide and consists of several different components, including a separate career academy for all 9th graders, career academies for students in 10th through 12th grades, block scheduling (4 courses a semester, each 80-90 minutes long), and an after-hours program for students with attendance or behavioral problems. An evaluation of the first five schools in Philadelphia to implement the Talent Development program suggests that it may have contributed to increasing the graduation rate for two high schools compared with other high schools in the district that did not implement the program. The First Things First program was first launched in Kansas City, Kansas, and has since been tested in 12 middle schools and high schools in four additional districts.
The program has three central components: small learning communities of up to 350 students, a family advocate system that pairs students with a staff member who monitors their progress, and instructional improvement that aims to make lessons more rigorous and better aligned with state and local standards. A recent evaluation in Kansas City schools suggests that students in the four high schools with First Things First had increased reading and math scores, improved attendance, lowered dropout rates, and increased graduation rates compared with schools that did not participate in the program. For middle schools in Kansas City, the study found increased reading and math scores and somewhat improved attendance compared with other schools. However, the research did not show significant differences between First Things First schools and other schools in two other school districts. In addition to the 3 school programs we visited whose rigorous evaluations displayed potential for increasing graduation rates, we also visited 13 other school programs that experts, Education officials, and evaluations noted were promising. While the effectiveness of these approaches has not been demonstrated, research points toward the possibility that these interventions may help increase high school completion. These other school programs generally focused on one specific approach, which typically fell into one of three categories: school restructuring, alternative learning environments, and supplemental services. Selected programs that illustrate these approaches are discussed below. Schools and districts used schoolwide restructuring to change a school or all schools in the district to provide a more personalized education and increase graduation rates. Schoolwide restructuring efforts are generally implemented in schools or districts that have a history of high dropout rates. One restructuring approach is to create many small schools from larger low-performing schools.
For example, the New Century High Schools Consortium for New York City is a small schools initiative of the New York City public schools, funded by the Bill and Melinda Gates Foundation, the Carnegie Corporation of New York, and the Open Society Institute. School officials commented that the project began in the Bronx with the conversion of six low-performing high schools that served between 1,500 and 3,000 students each. The intervention began in 2001, and, as of September 2004, New York City had created 77 small schools. One of those schools, Morris High School, has been part of the program since it began in 2001. School officials noted that the school has been divided into several small schools, including the Bronx International High School and the Bronx Leadership Academy, which serve 300 and 252 students, respectively. While housed in the same building, each school has a different curriculum and student population. For example, the Bronx International High School provides an intensive English language program for recent immigrants, while the Bronx Leadership Academy offers a science-based curriculum for college-bound students. The core concepts for both these programs are the small school size, a team approach to teaching, and school-based learning that also has relevance within the community. A student at the school noted that the small groups they work in allow students to help and support each other, something that did not happen in junior high school. School officials commented that teacher investment in the school is expected and is often displayed by working overtime, serving as counselors to students, and participating in school governance. Additionally, the project-based curriculum is developed by teacher teams who work collaboratively to plan activities for incoming students.
School officials did not indicate a plan for a formal outcome-based evaluation of the schools; however, they considered the intervention a success based on positive improvement in a number of areas, including higher percentages of students meeting state standards, higher attendance rates, and higher passing grades. The New York City Department of Education reported similar results for small schools throughout the district, including more students advancing from 9th to 10th grade and higher attendance rates. While these results provide a snapshot of some possible benefits of New York’s school reform initiative, it is still too early to assess student outcomes. The Gates Foundation has commissioned an 8-year evaluation of the small schools program. States and school districts are also using alternative learning environments for students at risk of school failure. These interventions are designed to foster a supportive school environment through small enrollments, one-on-one interaction, flexible schedules and structures, and a curriculum that appeals to students’ interests. Often, enrollment is limited and the programs are tailored to individual students’ needs to ensure that they graduate. One type of alternative learning environment, the career academy, aims to keep students in school by providing an engaging curriculum built around a specific career theme. For example, Aviation High School in Washington State is an aviation-themed public high school housed at a local community college. School officials noted that the school addresses a range of student risk factors, including those related to academics (learning and literacy), social issues (attendance and behavior), and family (counseling and strategies for living with drug-addicted family members). With a 2004 enrollment of only 103 students, Aviation High School offers small class sizes, an aviation-themed curriculum, and mentoring opportunities.
(See figure 8 for an example of a school event focused on aviation.) Additionally, school officials report that each teacher at the high school serves as a student advisor who assists students with academic, social, and emotional development. Students noted that while transportation to the school was challenging due to its distance from their homes, they still selected the program because of the aviation curriculum, the personalized attention they received, and the highly motivated students at the school. Aviation High School officials indicated that it is too soon to tell the impact of the program, but they noted that the school will be included in a national evaluation to be conducted by the Gates Foundation. Research on career academies has demonstrated positive gains in employment and earnings for graduates, but has also found that high school completion rates of career academy and non-career-academy students were not significantly different. Alternative learning environments may also allow students to tailor their learning experience to individual needs that are not being met in traditional schools. For example, we visited an alternative high school in Atlanta, Georgia, that uses a computer-based instructional program designed for students to learn the state-certified curriculum at their own pace. Students rotate through classrooms, each of which contains a different computer module for the particular subject being taught. Students receive assistance from teachers as needed. According to officials, the school is made up of a team of 6 teachers and 75 at-risk 11th and 12th grade students (for the 2004-05 school year). The school’s enrollment is composed of students who were referred to the school through other schools, court tribunals, or parents. School officials noted that the program also includes a motivational component.
For example, each school morning begins with an assembly where students discuss the obstacles they have had to overcome and the people who have helped make a difference in the world. After the assembly, students get up and shake hands with each other and then move to their first-hour class. School personnel stated that this allows students to begin each day with confidence and prepares them to learn. School officials noted that the school’s graduation rate, which they stated was consistently over 90 percent, indicated that the program was effective. Research on alternative programs in general has shown some promising outcomes. For example, an evaluation of 8 middle school dropout prevention programs showed some positive impacts on dropout rates, grade promotion, grades, and test scores for students in alternative programs. The same study also looked at five alternative high school programs and found limited evidence that these programs reduced dropout rates, but it did note that alternative programs oriented toward GED certificates were more effective than those oriented toward high school diplomas. Several schools we visited used targeted supplemental services to provide at-risk students with extra help. These services aim to improve students’ academic performance, acclimate them to a new culture, or increase their self-esteem. Supplemental service programs are offered at all grade levels, with research showing the importance of building academic and social skills at an early age. Supplemental services can focus on the needs of a specific group of students, such as immigrant students or students with limited English proficiency. One such intervention we visited in Georgia was designed to provide educational and cultural services to immigrant students with low-level English skills and limited formal schooling.
These interventions, often referred to as “newcomer” models, provide intensive language development courses and may also offer a cultural orientation component. Newcomer programs can take place within a school or at a separate site and vary in the amount of time a student is enrolled. The benefits of the newcomer program are supported by research on English language learners, which notes that understanding and mastery of the English language is one major factor that decreases the risk of dropping out of school. At the program we visited, international students who were new to the district were registered, tested, and placed depending on their skill level. Students with no English language skills were placed in an intensive 3- to 6-week English program that helped ease the transition into school. Students who were 14 years or older and had fewer than 7 years of formal schooling in their native country were placed in the English for Speakers of Other Languages (ESOL) lab program. School officials noted that the lab served 132 students in school year 2004-05 and is designed to help students achieve grade-level proficiency within 3 years. The ESOL lab focused on listening, speaking, reading, and writing English in addition to other core high school courses such as math, science, and social studies. Additionally, several district schools have added Saturday school tutorials for parents and students. Students can study language arts while their parents attend citizenship classes, orientation, and career awareness sessions. School officials noted that they believe the number of ESOL students graduating has increased, based on state-reported rates as well as the numbers of students who pass the ESOL tests and exit the program. Other supplemental services incorporate cultural elements as a means of addressing student self-esteem.
For example, a K-8 school located on the Arapahoe Indian reservation in Wyoming offers all students services that include after-school academic programs, drug awareness events, and a 2-week summer cultural camp focusing on Native American traditions. School personnel emphasized that the path to high school graduation begins with helping students address their self-esteem issues. School officials mentioned that students already have a mindset that they are not going to graduate from high school and do not have a future on or off the reservation. The cultural element of the school’s programs is a significant component of building up students’ self-esteem and instilling pride in their Native American identity. Students commented that they participated in the program because of the Native American cultural activities offered, including clogging, dancing, and drumming. Program officials noted that since implementing interventions designed specifically to address the issues of Native Americans, they have noticed general improvement in student attitudes and performance. While studies suggest that self-esteem affects dropout rates, a study of the Arapahoe school’s intervention programs over time would be needed to determine their effectiveness. Graduation rates have become increasingly important since the passage of NCLBA, but Education has done little to evaluate and disseminate knowledge about interventions that could help increase such rates. The increased interest in high school reform by the National Governors Association, combined with concerns about low graduation rates, has set the stage for designing strategies that encourage more students to graduate. While many types of interventions are available to school districts, most have not been rigorously evaluated, and there is little information on which are successful and for what student subgroups.
Most officials from the 20 states we included in our study told us that such information would be useful. For example, one school official noted that little information exists on what interventions increase graduation rates among Native American students and that such information would be helpful in designing interventions. Education has made some efforts to address the problem of high school completion by sponsoring research and disseminating information through conferences and on its Web site. For example, Education officials noted that Education’s Office of Special Education Programs has supported research papers on dropout interventions for youth with disabilities. These studies are currently being completed and will be available in late 2005. In terms of dissemination, Education’s 2nd Annual High School Leadership Summit held in December 2004 included sessions on dropout prevention and recovery as well as strategies for creating higher-performing schools. Additionally, Education’s Office of Vocational and Adult Education has dedicated a part of its Web site to the High School Initiative. The pages on the Web site contain information on high school reform models and adolescent literacy initiatives, as well as information on research-based practices that may help high schools. While Education has made some efforts to help states and districts address the dropout problem, the agency has not acted on its commitment to implement the recommendation, contained in our 2002 report on interventions, that Education evaluate results from research. Agency officials have commented several times that they plan to evaluate the research on dropout prevention efforts and then disseminate the results through the agency’s What Works Clearinghouse. However, the Web space for this effort still contains placeholder information.
Agency officials indicated that reviews of other topics, such as elementary reading and math, have come before the reviews necessary for the dropout section of the Web site. The nation’s public school systems are responsible for educating 48 million students, the majority of our future workforce. Providing them with the skills needed to succeed is vital to the nation’s economic strength and ability to compete in a global economy. NCLBA was passed to ensure that all students have access to a high-quality education and to increase the likelihood that these students will graduate. In particular, the act seeks to make significant changes in public education by asking federal, state, and local education officials to reconsider how they assess the academic achievement of the nation’s students. NCLBA specifies that states must set high school graduation rate indicators as an additional benchmark, along with test results, for measuring schools’ progress. However, increasing and accurately calculating graduation rates have been formidable challenges for many states and districts. Many states have used flexibility to define their indicators as both numerical goals and progress toward those goals, where progress has generally ranged from no increase to a 1 percent increase from the previous year. Therefore, some states have set expectations that their schools may not graduate many more students than previously. Education has addressed these challenges by developing some guidance and providing support such as on-site peer reviews, conferences, and information on its Web site. However, because Education’s approach has been to provide guidance on how to deal with specific student circumstances on a case-by-case basis, not all states have received such guidance. Without guidance, state officials may not appropriately include students in these specific circumstances in their graduation rate definitions, resulting in graduation rates that may be inaccurate.
Such inconsistent calculations raise questions about the quality of graduation rates reported by states. A key challenge for states is to ensure that student data used for calculating state graduation rates, as well as data provided to NCES, are accurate and that state systems have the internal controls and data verification checks to promote data reliability. As some states transition to new graduation rate definitions, it is important that they ensure that such controls are part of new student data systems. Student data accuracy is particularly important because Education plans to use those state data reported to NCES to develop interim graduation rate estimates, which are intended to promote consistency across states and provide a nationwide perspective. Finally, little is known about the success of interventions that are designed to increase high school graduation rates. While some programs have shown potential to increase such rates, few have been rigorously evaluated. Evaluations of some interventions have examined a variety of outcomes (attendance, test scores, and job attainment), but more comprehensive evaluations are necessary to understand programs’ effects on graduation rates. As a result, schools and districts may not be using the most effective approaches to help their students stay in school and graduate. Education could play an important role in evaluating existing research, as we recommended in our 2002 dropout report. Although Education agreed with this recommendation, the agency has not established a clear plan or timetable for carrying it out. Additionally, Education should disseminate the results of research, since such information will be critical as high school reform moves forward.
To assist states in improving their definitions of high school graduation rates and enhancing the consistency of these rates, we recommend that the Secretary of Education make information available to all states on modifications available to account for students in special programs and students with disabilities in their graduation rate calculations. This information could include fuller explanations or examples of available flexibilities. We recommend that the Secretary of Education, before developing interim graduation rate estimates, assess the reliability of data submitted by states used for this purpose. This assessment could include specific criteria that demonstrate that states’ data systems can produce accurate data. We recommend that the Secretary establish a timetable for carrying out the recommendation in our 2002 report that Education evaluate research on dropout interventions, including those interventions that focus on increasing graduation rates. In addition, we recommend that the Secretary disseminate research on programs shown to be effective in increasing graduation rates. We provided a draft of this report to Education for review and comment. In its letter, Education concurred with two of our three recommendations: (1) about making information available to all states on modifications available to account for students in special programs and students with disabilities in their graduation rate calculations and (2) about evaluating research on dropout interventions and disseminating such research on those programs shown to be effective in increasing graduation rates. Regarding our recommendation that the department assess the reliability of data submitted by states that it plans to use to develop interim graduation rate estimates, Education noted that it has taken a number of steps to conduct such reliability assessments.
However, it is not clear whether these efforts include those data that Education will be using to develop interim graduation rate estimates. Although data submitted to Education are publicly available and have been reported by states for years, their reliability has not been determined. We believe that Education should take additional steps to ensure the reliability of these data before they are used in calculating such estimates. Education officials also provided technical comments that we incorporated into the report where appropriate. Education's written comments are reproduced in appendix II. We are sending copies of this report to the Secretary of Education, relevant congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be made available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and major contributors are listed in appendix III. To address the objectives of this study, we used a variety of methodological approaches. We analyzed the plans states were required to submit to Education to identify the graduation rate definitions states used and graduation rate indicators set by states, reviewed updates to plans submitted through July 2005, and reviewed letters from Education to states regarding its decisions about state plans and updates. As part of another GAO review, we surveyed officials in 50 states, the District of Columbia, and Puerto Rico to obtain information about two issues—the extent to which (1) states verify school and district data used to calculate high school graduation rates and (2) states have unique student identifiers. The surveys were conducted using self-administered electronic questionnaires posted on the World Wide Web.
We sent e-mail notifications to all 52 state Performance Based Data Management Initiative coordinators (50 U.S. states, the District of Columbia, and Puerto Rico) beginning on November 15, 2004. We closed the survey on January 13, 2005, after the 50th respondent had replied. Washington state and the District of Columbia did not complete the survey in time to be included in our analysis. We selected 20 states for further analysis. States were selected to capture variation in high school graduation rate definitions, geographic location, and types of interventions with the potential to increase graduation rates. We conducted a case study in 1 state (Washington) to calculate graduation rates; site visits in 3 states (Georgia, North Carolina, and Washington); site visits in 6 states (Georgia, Illinois, Minnesota, New York, Washington, and Wyoming) to observe interventions and interview program staff; and semistructured telephone interviews in all 20 states to obtain information on definitions used, implementation status, and guidance provided by Education. See table 4 for a list of states selected for site visits and phone interviews based on the research objective we studied. In our case study we used student data from Washington state for the 2002-03 school year, the most recent school year for which data were available at the time of our review. Using these data, we conducted an analysis comparing the results of calculating the high school graduation rate using two different graduation rate definitions—the cohort definition and the departure classification definition. Washington state used a modified cohort formula that was based on tracking student dropouts rather than on tracking student transfers. It also required all students with “unknown” status to be reported as dropouts.
We also used these data to analyze the effects on the graduation rate of (1) allowing schools to make progress toward the graduation rate target as a means of making AYP and (2) an estimated miscount of the number of dropouts. We interviewed experts to determine reasonable rates at which dropout counts may be in error. We analyzed data using a set of 444 of the state’s 547 high schools. The 103 high schools that were not included in our analysis were those with graduation rates of 10 percent or less. These were generally alternative high schools, such as those designed to serve students who had committed serious crimes. We also interviewed a state official who confirmed our understanding of the omitted schools and agreed with the reasonableness of the criterion. Although our analyses were based on a 4-year period, we used 1 year of student data and estimated information for the 3 prior years. We did not obtain student data from prior years because state officials told us that data accuracy had improved significantly in the 2002-03 school year. We assessed the reliability of the Washington state data by (1) performing electronic testing of required data elements for missing data and for obvious errors, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing Washington state officials knowledgeable about the data. However, we did not check the data to source information. We determined that the data were sufficiently reliable for the purposes of this report. To identify interventions with the potential to increase graduation rates, we used a “snowballing” approach. Using this approach, we reviewed the literature on interventions, interviewed Education officials and dropout prevention experts, and reviewed Web sites, such as the National Dropout Prevention Centers Web site (http://www.dropoutprevention.org/), to identify those that have the potential to increase high school graduation rates.
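The contrast between the two graduation rate definitions compared in the case study can be sketched with hypothetical numbers. The figures and function names below are invented for illustration only; the report's actual analysis used Washington state's 2002-03 student records and the state's own formulas.

```python
# Hypothetical sketch of the two graduation rate definitions discussed
# in the report. All numbers are invented for the example.

def cohort_rate(graduates, entering_cohort, transfers_out, transfers_in):
    """Cohort definition: graduates divided by the students who entered
    high school 4 years earlier, adjusted for transfers in and out."""
    adjusted_cohort = entering_cohort - transfers_out + transfers_in
    return graduates / adjusted_cohort

def dropout_based_rate(graduates, dropouts_over_4_years):
    """Definition based primarily on graduates and reported dropouts
    over a 4-year period. Students with "unknown" status would be
    counted among the dropouts, as Washington state required."""
    return graduates / (graduates + dropouts_over_4_years)

# Invented school: 1,000 students enter, 120 transfer out, 40 transfer in,
# 700 graduate, and 150 dropouts are reported over the 4 years.
cohort = cohort_rate(700, 1000, 120, 40)            # 700 / 920
dropout_based = dropout_based_rate(700, 150)        # 700 / 850

print(f"Cohort definition:        {cohort:.1%}")         # → 76.1%
print(f"Dropout-based definition: {dropout_based:.1%}")  # → 82.4%
```

The gap between the two results illustrates why a dropout-based rate is sensitive to undercounted dropouts: any leaver not recorded as a dropout silently inflates the rate, whereas the cohort definition starts from the entering class.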
Based on the research we reviewed and on recommendations from experts, we selected several interventions at various locations around the country. For those interventions we selected to visit, we reviewed available evaluations, including the findings related to outcomes such as increased graduation rates and improved literacy. We also assessed the methodological approaches of these evaluations. Based on our review, we identified 3 interventions that had been rigorously evaluated and had shown potential to increase graduation rates and visited 3 schools that had implemented these programs. (Rigorous evaluations of 2 other interventions that showed promising results were released subsequent to our field work. We reviewed the results of these evaluations and reported their findings.) We also visited schools that had implemented 13 other interventions that experts and research suggested show promise in affecting factors that may improve graduation rates. However, rigorous evaluations of these programs had not been done at the time of our review. To determine how Education assists states, we reviewed Education regulations, guidance, and other documents and interviewed Education and state agency officials. We also interviewed these officials to determine the degree to which Education’s actions have enhanced and disseminated knowledge about interventions. Finally, we interviewed officials from the National Governors Association, national education organizations, and other experts in the area of high school graduation rates and reviewed related research to obtain an understanding of the issues surrounding these rates and high school reform efforts to address them. We conducted our work between September 2004 and July 2005 in accordance with generally accepted government auditing standards. Harriet Ganson (Assistant Director), Julianne Hartman Cutts (Analyst-in-Charge), and Jason Palmer (Senior Analyst) managed all aspects of the assignment.
Dan Klabunde made significant contributions to this report, in all aspects of the work. In addition, Sheranda Smith-Campbell, Nagla’a El-Hodiri, and Greg Kato provided analytic assistance. Jean McSween, Karen O’Conor, and Beverly Ross provided technical support. Jim Rebbe and Sheila McCoy provided legal support, and Corinna Nicolaou assisted in the message and report development. No Child Left Behind Act: Improvements Needed in Education’s Process for Tracking States’ Implementation of Key Provisions. GAO-04-734. Washington, D.C.: September 30, 2004. No Child Left Behind Act: Additional Assistance and Research on Effective Strategies Would Help Small Rural Districts. GAO-04-909. Washington, D.C.: September 23, 2004. Special Education: Additional Assistance and Better Coordination Needed among Education Offices to Help States Meet the NCLBA Teacher Requirements. GAO-04-659. Washington, D.C.: July 15, 2004. Student Mentoring Programs: Education’s Monitoring and Information Sharing Could Be Improved. GAO-04-581. Washington, D.C.: June 25, 2004. Title I: Characteristics of Tests Will Influence Expenses; Information Sharing May Help States Realize Efficiencies. GAO-03-389. Washington, D.C.: May 8, 2003. Title I: Education Needs to Monitor States’ Scoring of Assessments. GAO-02-393. Washington, D.C.: April 1, 2002. School Dropouts: Education Could Play a Stronger Role in Identifying and Disseminating Promising Prevention Strategies. GAO-02-240. Washington, D.C.: February 1, 2002. Elementary School Children: Many Change Schools Frequently, Harming Their Education. GAO/HEHS-94-45. Washington, D.C.: February 4, 1994. Burns, Matthew K., Barbara V. Senesac, and Todd Symington. “The Effectiveness of the HOSTS Program in Improving the Reading Achievement of Children At-Risk for Reading Failure.” Reading Research and Instruction, vol. 43, no. 2 (2004): 87-103. Dynarski, Mark, Philip Gleason, Anu Rangarajan, and Robert Wood. Impacts of Dropout Prevention Programs, Final Report.
Princeton, New Jersey: Mathematica Policy Research, Inc., 1998. Dynarski, Mark, Philip Gleason, Anu Rangarajan, and Robert Wood. Impacts of School Restructuring Initiatives, Final Report. Princeton, New Jersey: Mathematica Policy Research, Inc., 1998. Dynarski, Mark and Philip Gleason. How Can We Help? What We Have Learned From Evaluations of Federal Dropout Prevention Programs. A Research Report from the School Dropout Demonstration Assistance Program Evaluation. Princeton, New Jersey: Mathematica Policy Research, Inc., 1998. Gingras, Rosano, and Rudy Careaga. Limited English Proficient Students at Risk: Issues and Prevention Strategies. Silver Spring, Maryland: National Clearinghouse for Bilingual Education, 1989. Greene, J. P. and Marcus A. Winters. Public School Graduation Rates in the United States (New York: Manhattan Institute for Policy Research, 2002), http://www.manhattan-institute.org/pdf/cr_31.pdf (accessed June 21, 2005). Kemple, James J. Career Academies: Impacts on Labor Market Outcomes and Educational Attainment. New York: Manpower Demonstration Research Corporation, December 2001. Kemple, James J., Corinne M. Herlihy, and Thomas J. Smith. Making Progress towards Graduation: Evidence from the Talent Development High School Model. New York: Manpower Demonstration Research Corporation, May 2005. Kerbow, David. “Patterns of Urban Student Mobility and Local School Reform.” Journal of Education for Students Placed at Risk, vol. 1, no. 2 (1996): 149-171. Lehr, Camilla A. and Cheryl M. Lange. “Alternative Schools Serving Students with and without Disabilities: What Are the Current Issues and Challenges.” Preventing School Failure, vol. 47, no. 2 (2003): 59-65. Opuni, K. A. Project GRAD Newark: 2003-2004 Program Evaluation Report. Houston, Texas: Center for Research on School Reform, February 2005. Quint, Janet, Howard S. Bloom, Alison Rebeck Black, LaFleur Stephens, and Theresa M. Akey.
The Challenge of Scaling Up Educational Reform: Findings and Lessons from First Things First. New York: Manpower Demonstration Research Corporation, July 2005. Rumberger, Russell, and Scott Thomas. “The Distribution of Dropout and Turnover Rates among Urban and Suburban High Schools.” Sociology of Education, vol. 73, no. 1 (2000): 39-69. Sinclair, M. F., S. L. Christenson, and M. L. Thurlow. “Promoting School Completion of Urban Secondary Youth with Emotional or Behavioral Disabilities.” Exceptional Children, (in press). Snow, Catherine E., M. Susan Burns, and Peg Griffin, Eds. Preventing Reading Difficulties in Young Children. Washington, D.C.: National Academy Press, 1998. Swanson, Christopher B. Keeping Count and Losing Count: Calculating Graduation Rates for All Students under NCLB Accountability. Washington, D.C.: Urban Institute, 2003, http://www.urban.org/url.cfm?ID=410843 (downloaded June 21, 2005). Shannon, Sue G., and Pete Bylsma. Helping Students Finish School: Why Students Drop Out, and How to Help Them Graduate. Olympia, Washington: Office of Superintendent of Public Instruction, 2003. U.S. Department of Education, National Center for Education Statistics, National Forum on Education Statistics. Forum Guide to Building a Culture of Quality Data: A School and District Resource. NFES 2005-801. Washington, D.C.: 2004. U.S. Department of Education, National Center for Education Statistics. National Institute of Statistical Sciences / Education Statistics Services Institute Task Force on Graduation, Completion, and Dropout Indicators. NCES 2005-105. Washington, D.C.: 2004. Wagner, Mary. Dropouts with Disabilities: What Do We Know? What Can We Do? A Report from the National Longitudinal Transition Study of Special Education Students. Menlo Park, California: SRI International, 1991.

About one third of students entering high school do not graduate and face limited job prospects.
The No Child Left Behind Act (NCLBA) requires states to use graduation rates to measure how well students are being educated. To assess the accuracy of states' graduation rates and to review programs that may increase these rates, GAO was asked to examine (1) the graduation rate definitions states use and how the Department of Education (Education) helped states meet legal requirements, (2) the factors that affect the accuracy of graduation rates and Education's role in ensuring accurate data, and (3) interventions with the potential to increase graduation rates and how Education enhanced and disseminated knowledge of intervention research. As of July 2005, 12 states used a graduation rate definition--referred to as the cohort definition--that tracks students from when they enter high school to when they leave, and by school year 2007-08 a majority plan to use this definition. Thirty-two states used a definition based primarily on the number of dropouts over a 4-year period and graduates. The remaining states used other definitions. Because the cohort definition is more precise, most states not using it planned to do so when their data systems can track students over time, a capability many states do not have. Education has assisted states primarily on a case-by-case basis, but it has not provided guidance to all states on ways to account for selected students, such as for students with disabilities, thus creating less consistency among states in how graduation rates are calculated. The primary factor affecting the accuracy of graduation rates was student mobility. Students who come and go make it difficult to keep accurate records. Another factor was whether states verified student data, with fewer than half of the states conducting audits of data used to calculate graduation rates. Data inaccuracies can substantially raise or lower a school's graduation rate. Education has taken steps to help states address data accuracy issues.
However, Education officials said that they could not assess state systems until they had been in place for a while. Data accuracy is critical, particularly since Education is using state data to calculate graduation rate estimates to provide consistency across states. Many interventions are used to raise graduation rates, but few are rigorously evaluated. GAO identified five that had been rigorously evaluated and showed potential for improving graduation rates, such as Project GRAD. In visits to six states, GAO visited three schools that were using such interventions. Other schools GAO visited were using interventions considered by experts and officials to show promise and focused on issues such as self-esteem and literacy at various grades. Education has not acted on GAO's 2002 recommendation that it evaluate intervention research, a recommendation the agency agreed with, and has done little to disseminate such research.
Under EPCA, as amended, covered product and equipment categories may need one or two rulemakings for the following reasons:

Most often, if Congress established a standard in the law, DOE must publish a rule revising the standard or explaining why a revision is not justified. Generally, such statutes require two rulemakings: an initial revision and then a second revision, usually 5 years later. This type of rulemaking is associated with most categories.

For several consumer products for which Congress did not set a standard in law, DOE must issue two rules—one rule to create a standard and a later rule to update the standard.

For several industrial equipment categories for which Congress established a standard in law, DOE must review amendments to model standards set by a specified nongovernmental standard-setting entity. Based on DOE’s review, it must either publish a rule updating the statutory standards to reflect the amended model standards, or publish a rule demonstrating that a more stringent standard is justified. The statute specifically requires DOE to consider the standards set by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE).

For three other industrial equipment categories, DOE must first publish a determination of whether a standard is needed. If DOE determines the need for a standard, it must then publish a rule setting such a standard 18 months after publishing the determination. However, DOE does not have a deadline for making a determination.
Overall, DOE is required to determine that revisions to standards achieve the maximum improvement in energy efficiency that is “technologically feasible and economically justified.” In determining whether a standard is economically justified, DOE must consider the economic impacts of the revision on manufacturers and consumers, the savings in operating costs throughout the life of the product, the total projected amount of energy savings likely to result from the standard, and whether the standard would result in a product that is less useful or does not perform as well. Table 1 shows the number of deadlines and types of actions required for consumer product and industrial equipment categories with deadlines that have passed. In addition, DOE is obligated to issue rules adopting revised standards for another six industrial equipment categories: packaged terminal air conditioners and packaged terminal heat pumps; warm air furnaces; packaged boilers; storage water heaters; instantaneous water heaters; and unfired water storage tanks. DOE has no mandated deadlines for issuing these rules. DOE has missed all 34 of the rulemaking deadlines that have come due for the 20 product categories with deadlines, completing 11 of these rules late and not yet completing the remaining 23. DOE has also not revised standards for one of the six industrial equipment categories that require updates but have no deadlines. LBNL estimates that delays in setting minimum energy efficiency standards for four categories of consumer products that DOE believes use the most energy will cost the nation at least $28 billion in forgone energy savings by 2030. Our panel members identified two additional significant effects of the delays: states attempting to set their own standards and businesses and utilities having difficulty in making business decisions and planning for the future. As table 2 shows, none of the 34 rules with passed deadlines was completed on time.
For rules that have been completed, delays ranged from less than 1 year to about 10 years, and incomplete rules are as much as 15 years late. Table 3 shows the status of rules completed for consumer product and industrial equipment categories with deadlines that have passed. As the table shows, only three product or equipment categories—clothes washers; refrigerators, refrigerator-freezers, and freezers; and small furnaces—have had all their rules completed. As the table also shows, some categories have had one of two required rules completed, and others have had no rules completed. Appendix III provides additional information on the deadlines for these product and equipment categories. Furthermore, for the six industrial equipment categories that do not have deadlines, DOE has completed rules for five and has begun, but not completed, the rulemaking process for the remaining category, as table 4 shows. DOE does not have estimates of the energy savings lost because of delays in completing rules. However, LBNL staff provided us with estimates of delays for the four categories of consumer products that DOE believes use the most energy—refrigerators and freezers, central air conditioners and heat pumps, water heaters, and clothes washers. According to these estimates, the nation would have saved at least $28 billion in energy costs, even after paying higher equipment costs, by 2030 if these standards had been put in place when required—that is, 2.1 quadrillion British thermal units (Btu) of natural gas and 1.4 quadrillion Btus of electricity. Historically, LBNL, under contract to DOE, has performed most of the technical and economic analyses for proposed standards rulemakings. To estimate the cost of delays, LBNL staff used the estimates of savings they developed to support proposed standards for the four consumer products.
According to our analysis, LBNL took steps to ensure the estimates were reasonably accurate by considering such factors as whether the technologies used for the analysis would have been available at the time of the deadlines for setting standards. The total forgone energy savings is equal to the annual primary energy consumption of approximately 20 million U.S. households. In addition, the delays will also result in 53 million tons of carbon dioxide emissions, an amount equivalent to about 1 percent of total estimated U.S. carbon dioxide emissions in 2004. Our panelists noted that they consider increased energy consumption to be one of the two most significant effects of DOE’s delays in revising efficiency standards. Similarly, delays for one type of industrial equipment, electric distribution transformers, have resulted in significant forgone energy savings. Distribution transformers reduce the voltage of an electric utility’s power distribution line to the lower voltages suitable for most equipment, lighting, and appliances. Nine years ago, DOE determined that standards for distribution transformers were warranted as technologically feasible and economically justified and were likely to result in significant savings. However, DOE did not publish proposed standards for distribution transformers in the Federal Register until August 2006. According to DOE, the energy savings from the proposed distribution transformer standards would eliminate the need for approximately 11 new 400-megawatt power plants by 2038, enough to provide a sufficient flow of electricity to about 3 million homes. These estimates account for only a portion of the forgone savings from the lack of timely rules for consumer products and industrial equipment; however, no estimates of the forgone savings are available for the remaining product and equipment categories. 
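The household-equivalence figure cited above can be checked with rough arithmetic. The per-household primary energy value used below is an assumed typical figure chosen to make the check work out, not a number taken from LBNL's analysis, which was far more detailed.

```python
# Back-of-the-envelope check of the forgone-savings figures cited
# in the report. The per-household figure is an assumption.

QUAD_BTU = 1e15  # 1 quadrillion Btu

natural_gas_btu = 2.1 * QUAD_BTU   # forgone natural gas savings by 2030
electricity_btu = 1.4 * QUAD_BTU   # forgone electricity savings (primary)
total_btu = natural_gas_btu + electricity_btu

# Assumed annual primary energy use per U.S. household: ~175 million Btu.
btu_per_household = 175e6

households = total_btu / btu_per_household
print(f"Total forgone savings: {total_btu / QUAD_BTU:.1f} quads")   # → 3.5 quads
print(f"Equivalent households: {households / 1e6:.0f} million")     # → 20 million
```

Under that assumption, 3.5 quadrillion Btu of forgone savings corresponds to the annual primary energy consumption of roughly 20 million households, consistent with the figure in the report.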
Equally important, because many energy-using products and equipment have long service lives, delays in setting standards lead to years of using products and equipment that are less energy efficient than they could be, compounding the loss of energy efficiency. For example, electric distribution transformers have a typical service life of about 30 years. With about 50 million transformers in the United States, each year of delay until a rule setting standards is completed means that more of these transformers will be replaced at the present energy efficiencies, rather than the proposed level, leading to many additional years of forgone savings. Other, nonquantifiable effects have also resulted, or can result, from delays in issuing energy efficiency rules. Our panel members noted the possibility that states would attempt to set their own appliance efficiency standards as the other most significant effect of delays. Indeed, states are dissatisfied with DOE’s delays. In 2005, 15 states and New York City sued DOE, asserting that its “foot-dragging results in greater—and avoidable—energy use.” The states cited, among other effects, high energy costs, increased environmental harm, and burdens on the electricity grid from DOE’s delays as justification for their actions. The suit was settled recently, with DOE agreeing to eliminate its backlog by 2011, the same date set in its report to Congress. According to officials from the California Energy Commission, California has begun to press Congress to lift the preemption that prevents the states from readily setting their own standards. While states had expressed dissatisfaction with the pace of rulemaking and before 1987 had petitioned DOE for waivers, the 1987 amendment to EPCA made it considerably more difficult to obtain a waiver, according to DOE officials. Since then, DOE has received only one petition for a waiver.
Panel members commented that if states obtain waivers and pass individual standards, the result could be a patchwork of state standards, preventing economies of scale in manufacturing and raising costs for both consumers and manufacturers. Panel members also pointed out that delays make business planning difficult for manufacturers and utilities, which could increase their costs and, therefore, costs to consumers. As one panel member noted, “Product manufacturers don’t know when new standards will take effect in advance, making it difficult to plan product redesigns and thereby increasing cost of compliance.” According to another panelist, “An uncertain future regulatory environment makes it very difficult for appliance and equipment manufacturers to make investment decisions.” For example, a manufacturer may be reluctant to invest large sums in a new technology if the new technology may be made obsolete by new federal efficiency standards or if new standards might not allow the manufacturer to gain a hoped-for competitive advantage via new technology. To minimize such uncertainty and its attendant risks, manufacturers want DOE to make regulatory decisions on time. DOE has developed a catch-up plan to resolve the backlog of delayed energy efficiency standards. However, since DOE has not completely identified the root causes for the delays and because the plan lacks critical elements of an effective management approach, the likelihood of success is not clear. According to DOE’s January 2006 report to Congress, the department has identified four causes of delays in its efficiency standards rulemaking: (1) an overly ambitious schedule set in statute; (2) the sequential nature of the rulemaking process; (3) the consequences of the Process Rule, which the report states that DOE adopted in 1996 to address concerns about its analyses and stakeholder involvement; and (4) DOE’s internal document review and clearance process.
Specifically: An ambitious statutory schedule. According to the report, Congress’s rulemaking schedule was “rigorous.” As a result, the program staff were unable to meet the deadlines from the beginning. These delays were exacerbated when Congress increased the number of products that required rulemakings. In 1994, DOE attempted to address the backlog by proposing standards for eight products in one rulemaking. However, according to DOE, this rulemaking effort met with strong opposition from industry, drawing over 5,000 responses during the comment period, and DOE withdrew the proposal. Following this experience, Congress imposed a 1-year moratorium on new or amended standards. The moratorium further exacerbated the backlog, according to DOE. Sequential nature of the rulemaking process. The elements of a rulemaking must occur sequentially, and, according to DOE, “this sequence-dependent nature of the analyses makes it vulnerable to unrecoverable delays.” The standards rulemaking process includes many overlapping requirements from EPCA, as amended; Executive Orders; and the Process Rule, which create a complex analytical and procedural challenge, according to the report. The standards rulemaking process typically consists of three stages—an advance notice of proposed rulemaking, a notice of proposed rulemaking, and a final rule—and each of these stages includes internal and external review and comment periods, as well as technical analyses that build on previous analyses. Most of these tasks cannot be done concurrently, so when delays occur, often the time lost cannot be made up because of these rigid requirements. Consequences of the Process Rule. Under DOE’s 1996 “Process Rule,” the potential energy savings, rather than statutory deadlines, determine which standards should be set first.
Consequently, DOE reported to Congress, it analyzed the likely impacts of all pending energy efficiency rulemakings and used this analysis to categorize each rulemaking as high-, medium-, or low-priority, depending on energy-savings potential. Regardless of deadlines, high-priority rules received the bulk of the resources, medium- priority rules received some resources, and low-priority rules were not addressed at all. The Process Rule also called for increased stakeholder input and expert review, which added time to the rulemaking, according to DOE’s report. Finally, according to DOE’s 2006 report, the Process Rule increased the complexity of the technical analysis required, adding more time. Internal document review and clearance process. The quality of draft rulemaking documents was inconsistent, according to DOE’s 2006 report, which made the internal review process time consuming. In addition, reviews by the Office of General Counsel, Office of Policy and International Affairs, and other internal reviewers were not always managed effectively, according to the report. Consequently, issues were not identified and resolved early in the process, and draft rules often did not receive the timely reviews needed to approve them for issuance. While DOE identified these causes for rulemaking delays in its January 2006 report, DOE staff we spoke with did not agree on the causes. Program staff told us General Counsel’s legal reviews were excessively long, while General Counsel officials attributed their lengthy review to the poor quality of documents, which required extensive non-legal editing. DOE lacks program management data that would enable it to identify with specificity where in the agency’s internal review process delays are occurring. In addition, LBNL staff disagreed with the report’s contention that the Process Rule required more time for technical analysis. 
Rather, they said, the Process Rule’s requirement for more complex analysis and for more systematic stakeholder involvement addressed those parts of the rulemaking process earlier than before but took about the same amount of time. Our panel members, based on their past involvement or familiarity with standards rulemaking, agreed that the internal review process was problematic. Specifically, the most frequently cited cause of delays in developing energy efficiency standards was the General Counsel review process. One panel member stated that the General Counsel review process was “one of the lengthiest and most opaque elements of the standards process.” In addition, about half of our panelists said the low priority historically given to the program, not only by DOE but by the Administration and Congress as well, was a major cause of delay in issuing the standards. Finally, panel members identified two additional major causes of delay that DOE did not: inadequate budgets and insufficient technical staff. While some of these identified causes are beyond DOE’s control, such as the statutory deadlines, DOE reported that it could take actions to clear the backlog by 2011. DOE plans to do the following to ensure that rulemakings are more timely: Make the rulemaking process more efficient. DOE plans to stagger the start of rulemakings in order to make the best use of staff time and resources. In the past, DOE staff worked on one rule at a time. Under DOE’s plan, staff will work on several rules simultaneously, which should enable the staff to make better use of their time when drafts are out for review. In addition, DOE plans to combine several products with related technical and policy characteristics—such as water heaters, pool heaters, and direct heating equipment—into a single rulemaking, which should expedite the rulemaking process. Adhere to the deadline for closing public comments.
DOE reported that it will only consider comments received before their deadlines in its current analysis. In the past, DOE continued to consider comments after the closing date stated in the Federal Register and responded to those comments with additional analysis, which delayed the issuance of the final rulemaking. Simplify the analysis for each rulemaking. Senior management officials are expected to approve the staff’s analytical approach and scope of effort earlier in the rulemaking process. In the past, rulemaking staff conducted their analysis for a product category without ensuring that senior management approved of their approach. As a result, according to the plan, management often called for a different approach when reviewing a draft analysis, which required significantly more time. In addition, DOE plans to conduct less exhaustive analysis for some rules, rather than conducting the same level of analysis for all rules. If all the stakeholders agree that a product category does not require DOE’s usual complex analysis, which would be the case when the key issues are clearly understood, DOE will perform less extensive analysis. DOE expects this change to shorten rulemaking times. Better ensure the quality of the proposed rulemaking and accountability of all staff and reviewers. DOE plans to take four actions toward this goal: (1) train staff in how to meet all regulatory procedural requirements and provide readily available comprehensive guidance in order to avoid procedural mistakes that lead to delays, (2) contract with a national laboratory to maintain a data management system for tracking rulemaking progress and use the resulting data to identify problems for quicker resolution, (3) match skill levels with tasks so that resources are used most efficiently, and (4) encourage stakeholders to negotiate a proposed standard in return for an expedited rulemaking process. Improve the document review and clearance process. 
DOE plans to emphasize better document quality so that reviewers can focus their efforts on legal and policy issues rather than on basic editorial issues. In the past, formats, styles, and approaches of documents were not consistent, which slowed down the review process. DOE has issued a style guide and a template for documents to better ensure consistency. In addition, DOE plans to have different reviewers examine the proposed rulemaking concurrently, rather than sequentially, throughout the rulemaking process. Adhere to a 36-month timetable for completing a rule. DOE will allocate approximately 16 months for analysis, 6 months for public review and comment, 8 months for its internal review, and 6 months for review by the Office of Management and Budget. In the past, while DOE had a 3-year limit for rulemaking, it virtually never issued rules within that period. Most panelists rated the components of DOE’s catch-up plan highly and expect that, if followed, it will likely help DOE meet its schedule for completing rules. The panelists particularly favored the parts of DOE’s catch-up plan to reform its internal review process, use an expedited process when stakeholders recommend standards on which they have reached consensus, and stagger rulemakings. They also emphasized the importance of having the Secretary of Energy and the administration provide more management attention and priority to the program. Finally, most agreed that certain aspects of DOE’s current rulemaking process should not be changed. Specifically, DOE should continue to perform complete technical and economic analyses and explain its justification for the standards it selects, include the public and stakeholders throughout the rulemaking process, and ensure that the process and analyses are transparent. Despite these favorable views, some panelists expressed concern that DOE might not have addressed what they consider the most relevant causes of delay. 
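As a quick consistency check, the stage allocations in DOE's timetable sum exactly to the 36-month limit (a minimal sketch using the figures quoted above):

```python
# Stage allocations from DOE's catch-up plan timetable, in months.
timetable = {
    "technical analysis": 16,
    "public review and comment": 6,
    "internal DOE review": 8,
    "OMB review": 6,
}

total = sum(timetable.values())
assert total == 36  # matches the 3-year (36-month) rulemaking limit
print(total)  # → 36
```

The arithmetic also shows how little slack the plan contains: a slip in any one stage must be recovered in a later stage, which is difficult given the sequential nature of the process described earlier.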
For example, according to one panelist’s observations, “the delays are an internal management problem at DOE, and the department’s internal procedures are a black box. It is hard to know with any assurance what the real problem is and whether the issue is budget or staffing or bureaucratic procedures.” According to another panelist’s review of DOE’s plan, the plan “focused too much on reducing analytical complexity and controlling stakeholder participation—neither of which were major contributors to delays—and too little on internal process improvements, without which delays will continue.” Although many of DOE’s actions appear reasonable, we agree that DOE may not have identified the root causes of its rulemaking delays. Consequently, DOE risks expending resources on the wrong factors or emphasizing minor or irrelevant causes. DOE has not developed the program management data it needs to identify bottlenecks in the rulemaking process. Even though DOE has work logs that compile limited data on some parts of the rulemaking process, such as the amount of time taken for internal reviews, the data are not detailed enough to identify the source of delays. Furthermore, DOE does not have data on the length of all stages of its rulemaking process. Because DOE managers lacked data to determine causes, they said they compiled information about possible causes during discussions with staff. Despite the problems with their data, managers told us that they believe that they have identified the root causes of delay. According to our work on leading performance management practices and the work of a government regulatory process expert, management plans should contain specific strategies to resolve problems and help congressional decision makers understand how the agency plans to improve its performance. Such plans also provide a basis for accountability.
While DOE’s plan includes elements intended to make the rulemaking process more efficient, it lacks two critical elements to help ensure success of the plan—assurance of accountability and management’s allocation of adequate resources. Specifically: Assurance of accountability. While DOE has laid out a schedule for clearing its rulemaking backlog for standards, its past poor performance calls into question whether it is likely to be accountable to the schedule in the catch-up plan. According to an Assistant General Counsel who manages and tracks the regulatory process for the Department of Transportation (DOT), an agency with very extensive and effective electronic regulatory management, a successful rulemaking process holds its management and staff accountable to interim and final deadlines. For example, DOT publishes its deadlines on its Web site, making the agency’s actions to meet the deadlines transparent to all stakeholders. While DOT’s deadlines are target dates only, this transparency puts pressure on each participant to carry out his or her responsibilities on time or to provide legitimate reasons for any delays. DOE publishes a schedule of deadlines for some standard-setting rulemaking, including the interim deadlines, in its Semiannual Regulatory Agenda. However, when DOE misses these deadlines, it generally does not explain why, or how it plans to make up the lost time when it publishes revised deadlines. The catch-up plan does not ensure that the pattern of missing deadlines will be broken. Adequate resources. As far back as 1993 we reported that insufficient resources were a primary cause of DOE’s delays in updating energy efficiency standards. This may still be the case. While the DOE plan calls for a sixfold increase in workload, it does not increase program staffing and contractor budgets in the same proportion. 
Program managers told us they generally have had 7 to 14 staff working on energy efficiency rules, with 7 on the job as of fiscal year 2006. They plan to add 2 full-time staff and 1 from the Presidential Management Fellows (PMF) program, a nonpermanent position, for an increase to 10 staff in fiscal year 2007. Similarly, from fiscal years 2000 through 2006, DOE’s budget for contractor staff has averaged about $10 million per year. For fiscal year 2007, DOE requested $12 million for contractors, a 20 percent resource increase. DOE expects these limited resource increases to cover a 600 percent increase in workload. In the absence of further increasing resources, DOE said in its January 2006 report it plans to meet the increased workload by improving productivity. DOE’s program for energy efficiency standards has been plagued by delays for decades. Although many steps in DOE’s most recent January 2006 plan to address these delays appear to be reasonable, DOE does not definitively know whether the plan will address root causes and clear the backlog. Furthermore, DOE’s plan lacks important elements of effective management practices that would help assure success. Consequently, it is unclear whether DOE can carry out the ambitious schedule it has set for itself to update energy efficiency standards. If DOE does not succeed in clearing its backlog, the nation and consumers will continue to forgo the benefits of more energy-efficient consumer products and industrial equipment. The loss of such benefits will make the nation depend even more on imported energy. The continuing commitment of DOE’s top management to make standards rulemaking a top organizational priority is essential to DOE’s success in completing all energy efficiency rules. 
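The mismatch between planned resource growth and workload growth described above can be expressed as percentages (a sketch using the report's figures; the staff percentage is our derivation):

```python
# Contractor budget: ~$10M/yr average (FY2000-2006) vs. $12M requested (FY2007).
budget_old, budget_new = 10.0, 12.0   # millions of dollars
budget_increase = (budget_new - budget_old) / budget_old * 100  # percent

# Program staff: 7 on board in FY2006, rising to a planned 10 in FY2007.
staff_old, staff_new = 7, 10
staff_increase = (staff_new - staff_old) / staff_old * 100  # percent (derived)

workload_increase = 600  # percent; the sixfold increase DOE's plan implies

print(f"budget: +{budget_increase:.0f}%")   # → budget: +20%
print(f"staff: +{staff_increase:.0f}%")     # → staff: +43%
print(f"workload: +{workload_increase}%")   # → workload: +600%
```

Even under generous assumptions, resource inputs grow by well under half while the workload grows sixfold, which is why the report questions whether productivity improvements alone can close the gap.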
To increase the likelihood that DOE’s plan for updating minimum energy efficiency standards is successfully implemented, we recommend that the Secretary of Energy take the following actions: Employ the elements of leading management practices, including expediting the efforts DOE has begun to establish a tracking system to gather data that may be used to identify and address causes of delays to more effectively manage the rulemaking process; ensuring that the interim goals and time frames are transparent to all stakeholders, and that all internal stakeholders, including reviewers and program staff, are held accountable to the time frames; and allocating adequate resources within DOE’s appropriation. We provided the Department of Energy with a draft of this report for review and comment. Although DOE did not provide views on our recommendations, it expressed concerns in two areas. First, regarding our discussion of the causes of delays in setting standards, DOE stated that it is incorrect to assign blame for delays to any one office, official, decision, or process—and specifically to the Office of the General Counsel. DOE stated that doing so reflects a simplistic and largely incorrect understanding of the program’s complexity. DOE noted that the delays in setting standards have spanned administrations of both parties, several Secretaries of Energy, and various DOE offices and personnel; also, although DOE work logs may indicate that a specific office has a document for a certain period of time, during that time multiple individuals from different offices may have been working together on the document. We disagree with DOE’s characterization of our analysis. In establishing the context for our findings, we pointed out that the energy efficiency standards-setting process was complex and that there were multiple reasons for delays. 
To provide more definitive information on the root causes of the extensive delays that have been experienced, we sought data from DOE and the opinions of cognizant DOE staff. However, DOE management could not provide data that conclusively documented the reasons for the substantial delays; the data it did provide, contained in internal work logs, were inadequate to determine causality; and representatives of the various DOE offices could not agree on the root causes. We therefore turned to a well-recognized process for identifying causes in complex situations: a Delphi panel. Panel members were carefully, objectively selected individuals who have been closely involved in DOE’s rulemaking process for setting standards over an extensive period of time. They most frequently cited delays in the General Counsel review process as the cause of delays in developing energy efficiency standards. We believe that our use of this method provided a clearer understanding of the causes of delays than DOE has been able to provide. As we noted earlier, in DOE’s January 2006 report to Congress and in our interviews with representatives of the offices involved in the standard-setting process, those associated with the program generally acknowledged that they could have done more but pointed to others as the cause of the delays and therefore have not fully accepted responsibility for the program’s failures. Second, DOE stated that our report did not capture many of the recent standards-setting activities undertaken since enactment of EPAct 2005. We agree that there has been a flurry of standards-related activity, as expressed by DOE in its letter commenting on our report, and we have noted this in our report. Although we recognize that DOE has taken a number of steps that should move the program forward, it has not yet published any additional final standards for the product and equipment categories included in the scope of our work, and our report’s findings have not changed.
DOE’s letter commenting on our report is presented in appendix V. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Energy and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or members of your staff have questions about this report, please contact me at (202) 512-3841 or wellsj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. States and their subdivisions, such as counties and cities, adopt building codes that establish minimum requirements for energy-efficient design and construction of commercial and residential buildings. The building codes regulate components that affect the amount of energy that a building will use, such as the building envelope, electrical power, and lighting. These codes vary from one state to another and sometimes within a state. They may be mandatory or voluntary codes, either requiring builder compliance or serving as guidelines. States and local jurisdictions may adopt model building codes developed by nonprofit organizations, such as the American Society of Heating, Refrigerating and Air-Conditioning Engineers’ (ASHRAE) Standard 90.1 and the International Code Council’s (ICC) International Energy Conservation Code (IECC). Both ASHRAE and ICC publish codes for commercial and residential buildings. ASHRAE uses a consensus and public hearing process to develop its model building codes. 
It involves the design community, including architects and lighting and mechanical designers; the code enforcement community, including building code officials and state regulatory agencies; building owners and operators; manufacturers and utility companies; and representatives from the Department of Energy (DOE), energy organizations, and the academic community. ICC uses a different process to develop its model building codes. Under its process, anyone can propose a code, and the IECC code development committee, which includes mostly building code officials, votes on the proposals. According to staff at the Pacific Northwest National Laboratory (PNNL), which monitors state building codes for DOE, although ASHRAE and ICC use different processes to develop their model building codes, the two organizations incorporate each other’s codes into their own when they revise them. As a result, ASHRAE and ICC codes that are revised at about the same time generally have similar energy efficiency provisions. In 1995, the ICC succeeded CABO and, as such, the IECC replaced the MEC. Each time the ICC revises the IECC, DOE has 12 months to determine whether the revision will improve energy efficiency in residential buildings and publish a notice of that determination in the Federal Register. The Act does not specify what type of revision triggers the start of the 12-month period for either commercial or residential determinations; but, according to DOE officials, the 12-month period is triggered by ASHRAE’s and ICC’s publication of revised codes. The Act provides that if the Secretary determines that a revision to ASHRAE’s or ICC’s model building code will improve energy efficiency—called a positive determination—states “shall” review their building codes.
For commercial model building codes, each state has 2 years after DOE publishes a positive determination on a revised ASHRAE model building code to certify to DOE that it has reviewed and updated the provisions of its commercial building code in accordance with the revised code. For residential model building codes, each state also has 2 years after a positive determination for certification, but it must certify to DOE that it has reviewed the provisions of its residential building code and determined whether it is appropriate to update them to meet or exceed the revised code. Subsequent to enactment of these provisions, the Supreme Court ruled that the constitution does not allow Congress to require states to regulate a matter. DOE program managers told us that DOE does not require states to review their codes following a positive determination. Instead, the managers told us, DOE facilitates states’ efforts to adopt revised codes. PNNL officials told us they assist DOE on all aspects of the building code determinations and provide training and technical assistance to state and local officials responsible for building codes. As of August 2006, ASHRAE and ICC have published a combined total of nine revisions to their model building codes for DOE to evaluate. ASHRAE revised Standard 90.1 three times, and CABO revised the MEC twice before it was incorporated into ICC in 1995. The ICC issued its first version of the IECC in 1998 and has since revised it three times. Deadlines for DOE’s determinations have come due on all these revisions, except the 2006 IECC revision, which will be due in January 2007. 
We were asked to report on (1) whether DOE has met its statutory deadlines for determining if states should adopt revised commercial model building codes, (2) whether DOE has met its statutory deadlines for determining if states should consider adopting revisions to the residential model building code, and (3) whether and, if so, to what extent DOE tracks states’ building codes. This appendix contains information about these objectives. To address the commercial and residential building code determinations DOE has completed, we reviewed the requirements and deadlines for building code determinations contained in statute and DOE determinations published in the Federal Register. We also interviewed and obtained documents from officials at DOE, PNNL, ASHRAE, ICC, and the American Council for an Energy Efficient Economy. Since DOE program officials use ASHRAE’s and ICC’s revision publication dates as the trigger date for DOE’s deadlines for making determinations, we used these dates for our analysis. We did not attempt to determine why DOE might miss deadlines for determinations or why individual states adopt building codes. DOE has completed only one of three commercial model building code determinations that have come due. DOE issued a positive determination for the first of three revisions to ASHRAE’s Standard 90.1 about 17 months after the deadline. As of December 2006, DOE had not completed determinations for either of the remaining revisions and has decided to combine them. Table 5 provides details about the revisions’ publication dates, the deadlines for the determinations, and the status of DOE’s reviews. DOE has completed four of five residential building code determinations that have come due. DOE issued determinations for all of these four CABO/ICC revisions to the MEC/IECC and said the revisions would improve energy efficiency. DOE completed its first determination on time and completed the next three from 1 month to over 1 year late. 
As of December 2006, DOE had not yet completed the determination for the fifth IECC revision. Table 6 provides details about the revisions’ publication dates, the due dates for the determinations, and the status of DOE’s reviews. DOE and PNNL staff track states’ commercial and residential building codes and publish information about them on DOE’s Web site. PNNL staff told us they e-mail state officials twice a year to confirm that DOE has the most current information about the states’ commercial and residential building codes and to obtain any updated information. Additionally, they are in frequent contact with the states and continually update their information on states’ building codes. DOE’s Web site reports the type of code adopted by each state and whether builder compliance with the code is voluntary or mandatory, and provides limited information about the stringency of the code, which PNNL staff determine by analyzing the state-provided information. For example, DOE’s Web site reports that Florida has adopted mandatory codes for both commercial and residential buildings and that the commercial building code is more stringent than ASHRAE Standard 90.1-2001 and the residential building code is more stringent than the 2000 IECC. The complete list of state commercial and residential building codes for energy efficiency is available at http://www.energycodes.gov/implement/state_codes/state_status_full.php. Although the information published on DOE’s Web site compares the stringency of state codes with ASHRAE’s and ICC’s model building codes, PNNL staff told us the information should not be used to judge the stringency of state codes relative to the ASHRAE and ICC codes for which DOE has made a determination. The staff explained that while more recent state codes are generally more energy efficient than older state codes, there are other factors that affect their stringency.
For example, states may adopt DOE’s latest determination on ASHRAE’s and ICC’s codes as their state building codes, but may amend them to be weaker or stronger. For example, according to PNNL staff, Georgia adopted the latest DOE residential determination but amended it to be more similar to prior DOE determinations. In other cases, the changes to a revised code may not affect all states equally; therefore, while a state may not have adopted the most recent revision, the changes in that revision may not have applied to that state anyway. For example, PNNL staff told us that, although Massachusetts did not adopt the 2000 IECC, the differences between the 2000 IECC and the 1995 MEC, which Massachusetts did adopt, did not apply to that state. Therefore, PNNL staff consider Massachusetts’s code to be as stringent as the 2000 IECC. Furthermore, PNNL staff told us that, while some states have adopted model building codes that are more recent than those for which DOE has issued a determination, these codes should not be assumed to be more stringent than those for which DOE has made a determination until PNNL makes a comparable technical analysis. PNNL staff told us that they have the information and technical capability to compare the stringency of all the state codes with those for which DOE has made a determination. However, they said they typically analyze building codes on a state-by-state basis only at DOE’s request and that they do not currently have a comprehensive analysis of how all states’ codes compare to DOE’s latest determinations. As of September 2006, DOE had not directed PNNL to complete a comprehensive analysis.
DOE officials told us that DOE focuses on facilitating states’ efforts to adopt building codes rather than penalizing them for not meeting DOE building code determinations and, as such, they do not believe a comprehensive analysis of which states’ building codes are as stringent as those for which DOE has made a positive determination justifies the resources it would require. Our objectives were to examine (1) the extent to which DOE has met its statutory obligations to issue rules on minimum energy efficiency standards for consumer products and industrial equipment and (2) whether DOE’s plans are likely to clear the backlog of required rulemakings and whether these plans could be improved. To address these objectives, we reviewed the statutory requirements and deadlines for developing energy efficiency standards for consumer products and industrial equipment, program information available on DOE’s Web site, information provided by program staff, and DOE’s January 2006 and August 2006 reports to Congress. For the purposes of our review, we did not include the 17 additional product categories that the Energy Policy Act of 2005 added to DOE’s responsibilities, including the one that came due in August 2006. Although DOE is also required to issue rules regarding standards for plumbing products, we excluded them from this report because they primarily involve conserving water, rather than energy. Furthermore, we did not evaluate the merit of the standards DOE has issued. We conducted interviews with DOE program officials; officials of the Office of General Counsel; officials at Lawrence Berkeley National Laboratory, the National Energy Technology Laboratory, and the National Institute of Standards and Technology; and a regulatory process expert at the Department of Transportation. 
We also interviewed officials at the American Council for an Energy Efficient Economy; the Appliance Standards Awareness Project; the American Society of Heating, Refrigerating and Air-Conditioning Engineers; the California Energy Commission; Pacific Gas and Electric Company; and Natural Resources Canada; and obtained documentation as needed. We analyzed data on DOE’s rulemaking process, estimates of national energy savings from energy efficiency standards, and program resources. In addition, we used a Web-based, modified Delphi method to obtain views from a panel of 33 stakeholders on the causes and effects of delays in setting standards and on proposed solutions to these delays. The Delphi method is a systematic process for obtaining individuals’ views on a question or problem of interest and, if possible, obtaining consensus. Our modified Delphi method had two phases. Phase 1 consisted of a series of open-ended questions concerning DOE’s delays. In phase 2, panel members rated the significance or priority of the causes of delays, effects of delays, and solutions to delays that they had identified in phase 1. We selected the panel members from a group of stakeholders who were both widely recognized as knowledgeable about one or more key aspects of energy efficiency standards and involved in or familiar with DOE’s rulemaking process. The group included officials from federal and state agencies, manufacturers, trade associations, energy efficiency advocacy groups, consumer interest groups, utilities, and utility associations, some of whom were previously employed by DOE as participants in the rulemaking process. We used a variety of methods to determine that the panelists we selected had the expertise necessary to participate in the panel. A list of the 33 panel members is included in appendix IV.
To report panel results, we use the term “most” when two-thirds or more of the panel agreed and the term “the majority” when one-half or more of the panel agreed. We conducted our review from June 2005 through January 2007 in accordance with generally accepted government auditing standards.

Table notes: Included in the “Furnaces” deadline. Included in the “General service fluorescent lamps and incandescent reflector lamps” deadline. Calculations for years delayed for overdue rules are as of December 31, 2006. Subsequent updates to standards for the category called “Furnaces” are intended to cover updates for mobile home furnaces and small furnaces and are included in the “Furnaces” deadlines. Subsequent updates to standards for the category called “General service fluorescent lamps and incandescent reflector lamps” are intended to cover updates for “Additional general service fluorescent lamps and incandescent reflector lamps” and are included in that category’s deadlines.

Panel members (continued from appendix IV):
Earl Jones, GE Consumer & Industrial
Joseph Mattingly, Association of Appliance & Equipment Manufacturers
James McMahon, Lawrence Berkeley National Laboratory
Harry Misuriello, Alliance to Save Energy
Jim Mullen, Lennox International Inc.
Steven Nadel, American Council for an Energy-Efficient Economy
Kyle Pitsor, National Electrical Manufacturers Association
James Ranfone, American Gas Association
Priscilla Richards, New York State Energy Research and Development Authority
Michael Rivest, Navigant Consulting, Inc.

In addition to the individual named above, Karla Springer, Assistant Director; Tim Bober; Kevin Bray; Valerie Colaiaco; Janelle Knox; Megan McNeely; Lynn Musser; Alison O’Neill; Don Pless; Bill Roach; Frank Rusco; Ilga Semeiks; and Carol Herrnstadt Shulman made key contributions to this report.

The Department of Energy (DOE) sets energy efficiency standards through the rulemaking process for certain consumer product categories, such as kitchen ranges, and industrial equipment, such as distribution transformers.
Congress reported in 2005 that DOE was late in setting standards and required DOE to report every 6 months on the status of the backlog. GAO examined (1) the extent to which DOE has met its obligations to issue rules on minimum energy efficiency standards for consumer products and industrial equipment and (2) whether DOE's plan for clearing the backlog will be effective or can be improved. Among other things, GAO convened an expert panel on energy efficiency standards to identify causes and effects of delays and assess DOE's plans. DOE has missed all 34 congressional deadlines for setting energy efficiency standards for the 20 product categories with statutory deadlines that have passed. DOE's delays ranged from less than a year to 15 years. Rulemakings have been completed for only (1) refrigerators, refrigerator-freezers, and freezers; (2) small furnaces; and (3) clothes washers. DOE has yet to finish 17 categories of such consumer products as kitchen ranges and ovens, dishwashers, and water heaters, and such industrial equipment as distribution transformers. Lawrence Berkeley National Laboratory estimates that delays in setting standards for the four consumer product categories that consume the most energy--refrigerators and freezers, central air conditioners and heat pumps, water heaters, and clothes washers--will cost at least $28 billion in forgone energy savings by 2030. DOE's January 2006 report to Congress attributes delays to several causes, including an overly ambitious statutory rulemaking schedule and a lengthy internal review process. In interviews, however, DOE officials could not agree on the causes of delays. GAO's panel of widely recognized, knowledgeable stakeholders said, among other things, that the General Counsel review process was too lengthy and that DOE did not allot sufficient resources or make the standards a priority. 
However, GAO could not more conclusively determine the root causes of delay because DOE lacks the program management data needed to identify bottlenecks in the rulemaking process. In January 2006, DOE presented to Congress its plan to bring the standards up to date by 2011. It is unclear whether this plan will effectively clear DOE's backlog because DOE does not have the necessary program management data to be certain the plan addresses the root causes. The plan also lacks critical elements of an effective project management plan, such as a way to ensure management accountability for meeting the deadlines. Finally, the plan calls for a sixfold increase in workload with only a small increase in resources. DOE plans to manage the workload through improved productivity.
FTA’s New Starts program supports new fixed-guideway transit capital projects and extensions to existing ones, such as light rail, commuter rail, ferry, and bus rapid transit (BRT) projects. Sponsors of New Starts projects—those with a total cost of $250 million or more or a capital investment program contribution of $75 million or more—must take a number of steps to select a project and apply for New Starts funding. Sponsors of New Starts projects are required by law to go through a planning and project development process, which is divided into three phases: alternatives analysis, preliminary engineering, and final design. This is followed by the construction phase. (See fig. 1.) In the alternatives analysis phase, project sponsors identify the transportation needs in a specific corridor and evaluate a range of modal and alignment alternatives to address the locally identified problems in that corridor. Project sponsors complete the alternatives analysis phase by selecting a locally preferred alternative. During the preliminary engineering phase, project sponsors refine the design of the locally preferred alternative and its estimated costs, benefits, and impacts. When the preliminary engineering phase is completed and federal environmental requirements are satisfied, FTA may approve the project’s advancement into final design, after which FTA may recommend the New Starts project for a full funding grant agreement (FFGA). An FFGA establishes the terms and conditions for federal participation in a transit project. SAFETEA-LU established the Small Starts program within the capital investment program; the Small Starts program simplifies the evaluation and rating criteria and steps in the project development process for lower-cost projects.
According to FTA’s guidance, to qualify as a Small Starts project, a project must (1) meet the definition of a fixed guideway for at least 50 percent of the project length in the peak period or (2) be a corridor-based bus project with certain elements. FTA subsequently introduced a further streamlined evaluation and rating process for very low-cost projects within the Small Starts program, which it calls Very Small Starts. Very Small Starts projects must contain the same elements as Small Starts projects and also have the following three features: (1) be located in corridors with more than 3,000 existing riders per average weekday who will benefit from the proposed project, (2) have a total capital cost of less than $50 million for all project elements, and (3) have a per-mile cost of less than $3 million, excluding rolling stock (e.g., train cars). The project development process for Small Starts and Very Small Starts is a condensed version of the process for larger New Starts projects. For Small Starts, SAFETEA-LU set up a condensed process in which the preliminary engineering and final design phases are combined into one “project development” phase; see figure 2 below for a comparison to the New Starts project development process. When projects apply to enter project development, FTA evaluates and rates Small Starts projects on both project justification and local financial commitment criteria, but compared to New Starts projects, there are fewer statutorily prescribed project justification criteria for these projects. Very Small Starts projects also progress through a single project development phase and are evaluated and rated on the simplified project justification criteria. FTA may recommend Small Starts and Very Small Starts projects to Congress for funding once the projects have been approved to enter into project development and meet FTA’s “readiness” requirements. Congress makes final appropriations decisions on projects.
FTA provides funding for Small Starts and Very Small Starts projects in one of two ways: through project construction grant agreements (PCGA) or single-year construction grants when the New Starts funding request is less than $25 million and can be met with either a single-year appropriation or existing FTA appropriations that remain available for this purpose. Exempt projects follow the same project development process as New Starts projects, including alternatives analysis, preliminary engineering, final design, and construction. (See fig. 2.) Since these projects receive less than $25 million in federal funds, they are statutorily exempt from FTA’s evaluation and rating process. However, exempt projects must still meet other FTA federal grant requirements before receiving federal funds. Currently, the exempt category of funding will expire when a final regulation implementing the Small Starts provisions of SAFETEA-LU is complete. However, FTA has not yet issued this final regulation. For the next reauthorization of federal transit programs, FTA proposes in its fiscal year 2012 budget request to transform the Capital Investment Grant program to further streamline the process for new fixed-guideway and corridor-based bus projects. FTA proposes to discontinue the separate categories of New Starts and Small Starts (which includes Very Small Starts) projects in law and instead evaluate and rate projects under a single set of streamlined criteria. Further, FTA proposes to reduce the steps in the project development process. FTA also proposes that projects that require less than 10 percent of the project’s total anticipated cost and no more than $100 million in major capital investment funds be exempt from the evaluation and rating process. The Small Starts program was created to provide a more streamlined evaluation and rating process for lower-cost and less complex projects. 
SAFETEA-LU expanded the types of projects eligible under the new Small Starts program to include corridor-based bus projects, such as BRT. Thus, any new major capital project fitting the broader definition is eligible, whether it is a BRT, streetcar, or rail project. Although certain bus projects are now eligible for Small Starts funding, the law does not express a preference for any particular mode of transit, and the legislative history indicates that the program was to remain mode-neutral. At the time Small Starts was established, FTA created the category of Very Small Starts to further streamline the program for simple, low-risk projects that, based on their features, are expected to be cost-effective and to have sufficient land use to warrant funding. FTA officials stated that the features were developed and determined based on data that FTA had on existing projects. According to FTA, it also created Very Small Starts to be mode-neutral. Since fiscal year 2007, FTA has approved 29 Small Starts and Very Small Starts projects into project development and has recommended all 29 of them to Congress for funding; most are BRT projects, with a handful representing other transit modes. As table 2 shows, 25 of the 29 projects FTA recommended for funding are BRT projects: 6 of the 10 Small Starts projects and all 19 Very Small Starts projects are BRT projects. The number of successful applicants does not necessarily represent the interest of all potential project sponsors in Small Starts and Very Small Starts. It is difficult to establish the total number of projects that sponsors might be interested in developing because, according to FTA officials in one regional office, FTA encourages sponsors not to formally apply for entry into project development until their project is ready for approval.
The regional FTA officials cited two sponsors of potential Small Starts and Very Small Starts projects that have expressed interest in the program and met with their office but have not been able to submit a thorough and complete application for entry. In 2007, we also reported that FTA’s increased scrutiny of applications into New Starts was one of the likely reasons that the number of projects in the “pipeline” of potential projects had decreased over the past several years. It is also difficult to establish the number of project sponsors that might have considered applying to the program but decided against it, in part, because these sponsors may not have notified FTA of their intentions. Further, FTA officials we spoke with in headquarters and the regional offices are not aware of any project sponsors that withdrew from or were removed from Small Starts after being approved into project development. Therefore, because the types of projects (with respect to transit modes) that FTA can consider for funding are limited to those from sponsors that formally apply to the program, we do not have adequate information to determine whether FTA’s funding recommendations are mode-neutral. Although it is difficult to establish the number of “potential” project sponsors, we identified one sponsor of a streetcar project that initially sought federal funding through the Small Starts program before switching to other sources of federal funding, including the exempt category in the New Starts program and the Transportation Investment Generating Economic Recovery (TIGER) grant program. According to the project sponsor, it spent 2 years trying to gain entry into project development as a Small Starts project but had difficulty meeting the cost-effectiveness criterion. FTA evaluates Small Starts and New Starts projects using the same cost-effectiveness criterion, which measures effectiveness primarily in terms of travel time savings for transit riders.
As we have previously reported, this measurement may not favor certain projects, such as streetcars, that are not designed to create travel time savings, but instead to create other benefits, such as providing enhanced access to an urban center. According to the project sponsor, in early 2008, FTA advised the sponsor to seek funding as an exempt project and to see if a final regulation on the Small Starts program, as previously mentioned, would result in a change to how cost-effectiveness was formulated that would change the situation. In 2010, the sponsor received a TIGER grant and decided to remain in the exempt category. Several other streetcar projects have also received funding through TIGER grants. According to FTA officials, difficulty with the cost-effectiveness criterion was not the only issue which kept the streetcar project from entering the Small Starts program. In particular, the project sponsor was also unable to obtain high enough ratings on other criteria to offset the lower cost-effectiveness rating. Thus, the project was not able to obtain an overall rating that was high enough to advance in the Small Starts program as required by statute. Further, FTA officials told us that FTA has taken steps to address the problems that streetcar projects face in attempting to become Small Starts and New Starts projects. For example, FTA established the Urban Circulator Program in 2009 (which we will discuss later in our report) to provide funds to projects that aim to connect urban destinations and foster redevelopment. However, we have not assessed the actions that FTA has taken. Within Small Starts and Very Small Starts, the projects FTA recommended to Congress for funding vary in terms of the total project costs and capital investment program contribution. For the 10 Small Starts projects, the total project cost ranges from nearly $40 million to about $232 million, and the median cost is about $143 million. 
As shown in table 3, for half of the Small Starts projects, the total project costs are between $100 and $200 million. The capital investment program contribution to the projects’ costs ranges from about $28 million to $75 million, and the median capital investment program contribution is $75 million, as is the maximum capital investment program contribution. Seven of the 10 Small Starts projects were recommended for the maximum allowable capital investment program contribution. For the 19 Very Small Starts projects, the total project cost ranges from about $5 million to about $48 million and the median cost is about $29 million. The capital investment program contribution to the projects’ costs ranges from nearly $3 million to about $39 million and the median capital investment program contribution is about $20 million. As shown in table 4, nearly half of these projects were recommended for $20 to $30 million in capital investment program funds. FTA’s project development requirements for Small and Very Small Starts are similar in some respects, but FTA’s submission requirements for Small Starts’ project justification criteria are more extensive. For application into the project development phase, FTA requires sponsors of Small Starts and Very Small Starts projects to submit comparable information in some respects, such as project description and local financial commitment. As outlined in FTA’s Reporting Instructions for Small Starts and other guidance, both the number and type of requirements in these areas are similar for Small Starts and Very Small Starts projects. As shown in table 5, FTA has a similar number of requirements for Small Starts and Very Small Starts projects with regard to project description and maps and local financial commitment. Sponsors of Small Starts projects are also subject to submission requirements for project justification criteria, such as user benefit forecasts to meet the cost-effectiveness requirement. 
On the other hand, FTA does not require this information from sponsors of Very Small Starts projects. FTA officials told us they consider Very Small Starts projects to be inherently cost-effective because they do not exceed certain total and per-mile costs and meet minimum ridership thresholds (at least 3,000 per weekday). We and others have reported that the Small Starts project justification requirements can be complicated and require substantial resources to complete. However, FTA officials said they do not agree with this assessment of the work required to meet the project justification requirements. Table 5 also lists the requirements for New Starts projects, for comparison. In addition, the type of information required for project description and maps and local financial commitment is comparable for Small Starts and Very Small Starts projects beyond having a similar number of requirements. For local financial commitment, for example, Small Starts and Very Small Starts project sponsors are required to submit the same information to FTA for a simplified financial evaluation. Specifically, project sponsors submit a financial plan summary, 3 years of audited financial statements to demonstrate financial health, evidence that operations and maintenance costs for the proposed project are no greater than 5 percent of the sponsor’s systemwide operations and maintenance costs (to qualify for a simplified financial evaluation as opposed to the 20-year plans required for New Starts projects), and supporting financial documents. Besides the requirements listed in table 5, FTA also requires a project management plan for all Small Starts and Very Small Starts projects. FTA regulations and guidance outline the general requirements for a project management plan for all FTA-funded capital projects. The requirements include, for example, information on staff reporting relationships and responsibilities, recordkeeping processes, and the budget for managing the project.
FTA does not have specific guidance on project management plans for Small Starts and Very Small Starts projects. According to officials from FTA headquarters, FTA regional office staff scale the general requirements and level of detail needed for each project, based on its complexity and the sponsor’s level of experience managing capital improvement projects. For example, officials from one regional office we spoke to said that while all project management plans must include information on the scope, schedule, and cost of a project, less detail would be required for a less expensive project. Further, sponsors of both Small Starts and Very Small Starts projects may have additional requirements related to FTA regulations on project management oversight. FTA may assign project management oversight contractors (PMOC) to Small Starts and Very Small Starts projects that have a total cost over $100 million, are technically complex, or have less experienced sponsors, among other reasons. To support its oversight of a project, FTA can direct a PMOC to conduct various reviews of a project. For example, FTA can direct a PMOC to review a sponsor’s project management plan or assess a project’s readiness to enter the project development phase. For such reviews, the PMOC would typically review information that project sponsors are already required to submit for project development. For other PMOC reviews, such as a review of whether a project sponsor has the technical capacity and capability to complete its project, a project sponsor may be required to complete additional work, like participating in interviews with the PMOC and providing information on staffing levels and qualifications. While requirements are similar in several ways, FTA requires the sponsors of Small Starts projects to submit more information on a project’s justification than the sponsors of Very Small Starts projects. 
FTA evaluates and rates Small Starts projects on three project justification criteria prescribed in statute: cost-effectiveness, land use, and economic development. Therefore, FTA requires travel forecasts for the project’s opening year, estimates of user benefits like travel time savings, and land use plans, among other items. For Small Starts, travel forecasts are often generated by regional travel models but can be provided, in some circumstances, through a more straightforward spreadsheet analysis of data that, according to FTA, makes these calculations easier for Small Starts project sponsors. In our previous work, we reported that these requirements can require substantial resources and can create disincentives for sponsors to apply for funding. By contrast, FTA does not require such project justification information for Very Small Starts projects. According to FTA guidance, by containing certain FTA-defined features, such as having a total cost under $50 million and demonstrating that the project corridor already serves more than 3,000 riders per weekday, projects are “warranted” as being inherently cost-effective at producing significant mobility benefits and supporting land use and economic development. Several project sponsors and industry groups we spoke with told us that the project development requirements for Very Small Starts projects were streamlined and not overly burdensome. However, they also felt that such requirements for Small Starts projects were too similar to the requirements for New Starts projects and required a comparable amount of time and resources. As stated earlier, Congress established the Small Starts program to create a streamlined process for smaller, less complex capital transit projects, and FTA also created the Very Small Starts category with a similar aim. However, as described in our methodology, there is not a reliable quantitative way to evaluate the effect of changes in requirements on project development time frames.
FTA officials said they do not agree with GAO’s assessment of its data. (See app. III.) FTA headquarters and regional officials, as well as three project sponsors we spoke with, indicated that local issues, such as delays in finalizing funding or lack of agreement on a project’s route, often affect how long a project spends in development. Of the 29 Small Starts and Very Small Starts projects FTA has recommended for funding, 11 projects have received construction grants. These projects took from about 9 months to almost 4 years to complete the project development phase and receive a construction grant from FTA. While the amount of time it takes for a project to complete project development can be influenced by several factors, FTA officials and project sponsors told us that local issues can delay the progress of a Small Starts or Very Small Starts project. Of the 10 project sponsors we interviewed, half told us that they experienced delays during project development. Three of the five project sponsors that experienced delays said that local issues caused the delays. One project sponsor, for example, said that the lack of committed funds for the project from the state government caused a 6-month delay in the project’s development, while one other project sponsor said that its project was delayed while it addressed the public’s concerns on the project route. Another project sponsor faced delays due to local and federal issues; specifically, the project had to wait for passage of a local referendum providing operating funds for the project and had to do additional work because it received conflicting information from FTA on the work it needed to complete to fulfill federal environmental requirements. To examine the project development process, we discussed the advantages and disadvantages of the requirements with a variety of stakeholders, including 10 project sponsors, officials from FTA headquarters and 7 regional offices, and 2 industry groups. 
The perspectives of the stakeholders we spoke to depend, in part, on their experience with Small Starts and Very Small Starts projects, as well as any experience with New Starts projects. For example, 6 of the 10 project sponsors we spoke with had experience planning and implementing New Starts projects, while the other 4 sponsors had no such organizational experience. FTA officials in some regions told us that since the sponsors of many Small Starts and Very Small Starts projects were unfamiliar with the requirements for New Starts projects, these sponsors may not be aware of the difference in requirements or the degree to which some requirements had been scaled for their projects. FTA regional officials also had varying experience overseeing Small Starts and Very Small Starts projects. Five regional offices had overseen only 1 Small Starts or Very Small Starts project, while one regional office had overseen 13 projects. Stakeholders we spoke with cited advantages related to FTA’s project development requirements for both Small Starts and Very Small Starts projects. Several stakeholders we interviewed—five project sponsors, one industry group, and FTA headquarters officials and officials from two regions—said that the project development requirements for Very Small Starts projects were straightforward and not overly burdensome and, as a result, that Very Small Starts projects have a streamlined process. Specifically, three project sponsors told us that an advantage of Very Small Starts was the minimal data analysis requirements, specifically travel forecasting. One of these sponsors said that its Very Small Starts project required less travel analysis and had a faster application process compared to New Starts projects that it had previously completed. Another project sponsor told us that FTA’s use of a single-year construction grant instead of a multiyear PCGA helped to expedite the project development process.
For this grant, the project sponsor was able to apply for the grant through FTA’s electronic grant system rather than negotiate the terms of a PCGA with FTA. FTA may use a single-year construction grant, rather than a PCGA, for projects with sponsors that request less than $25 million and whose request can be met with a single-year appropriation or existing FTA appropriations that remain available for that purpose. Seven stakeholders we spoke with, including officials from three FTA regional offices and four project sponsors, told us that the project development requirements help contribute to the success of a project through the development of detailed plans and examination of long- term costs. As a result, project sponsors are able to identify potential challenges and better communicate project details to the public. For example, one project sponsor and officials from one regional office told us that they were better prepared to respond to public questions on the project’s design and funding after completing the project development requirements. Officials from two FTA regional offices and five project sponsors told us that the project management plan, in particular, is a valuable tool to help organize a project’s implementation, particularly for project sponsors that have not previously implemented capital projects. Moreover, two project sponsors we spoke with said that they would use a project management plan even if it were not required by FTA. However, three project sponsors told us that project management plan requirements were not scaled to fit their smaller, less complex projects. As described above, FTA does not have specific project management plan guidance for Small Starts and Very Small Starts projects but scales the requirements in the general guidance to fit each project’s size and complexity. 
For example, officials from one regional office said that a project management plan may not include a section on real estate acquisition if the project sponsor did not have to purchase property to carry out the project. In September 2009, FTA issued an Advance Notice of Proposed Rulemaking on project management oversight regulations, which included the guidelines for project management plans. The current regulations predate the creation of the Small Starts program and Very Small Starts category. In the notice, FTA specifically seeks comment on whether sponsors of Small Starts projects should establish less detailed project management plans than New Starts projects. Two Small Starts project sponsors said that the single project development phase in the Small Starts program was an advantage. According to one project sponsor, the single phase eliminated the need to stop design work on the project while applying for and receiving approval from FTA to enter another phase, as can be the case with the two-stage process for New Starts projects. According to FTA officials, FTA allows project sponsors to continue design on a project while waiting for approval, as outlined in FTA’s 2006 program guidance. In past studies of the New Starts program, GAO and Deloitte presented the use of a single project development phase for all New Starts projects as one option to help expedite the New Starts process. In its reauthorization proposal, as identified in the fiscal year 2012 budget request, FTA proposed that all projects use this single-phase approach as one way to transform the New Starts program, balancing the need to advance projects in a reasonable time frame with being a steward for federal transit dollars. Some stakeholders we spoke with also reported disadvantages of FTA’s project development requirements. 
As described below, stakeholders that have experience with New Starts projects said that the Small Starts project development requirements, which were to be streamlined, are too similar to those for New Starts projects. These comments suggest that, from some stakeholders’ perspective, Small Starts could be further differentiated from New Starts. However, as stated in our previous work on the New Starts program, FTA’s oversight of projects must strike an appropriate balance between expediting project development and maintaining a rigorous and systematic process to distinguish among projects. Sponsors from three Small Starts projects we spoke with were assigned PMOCs. All three project sponsors said that their PMOCs provided constructive comments and assistance during project development; however, all three felt that the PMOCs’ reviews should have been better scaled to the size and complexity of their Small Starts projects. Based on their experience developing both a New Starts and a Small Starts project, two of the project sponsors told us that the PMOC reviewed their Small Starts project as though it were a New Starts project. As mentioned above, FTA issued an Advance Notice of Proposed Rulemaking on project management oversight regulations in September 2009, which, upon completion of the rulemaking process, could affect PMOC oversight of Small Starts projects. In the notice, FTA seeks comments on how it should best use PMOCs in overseeing projects and the circumstances, such as the complexity of a project, under which the agency may assign a PMOC to a project. Two Small Starts project sponsors said that the length of the review process for PCGAs was a disadvantage. 
After FTA and a project sponsor negotiate a PCGA, it must go through multiple levels of review, including the Office of the Secretary of Transportation, the Office of Management and Budget, and Congress. By statute, a PCGA is subject to a 60-day congressional review period. According to one project sponsor, a reduction in the PCGA review time would be beneficial and help them implement their projects more quickly. In a recent congressional hearing, the FTA administrator said that the agency would ask Congress to consider shortening this review period to 30 days when the New Starts program is reauthorized. According to an industry group and one project sponsor we spoke with, New Starts and Small Starts projects entail comparable levels of work. Officials from the industry group told us that some of its members therefore feel it is better to apply as a New Starts project and seek more funding rather than apply as a Small Starts project and face constraints on the project’s total cost and capital investment program share. We have previously reviewed FTA’s Small Starts program and reported on options that exist to expedite the New Starts project development process. In 2007, for example, we reported that FTA could take additional action to further streamline the Small Starts program. FTA officials acknowledged that the requirements could be further streamlined and took steps to do so, such as reducing duplicative requirements and developing Small Starts-specific reporting templates. Exempt projects are not evaluated and rated or recommended for funding by FTA; exempt projects receive under $25 million in federal assistance and are typically congressionally designated. Since FTA does not evaluate and rate these projects, they are subject to fewer FTA requirements. However, FTA requires exempt projects to submit information similar to some requirements for Small Starts and Very Small Starts projects. 
This consists of information on a project’s background, which includes a description of the project as well as site and vicinity maps; costs, such as worksheets that organize the project’s capital costs by year of expenditure and type of expenditure, like vehicles and stations, stops, and terminals; and local financial commitment, which includes a financial plan summary and supporting financial documentation. According to its guidance, FTA does not have to evaluate and rate exempt projects. However, the projects still have to be approved by FTA into preliminary engineering and final design. FTA’s approval for advancing exempt projects is based on compliance with planning, environmental, and project management requirements which apply to all federal-aid transit projects. FTA officials said that, as it relates to exempt projects, they mainly determine whether project sponsors possess a level of technical and financial capacity that is appropriate for the scope of the project before advancing an exempt project into the next stage of development. For example, FTA must determine whether a project has secured at least half of its local funding prior to advancing to the final design phase of project development. In terms of other requirements, FTA requires the sponsor of an exempt project to create and submit a project management plan to describe its budget, processes, procedures, and schedule for managing the project. FTA may also assign a PMOC to an exempt project with a total project cost over $100 million, technical complexity, or a sponsor with no previous experience implementing capital transit projects. As table 6 shows, a total of nine exempt projects of various modes and total costs have entered the New Starts pipeline since SAFETEA-LU was enacted. Within each mode, the exempt projects vary in characteristics, such as scope. 
For example: For the bus projects, one extends a transitway with dedicated bus-priority/high-occupancy-vehicle lanes, bikeways, and sidewalks; another establishes initial components and infrastructure for a BRT system that includes dedicated bus lanes, transit stations, and a real-time passenger information system. For rail projects, one project constructs a new driverless, automated rail system between an existing transit station and an airport; another project builds a new transit station along an existing heavy rail line. See appendix IV for additional information on each of these exempt projects. In addition to the nine exempt projects listed in table 6, on March 4, 2011, FTA selected five exempt projects to receive capital investment program discretionary grants under FTA’s newly created Urban Circulator Program. The grants are to help state and local governments finance new fixed-guideway capital projects, including the acquisition of property, the initial acquisition of rolling stock, the acquisition of rights-of-way, and relocation. The projects fall within the exempt category because the maximum grant for each selected project must be less than $25 million and make up no more than 80 percent of the project’s total capital cost. These are projects such as streetcars that provide a transportation option to connect urban destinations and foster the redevelopment of urban spaces into walkable mixed-use, high-density environments. Table 7 lists the five Urban Circulator projects FTA selected to receive funds. According to FTA, a total of 65 applicants requested $1.1 billion, resulting in high competition for the $130 million made available. FTA ran a competition for these funds and evaluated project proposals based on criteria such as livability, sustainability, economic development, and leveraging of public and private investments, in line with the Department of Transportation’s livability initiative that began in 2009. 
According to FTA, the projects selected will provide mobility choices, improve economic competitiveness, support existing communities, create partnerships, and enhance the value of communities and neighborhoods. Although stakeholders cite a need for the exempt category, projects considered “exempt” from the statutory evaluation and rating process were eliminated in SAFETEA-LU, pending FTA’s publication of a final regulation implementing Small Starts, which has not yet occurred. Until that regulation is published, FTA officials said that the agency will maintain an exempt category. The stakeholders with whom we spoke want to continue this category of funding because they said that a key advantage of the exempt category is that it serves as a useful source of funding for “unique” or atypical transit projects. For example, four project sponsors that we spoke with indicated that their projects may not have competed well with other projects if evaluated against the New Starts criteria and in competition with more typical New Starts transit projects, like light rail lines. Yet, they believe their projects fill a transportation gap for the communities they serve. Compared to a new commuter or light rail line, such exempt projects are not well suited to the New Starts evaluation and rating criteria—such as cost-effectiveness measured by travel time savings to users. However, we do not have enough information to determine how these exempt projects would have fared against the New Starts criteria. In its 2012 budget request, FTA proposes to continue the exempt category in the next surface transportation reauthorization. According to its fiscal year 2012 budget request, FTA is proposing to raise the amount of federal funding available to exempt projects, in conjunction with other changes to the New Starts program. 
Specifically, projects could be “exempt” from the evaluation and rating process if the project sponsor is seeking less than $100 million in § 5309 Capital Investment Grant program funds and the request represents less than 10 percent of the project’s anticipated total capital cost. According to FTA, the main reason for continuing an exempt category is the awareness that if FTA provides only a small percentage of a project’s total cost, there is a correspondingly lower amount of risk to the federal government; at the same time, other entities, like state and local governments, provide a greater amount of funding and assume a higher amount of risk. Because of the lowered risk to the federal government, a project would be exempt from the more stringent federal oversight (i.e., being evaluated and rated against criteria) that applies to other projects, while the other funding partners would likely conduct more due diligence to protect their increased investment. Just as they are now, these projects would only be subject to basic federal grant requirements and would not be evaluated and rated by FTA. Given that these projects are not rated and evaluated, the project sponsors we talked with considered this one of the major benefits of this category, because it potentially decreases the amount of time spent in project development and project costs. SAFETEA-LU requires project sponsors to conduct a before-and-after study for all New Starts projects. Additionally, FTA requires before-and-after studies to be conducted for all Small Starts projects, in accordance with FTA guidance. Although FTA and the project sponsors we spoke with generally view the exempt category as beneficial, these projects are not validated with studies, as are other New Starts and Small Starts projects. For New Starts and Small Starts projects, the before-and-after study describes the impact of the project on transit services and ridership and compares the predicted and actual project performance. 
Additionally, Very Small Starts project sponsors must complete a simplified before-and-after study on the project’s actual scope, costs, and ridership. However, according to FTA officials, exempt project sponsors do not submit such information on completed projects. As we have previously reported, information about the outcomes of completed transit projects can be used to better determine what a particular project accomplished and improve decisions on other projects. Stakeholders we interviewed reported a few disadvantages. Most notably, FTA has limited guidance on exempt projects. For example, FTA has a checklist that shows what is required for exempt projects, as opposed to New Starts and Small Starts projects. However, one project sponsor said they felt there was a lack of guidance for exempt projects and that their consultant helped to navigate the requirements in lieu of more thorough guidance. Stakeholders, including officials from FTA and project sponsors, also said that exempt projects can face funding uncertainties. Some stakeholders said that exempt projects have no guarantee of funding beyond what has been appropriated by Congress, and a project’s exempt funding may not be appropriated all at once. One project sponsor told us that since only a portion of its exempt funding has been appropriated, it has had to leverage local funds to advance the project until more exempt funds become available. We provided a draft of this report to the Secretary of Transportation for review and comment. DOT officials provided us with clarifying and technical comments, which we incorporated throughout the report as appropriate. We are sending copies of this report to the Secretary of Transportation, the Administrator of the Federal Transit Administration, and appropriate congressional committees. This report is also available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any questions about this report, please contact me at (202) 512-2834 or stjamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix V. Two recent GAO reports on the New Starts program contained recommendations that were open when we began our work on this review in December 2010. This appendix lists those reports and updates the Federal Transit Administration’s (FTA) progress in implementing these recommendations. Public Transportation: Improvements Are Needed to More Fully Assess Predicted Impacts of New Starts Projects, GAO-08-844 (Washington, D.C.: July 25, 2008). This report made five recommendations to the Department of Transportation (DOT) to improve the New Starts evaluation process and the measures of project benefits, which could change the relative ranking of projects. Table 8 lists the five recommendations with information on the status of each recommendation, as of July 2011. Public Transportation: Better Data Needed to Assess Length of New Starts Process, and Options Exist to Expedite Project Development, GAO-09-784 (Washington, D.C.: Aug. 6, 2009). This report made two recommendations to DOT to improve the New Starts program. Table 9 lists these recommendations with information on the status of each recommendation, as of July 2011. FTA has awarded construction grants to 11 of the 29 Small Starts and Very Small Starts projects recommended for funding to Congress. Table 10 provides information on each project, including the date FTA approved the project into the project development phase and the date FTA obligated funds for construction. According to FTA officials, the agency typically recommends a Small Starts or Very Small Starts project for funding the first year it is in the project development phase, which is sooner than when the agency recommends New Starts projects for funding. 
After a project is recommended for funding, FTA makes firm funding commitments, such as those in a project construction grant agreement, when the project’s development has reached a point where its scope, costs, benefits, and impacts are considered firm and final. To describe the legislative and program history for the creation of Small Starts and Very Small Starts, respectively, we analyzed the Safe, Accountable, Flexible, Efficient Transportation Equity Act-A Legacy for Users (SAFETEA-LU), congressional reports, testimonies before Congress, and member floor statements on the program from 2004 to 2007, the years leading up to and after the passage of SAFETEA-LU. To describe the program history behind the Very Small Starts category, we similarly analyzed Federal Register notices and program guidance issued by FTA. We also interviewed FTA officials on the creation of Very Small Starts. To provide information on Small Starts and Very Small Starts projects, including total project cost, mode of transit, and other characteristics, we collected and analyzed project data, including grant data, compiled by FTA to determine the cost, mode of transit, and other characteristics of Small Starts and Very Small Starts projects. We included projects that had been recommended for funding to Congress since the passage of SAFETEA-LU (August 10, 2005). We also sought to include projects that were (1) in the project development phase but not yet recommended for funding or (2) in the process of applying to enter this phase. Through discussions with FTA staff and analysis of FTA Annual Reports on Funding Recommendations, we determined that no projects met the above conditions at the time of our review. To verify and assess the reliability of the data compiled by FTA, we compared it to project data contained in FTA’s Annual Reports on Funding Recommendations for fiscal years 2007 through 2012 and information from project sponsors we interviewed. 
We resolved any discrepancies with FTA headquarters staff, and we determined that the data were sufficiently reliable for our purposes. To describe the project development requirements for Small Starts and Very Small Starts projects, we collected and summarized relevant laws, such as SAFETEA-LU, as well as FTA circulars and policy guidance for the Small Starts program, including the 2007 Updated Interim Guidance and Instructions, 2010 Reporting Instructions for the Section 5309 Small Starts Criteria, and Side-by-Side of Required Information for New Starts/Small Starts Evaluation and Rating. To determine the views of stakeholders on the advantages and disadvantages of these requirements, we conducted semistructured interviews with FTA officials from headquarters and regional offices, sponsors of projects that have been recommended for funding, and transit industry associations, such as the American Public Transportation Association. We selected a judgmental sample of 10 out of 29 projects to ensure variation in the project’s geographic location, category of funding (i.e., Small Starts or Very Small Starts), mode of transit, total project cost, and fiscal year recommended for funding. We also interviewed FTA staff at the seven regional offices that corresponded with the judgmental sample of project sponsors. Table 11 lists the Small Starts and Very Small Starts project sponsors we interviewed for our review. We used stakeholder observations and experiences, as there is not a reliable quantitative way to evaluate the impact of changes in the requirements for Small Starts and Very Small Starts projects on project development time frames compared to New Starts projects for two reasons. First, only a small number of Small Starts (including Very Small Starts) projects—11 of 29 recommended for funding—have completed the project development phase and received a construction grant. 
Second, in past work we found that FTA and project sponsor data on time frames for New Starts projects (such as entry into preliminary engineering and final design) are not reliable. However, FTA officials said they do not agree with GAO’s assessment of its data. Given these reasons, we did not include such a comparison in our methodology for this review. To describe the project development requirements for exempt projects, we summarized relevant laws, regulations, and FTA guidance for exempt projects. We also interviewed officials from FTA headquarters and regional offices. Our review of exempt projects included projects selected to receive funding that entered the New Starts pipeline (i.e., approved into the preliminary engineering phase) since SAFETEA-LU was enacted. To describe the types of exempt projects that have entered the New Starts pipeline since the passage of SAFETEA-LU, we collected, verified, and analyzed data from FTA. We compared the data from FTA to project data available in FTA’s Annual Reports on Funding Recommendations for fiscal years 2007 through 2012 to assess its reliability. There were a total of nine exempt projects that entered the New Starts pipeline since 2005. We worked with FTA to resolve any discrepancies and found the data sufficiently reliable for the purposes of this report. To determine the views of stakeholders on the advantages and disadvantages of the exempt category, we conducted semistructured interviews with FTA officials (headquarters and regional office staff), sponsors of exempt projects that received funding, and transit industry associations. We selected a judgmental sample of five exempt project sponsors to ensure variety in the projects’ geographic location, mode of transit, project cost, and the fiscal year the projects were approved into the New Starts preliminary engineering phase. Table 12 lists the exempt project sponsors we interviewed for our review. 
We conducted this performance audit from December 2010 through August 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following provides detailed descriptions, including total costs, of the nine exempt projects of various modes that have entered the New Starts pipeline since SAFETEA-LU was enacted. The descriptions are primarily from FTA’s latest annual reports or as noted. There are four rail projects, a ferry project, a streetcar project, and three bus projects. These projects are listed in the order they entered the New Starts pipeline, beginning with the earliest. The City of Stamford, Connecticut, is proposing to extend Phase I of its Urban Transitway, currently in operation, for an additional 0.6 miles along Myrtle Avenue to U.S. Route 1. According to FTA’s Annual Report on Funding Recommendations, the facility will accommodate new dedicated bus-priority/high-occupancy-vehicle lanes in both directions, as well as bike pathways and sidewalks. Signal priority treatments at intersections will give local and commuter buses priority. Bus stops in the corridor will include real-time passenger displays. The total capital cost for the Stamford Urban Transitway Phase II project is estimated at $48.3 million, with a proposed New Starts share of $24.7 million. FTA approved the project into preliminary engineering in May 2006 and into final design in November 2007. The Maine Marine Highway Project, sponsored by the Maine Department of Transportation, is for the construction of a ferry boat—the Governor Curtis. 
As proposed, this vessel will expand the capacity of the Maine State Ferry Service to provide transportation between Rockland and the off-shore islands in Penobscot Bay. It will also free up another vessel to be retrofitted and serve as a backup vessel; currently, no vessel of this size is available as a backup. The new vessel will hold 250 passengers and approximately 20 cars. The New Starts share is $1.5 million of an estimated total capital cost of $10.4 million. FTA concurrently approved the project into preliminary engineering and final design in May 2006. The Jacksonville Transit Authority is planning a regional bus rapid transit system for the Jacksonville metropolitan area. The Downtown Transit Service Enhancement Project is the first phase to be developed and will serve as the center hub of the system. The 8.4-mile project includes increased bus service, semi-exclusive reserved bus lanes, 22 stations and stops, traffic signal priority, and real-time traveler information. The project is estimated to cost $15.6 million, which includes a New Starts share of $9.4 million. FTA approved the project into preliminary engineering in December 2006 and into final design in August 2010. The Massachusetts Bay Transportation Authority proposes to build a new Assembly Square Station on the existing Massachusetts Bay Transportation Authority heavy rail Orange Line between the existing Sullivan Square and Wellington Stations in the City of Somerville, Massachusetts. No additional Massachusetts Bay Transportation Authority rail cars are needed to provide service to this new station. The total capital cost of the Assembly Square Station is estimated to be $47.7 million with a proposed New Starts share of $24.9 million. FTA approved the project into preliminary engineering in September 2008 and into final design in November 2010. 
According to FTA’s latest description of this project, the Lackawanna Cutoff project involves the restoration of commuter rail service from Port Morris, New Jersey, to Andover, New Jersey—a distance of 7.3 miles. The Lackawanna Minimum Operating Segment is a short rail line at the outer end of New Jersey Transit’s existing Montclair/Boonton Line. The alignment consists of the construction of a single track along the existing right-of-way purchased by the state of New Jersey in 2001. One station will be constructed at the terminus in Andover. The project will utilize the existing Port Morris Yard for storage and maintenance services. New Jersey Transit’s existing rolling stock will be used to operate the service. The estimated capital cost is $36.6 million with a proposed New Starts share of $18.2 million, consisting primarily of New Starts funds. New Jersey Transit has already received the full amount of appropriations necessary for this project. The City of Tucson Department of Transportation proposes to build a streetcar project in the downtown Tucson Urban Corridor. The project includes the purchase of eight streetcar vehicles. The streetcars will operate at grade on surface streets in mixed traffic in most locations, with some reserved right-of-way where available. Track placement will primarily be in the center of shared travel lanes with stations located either in the median or on the outside of roadways. Station platforms will be designed so that they can be used by buses as well as by streetcars, where possible. The total capital cost of the project is estimated to be $196.5 million; the current New Starts share is $5.8 million. FTA approved the Tucson Modern Streetcar Project into preliminary engineering as an exempt project in December 2008 and into final design in September 2009. 
The Bay Area Rapid Transit’s Oakland Airport Connector is a 3.2-mile rail project to connect the Oakland International Airport to the Bay Area Rapid Transit’s Coliseum Station and the rest of the transit system. According to the project sponsor, it will be a driverless, automated rail system to replace bus service and provide more integrated service to the airport. To construct the project, Bay Area Rapid Transit is using a design-build-operate-maintain project delivery approach. The estimated $492.7 million project will be funded using several funding sources, including $24.9 million in federal New Starts funding. FTA concurrently approved the project into preliminary engineering and final design in December 2009. The Rhode Island Department of Transportation proposes to build a new Pawtucket/Central Falls Commuter Rail Station on the existing Massachusetts Bay Transportation Authority Providence-to-Boston commuter rail route, which follows Amtrak’s Northeast Corridor. The station would be constructed in Pawtucket near the site of a station that was closed in 1959, between the existing South Attleboro and Providence stations. The total capital cost of the Commuter Rail Station is estimated to be $53.6 million with a proposed New Starts share of $24.9 million. FTA approved the project into preliminary engineering as an exempt New Starts project in August 2010. The Rhode Island Department of Transportation expects to begin final design in 2013, construction in 2015, and revenue operations in 2018. According to FTA, the Crystal City-Potomac Yard project is a 3.1-mile bus transitway project with eight stops. It includes 1.5 lane-miles of exclusive transit right-of-way (an independent roadway for buses), 1.3 miles of on-street dedicated bus lane, and 0.3 lane-miles of mixed traffic operation. Arlington County officials said this project is not a bus rapid transit project, which has different features such as large distances between stations. 
Instead, this bus transitway project provides limited local bus service that will replace the current standard local bus service. The purpose is to provide high-capacity and high-quality bus transit services in the 5-mile corridor between the Pentagon (and Pentagon City) in Arlington County and the Braddock Road Metrorail Station in the City of Alexandria. The total capital cost of the bus transitway is estimated to be $38.1 million with a proposed New Starts share of $980,000. FTA approved the project into preliminary engineering as an exempt New Starts project in August 2010. In addition to the contact named above, Catherine Colwell, Assistant Director; Lauren Calhoun; Dwayne Curry; Robert Heilman; Terence Lam; Joanie Lofgren; Sara Ann W. Moessbauer; and Amy Rosewarne made key contributions to this report. Public Transportation: Use of Contractors is Generally Enhancing Transit Project Oversight, and FTA is Taking Actions to Address Some Stakeholder Concerns. GAO-10-909. Washington, D.C.: September 14, 2010. Public Transportation: Better Data Needed to Assess Length of New Starts Process, and Options Exist to Expedite Project Development. GAO-09-784. Washington, D.C.: August 6, 2009. Public Transportation: New Starts Program Challenges and Preliminary Observations on Expediting Project Development. GAO-09-763T. Washington, D.C.: June 3, 2009. Public Transportation: Improvements Are Needed to More Fully Assess Predicted Impacts of New Starts Projects. GAO-08-844. Washington, D.C.: July 25, 2008. Public Transportation: Future Demand Is Likely for New Starts and Small Starts Programs, but Improvements Needed to the Small Starts Application Process. GAO-07-917. Washington, D.C.: July 27, 2007. Public Transportation: Preliminary Analysis of Changes to and Trends in FTA’s New Starts and Small Starts Programs. GAO-07-812T. Washington, D.C.: May 10, 2007. Public Transportation: New Starts Program Is in a Period of Transition. GAO-06-819. 
Washington, D.C.: August 30, 2006. Public Transportation: Preliminary Information on FTA’s Implementation of SAFETEA-LU Changes. GAO-06-910T. Washington, D.C.: June 27, 2006. Public Transportation: Opportunities Exist to Improve the Communication and Transparency of Changes Made to the New Starts Program. GAO-05-674. Washington, D.C.: June 28, 2005.

The Federal Transit Administration's (FTA) Capital Investment Grant program funds, among other things, projects for fixed-guideway systems--often called New Starts projects. In 2005, the Safe, Accountable, Flexible, Efficient Transportation Equity Act-A Legacy for Users (SAFETEA-LU) established a category of lower-cost projects--Small Starts--which expands project eligibility and offers streamlined requirements. FTA subsequently created the Very Small Starts category with a further streamlined process for very low-cost projects. Exempt projects, those receiving under $25 million and typically designated by Congress, also have a simplified process. As part of GAO's annual mandate to review New Starts, this report describes (1) the history of Small Starts and Very Small Starts and the type of projects FTA recommended for funding; (2) the project development requirements for Small Starts and Very Small Starts and what stakeholders identify as the advantages and disadvantages of the requirements; and (3) the project development requirements for exempt projects, the projects selected to receive funding, and what stakeholders identify as the advantages and disadvantages of this category. Among other things, GAO analyzed laws, regulations, and agency guidance, and interviewed FTA headquarters staff and stakeholders from 7 FTA regional offices, 15 projects, and 2 industry groups. DOT officials reviewed a draft of this report and provided technical comments, which GAO incorporated as appropriate. 
When SAFETEA-LU established the Small Starts program, it streamlined project development requirements and project evaluation and rating criteria, and authorized certain corridor-based bus projects--like bus rapid transit systems-- to receive transit capital funding. Furthermore, FTA created Very Small Starts within Small Starts to further streamline requirements for projects that are simple and low-risk, based on cost and other features. FTA has mostly recommended bus projects for funding but has also recommended light rail, commuter rail, and streetcar projects. Overall, FTA has recommended 10 Small Starts and 19 Very Small Starts projects for funding. These projects' total costs vary from about $5 million to about $232 million, and FTA has recommended capital investment program funds ranging from nearly $3 million to $75 million for these projects. FTA's project development requirements for Small Starts and Very Small Starts include costs and financial summaries. While all sponsors submit similar information in some respects, such as financial summaries, FTA only requires sponsors of Small Starts projects to submit information on a project's expected benefits, like travel forecasts. Some stakeholders GAO spoke with said an advantage of FTA's requirements for Very Small Starts is that they are appropriately scaled and not overly burdensome for smaller projects. For example, about half of the stakeholders experienced with Very Small Starts told GAO that the requirements were straightforward and that project sponsors were able to meet them quickly without many problems. Four project sponsors and an industry group said that a disadvantage of the Small Starts requirements is that they are too similar to those for New Starts, even though Small Starts projects have a lower total cost and are less complex. 
Generally, stakeholders said that the requirements for both Small Starts and Very Small Starts help project sponsors fully develop and plan projects by helping identify potential problems. Stakeholders' perspectives depend, in part, on their degree of experience with these programs, which ranged from none to several previous New Starts or Small Starts projects. Exempt projects, typically congressionally designated and below the $25 million threshold, are not evaluated and rated. Exempt projects are subject to fewer FTA requirements, which mainly focus on the sponsor's ability to carry out its project. Nine exempt projects have entered the New Starts pipeline since the last reauthorization of the New Starts program in 2005. These projects vary in terms of mode and scope. For example, one project extends a bus transitway with dedicated vehicle lanes, and another project builds a new station on an existing rail line. The total costs for these projects vary from about $10 million to about $493 million, and the federal contributions range from about $1 million to nearly $25 million in capital investment program funds. Four project sponsors GAO spoke with said that the exempt category provides a useful source of capital funding for atypical transit projects that solve local transportation problems. In its 2012 budget request, FTA proposes that the exempt category, which is set to expire under current law, be continued in the next surface transportation reauthorization.
During the 1970s, the Postal Service invested in data centers and mainframe computers to support administrative functions, such as personnel, accounting, and payroll processing. During the 1980s, the Service expanded its IT network to cover essentially all facets of postal operations. However, as networking technology improved, the Service realized that it no longer needed to colocate some of its IT functions with the hardware processors. This presented the Service with the opportunity to reduce costs and improve efficiencies by consolidating some of its IT functions. The practice of consolidating IT functions is consistent with industry trends, as companies strive to use new technologies to improve operations at less cost. Before 1993, the Service had six computer centers, each with a mainframe computer, located in New York City, New York; St. Louis, Missouri; Raleigh, North Carolina; Wilkes-Barre, Pennsylvania; San Mateo, California; and Minneapolis, Minnesota. In 1993, the Service closed its IT center located in New York and transferred all functions performed at the center to its postal IT centers located in San Mateo and Eagan. Some employees affected by the New York IT Center closure relocated to San Mateo and now face the prospect of being affected by yet another postal IT center closure. Additionally, since 1993, the Service has transferred mainframe computer operations performed in St. Louis, Raleigh, and Wilkes-Barre to the San Mateo and Eagan IT Centers. Other IT functions continue to be performed at the Service's IT centers located in St. Louis, Raleigh, and Wilkes-Barre. The Service's current IT structure includes two Information Service Centers, located in Eagan and San Mateo. Each of these Information Service Centers houses a Computer Operations Service Center, a Management Support Center, an Accounting Service Center, and an Integrated Business Systems Solutions Center.
The Computer Operations Service Centers operate the Service's mainframe computers supporting various postal activities. The Integrated Business Systems Solutions Centers maintain and enhance software applications for postal business systems. The Management Support Centers provide facility support to the other centers. The Accounting Service Centers are operated by the Service's Finance Department and provide national accounting services. The Accounting Service Centers are not included as part of the IT Department's proposed consolidation of the San Mateo IT Center. Figure 1 shows the San Mateo IT Center, which is located at 2700 Campus Drive, San Mateo, CA, in the Bay Area. The building is a three-story structure plus a basement, contains approximately 160,000 square feet of space, and is located on 12.4 acres. The building was constructed in 1976 and was purchased by the Service in 1983 for about $13 million. After purchasing the building, the Service spent an additional $14 million on renovations and, during the last 10 years, a further $3.7 million on major upgrades. The consolidation plan currently under consideration provides that San Mateo's computer operations, management support functions, and some of its software functions would be transferred to Eagan, along with some employees. The plan further provides that San Mateo's remaining software support functions would be transferred to postal IT centers located in St. Louis and Wilkes-Barre, along with some employees. As previously stated, the plan does not include transferring the accounting functions currently performed at San Mateo. Instead, the accounting functions, along with their complement of 102 employees, are to be relocated into leased space in the Bay Area.
In October 2000, San Mateo's IT functions had an authorized complement of 282, comprising 80 EAS positions, 200 bargaining-unit positions, and 2 Postal Career Executive Service (PCES) positions. Seventy-two EAS and 172 bargaining-unit positions were filled as of March 2002. Six of the IT bargaining-unit employees are to be transferred to the Accounting Service Center and would not be immediately affected if the Service decides to close the San Mateo IT Center. The remaining 166 bargaining-unit employees and all EAS employees would be directly affected by the closure. Under the proposed consolidation, all 166 bargaining-unit employees would be offered relocation to another postal IT center, in keeping with the no-layoff clause in their collective bargaining agreement. About half of the EAS employees would be offered jobs at another postal IT center. For the employees who relocate, the Service would cover basic relocation costs. However, covered costs differ for bargaining-unit and EAS employees. Relocation benefits for bargaining-unit employees are specified in the negotiated collective bargaining agreement between the Service and the APWU. Examples of covered expenses include the cost of one advance house-hunting trip, the movement and storage of household goods, and 30 days of temporary quarters. For EAS employees, the Service covers the cost of three house-hunting trips, the movement and storage of household goods, and 60 days of temporary quarters. Additionally, the Service provides EAS employees a more generous expense allowance and assistance in selling and purchasing their homes. Employees who do not have the option of relocating, or who choose not to do so, may retire, provided they meet the minimum age and service requirements for retirement. Postal officials have also indicated that if the decision is made to close the San Mateo IT Center, the Service will seek early retirement authority from the Office of Personnel Management.
If this authority is granted, the Service plans to make the option of voluntary early retirement available to all eligible employees. Employees who do not relocate, retire, or find other employment on their own will likely be involuntarily separated from the Service. Employees who are involuntarily separated will be eligible for severance pay. The Service has experienced financial problems in recent years, exacerbated by the economic slowdown and the use of the mail to transmit anthrax. In April 2001, we placed the Service's transformation efforts and long-term outlook on our high-risk list. We included the Service on our high-risk list to focus attention on the dilemmas facing the Service before the situation escalates into a crisis in which the options for action may be more limited and costly. While the Service recently developed a transformation plan to address its financial difficulties and has been able to cut costs, its vast infrastructure entails significant fixed costs that are difficult to cut in the short term. Rationalizing the Service's infrastructure will be key as it strives to improve its fiscal situation. This means that the Service may have to close or consolidate certain retail, mail processing, and administrative facilities to cut costs and improve performance. These closures and consolidations will undoubtedly lead to public concerns about the economic effects such actions will have on communities and employees. The Service is weighing these difficult issues as it considers whether to close the San Mateo IT Center. To describe the process that the Service is following to decide whether to close the San Mateo IT Center and consolidate its functions into other postal IT centers, we interviewed postal managers and reviewed available supporting reports and documents.
We reviewed the Service's investment policies and procedures manual, which provides guidance for preparing, reviewing, and approving a DAR. We also reviewed the Service's DAR, prepared by the IT Department, which included economic analyses and justification of the proposal to close the San Mateo IT Center, as well as an alternative proposal that the IT Department considered and subsequently rejected. Further, we reviewed the OIG's November 2000 audit report on the IT Department's proposal and economic analysis. For contrast, we reviewed (1) the DAR the IT Department prepared in support of the decision to build a new IT facility in Eagan that would allow the Service to consolidate IT functions performed at various sites in the Minneapolis area and provide for future incorporation of IT functions and (2) documentation the Service used to support its decision to close the New York IT Center in 1993. We discussed the steps the Postal Service followed in preparing its proposal to close the San Mateo IT Center with the Service's vice president, IT; discussed the Service's management process for reviewing the proposal with the Service's manager, Capital and Program Evaluation, and manager, Facilities, Headquarters; and met with representatives from the OIG to discuss their audit of the Service's proposal and economic analyses. We also reviewed sections of the Postal Service's Transformation Plan that discuss procedures the Service follows in closing postal facilities, and we explored those procedures in greater detail with postal officials. To determine experts' views on the social and economic impacts that corporate downsizing and reorganizations can have on employees and their families, we identified eight relocation experts through our literature review and through discussions with authors of articles on the impacts of closures and relocations on employees, and we interviewed them.
These relocation experts provide a wide variety of relocation services to several hundred companies throughout the United States. We also interviewed two university researchers knowledgeable about worksite closures and employee displacements, and we gathered available data from the Bureau of Labor Statistics (BLS) on nationwide and regional worksite closures and employee separations for calendar years 2001 and 2002. To further broaden our perspective on how closures and relocations typically affect employees and their families, we conducted Internet research to identify reports, studies, and other sources of information that examined the impacts of closing facilities similar to the San Mateo IT Center. We also conducted literature searches and gathered information on the social and economic impacts of facility closures and relocations. To determine the social and economic impacts encountered by postal employees and their families who were affected by the Service's closure of its New York IT Center in 1993, we interviewed postal officials as well as displaced New York IT Center employees who relocated to the San Mateo IT Center. We also obtained and analyzed salary data, for the period 1992 to 2001, on displaced New York IT Center employees who are still employed by the Service. We designed, pretested, and administered survey questionnaires to displaced New York IT Center employees (1) who relocated to other postal IT centers and (2) who continued employment with the Service in the New York City area. For those employees who relocated, we received responses from 49 of 66, for a 74 percent response rate. For those who stayed with the Service in the New York City area, we received responses from 46 of 79, for a 58 percent response rate. We analyzed the survey responses, including open-ended comments, and conducted a number of follow-up interviews.
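The response rates just cited, along with the San Mateo survey's rate reported later in this section, can be checked with a few lines of arithmetic. This is a minimal sketch; the survey labels are our own shorthand, and rounding to the nearest whole percent is assumed:

```python
# Response rates for the two New York IT Center surveys described above,
# plus the San Mateo employee survey reported later in this section.
surveys = {
    "NY employees who relocated": (49, 66),
    "NY employees who stayed": (46, 79),
    "San Mateo employees": (213, 243),
}

for name, (responses, surveyed) in surveys.items():
    rate = round(100 * responses / surveyed)  # nearest whole percent
    print(f"{name}: {responses} of {surveyed} = {rate}%")
```

Each computed rate matches the figure reported in the text: 74, 58, and 88 percent, respectively.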
We did not gather data on displaced New York IT Center employees who left the Service following the center's closure in 1993 because the Service was unable to provide recent mailing addresses for these employees. To determine the social and economic impacts that the San Mateo IT Center postal employees and their families would likely encounter if the Service closes that center, we obtained and reviewed relevant Postal Service documents regarding personnel policies and consolidation plans. We also interviewed cognizant postal and APWU officials and various San Mateo employees regarding these policies and plans and their potential impacts on employees. We obtained and analyzed San Mateo employee personnel data. On the basis of information from closure/relocation studies and experts and our interviews with postal officials and employees, we designed, pretested, and administered a survey questionnaire to obtain information about how closing the San Mateo IT Center would likely affect employees and their families. We received survey responses from 213 of 243 employees, for an 88 percent response rate. We analyzed the survey responses, including open-ended comments, and conducted follow-up interviews. To provide information on how selected organizations, during downsizing, have assisted affected employees, we identified and reviewed our prior reports on restructuring and downsizing. We incorporated information on those organizations where appropriate. Because our three surveys did not make use of probability sampling, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results.
We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. For example, our data collection instruments were designed by survey specialists in combination with subject matter specialists and pretested to ensure that questions were clear and were understood by respondents. To increase our response rate, a follow-up mailing was made to those who did not respond in a reasonable time period. We conducted our review at Postal Service Headquarters in Washington, D.C., and at the Service's IT center located in San Mateo, CA, from August 2001 through October 2002, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Postmaster General. The Service's comments are discussed at the end of this letter and are reprinted in appendix II. The Service is following its Investment Review and Approval Process in deciding whether to close the San Mateo IT Center because the closure would require an investment of $8 million. Events leading to the IT Department's proposal to close the San Mateo IT Center began in 1996. At that time, the IT Department started making long-range plans to consolidate the San Mateo IT Center's mainframe computer operations into a new IT center the Service was building in Eagan. Other IT functions performed at the San Mateo IT Center were to be unaffected by the consolidation plans under development at that time. However, in 2000, the PMG called on postal managers to find savings by eliminating work, working more efficiently, and consolidating functions. In response to that call, the IT Department proposed closing the San Mateo IT Center and transferring its IT functions to Eagan and other postal IT centers. That proposal has yet to be approved by the Service, and the PMG has said that such a decision will not be made until after we have issued our report.
As required by the Service's Investment Review and Approval Process, the IT Department prepared a DAR in 2000 to support the $8 million investment needed for the proposed closure of the San Mateo IT Center. The Service's Investment Review and Approval Process establishes the review/approval process, procedures, and responsibilities for capital investments made by the Service. Major capital investments, generally defined as $5 million or more, require a DAR, which provides the justification for recommending an investment for approval. In 2000, the OIG reviewed the San Mateo DAR and concurred with the IT Department's analysis that money could be saved and efficiencies gained by closing the San Mateo IT Center and transferring its IT functions to the Eagan IT Center and other postal IT centers. However, since the DAR was prepared in 2000, some of the economic assumptions, which were based on the economic conditions at that time, have changed. Additionally, the Service recently announced plans to consolidate its accounting functions in 85 postal districts into the Service's 3 Accounting Service Centers located in Eagan, St. Louis, and San Mateo. Given this, before the Service makes its decision about closing the San Mateo IT Center, the IT Department's DAR may need to be reviewed and updated, if appropriate, to better reflect current economic conditions and recent plans to consolidate accounting functions. Events leading to the IT Department's proposal to close the San Mateo IT Center began in 1996, when the IT Department initiated development of a long-range plan to consolidate the San Mateo IT Center's mainframe computer operations into a new IT center that the Service was building in Eagan. The IT Department's proposal called for consolidating San Mateo's mainframe computer operations into a new 352,000-square-foot postal IT center, located on 28 acres in Eagan.
The new Eagan IT Center was to serve as the Service's primary financial center, meet the Service's long-term computer operational requirements, and have the flexibility to adjust to IT changes and provide room for growth. The Eagan IT Center was designed to absorb the San Mateo IT Center's mainframe computer operations. At the time the IT Department was preparing its original consolidation proposal, it projected that the San Mateo IT Center's mainframe computer operations could be consolidated into the new Eagan IT Center sometime around 2000. However, before that consolidation occurred, the PMG called on postal managers in 2000 to review their operations and find savings by eliminating work, working more efficiently, and consolidating functions. The PMG did this in response to the Service's declining financial condition: shrinking net income, operating expense growth outpacing operating revenue growth, the threat of declining mail volumes, and stagnating productivity. Heeding the PMG's call to find savings, the IT Department decided to expand its plans for the San Mateo IT Center. Instead of just consolidating the San Mateo IT Center's mainframe computer operations into the Eagan IT Center, the IT Department decided to propose closing and selling the San Mateo IT Center and consolidating all of its IT functions into other postal IT centers located in Eagan, St. Louis, and Wilkes-Barre. According to IT Department officials, once a decision was made in early 2000 to propose closing the San Mateo IT Center, the proposal's sponsoring unit (the IT Department), per the Service's Investment Review and Approval Process, prepared a DAR in support of the closure proposal. The DAR details the economic analyses and assumptions, including costs and benefits, of the proposed investment and alternatives being considered so that approving authorities will have the information necessary to make informed decisions.
The Service requires that a DAR be prepared, validated, and approved for planned capital and expense investments of $5 million or more. The IT Department provided us with copies of (1) documentation and an internal paper prepared in support of the proposal to consolidate the San Mateo IT Center's mainframe computer operations into Eagan and (2) the DAR supporting the closure proposal. However, the IT Department was unable to provide us with documentation explaining the basis for expanding the mainframe consolidation proposal into a proposal to close the San Mateo IT Center entirely and transfer all of its IT functions to other postal IT centers. Additionally, according to postal officials, the two individuals most familiar with the closure proposal (the vice president, Information Technology, and the manager, Information Systems Support) are no longer with the Service. According to postal officials, there are no detailed policies or rigorous procedures that must be followed when a department proposes closing a postal facility other than a post office or mail processing facility. They said that references in the Service's Transformation Plan to detailed procedures for closing facilities apply only to post offices and mail processing facilities, not to administrative offices such as the San Mateo IT Center. They said the DAR is the primary document being used to support the IT Department's proposal to close the San Mateo IT Center. The San Mateo DAR prepared by the IT Department included (1) the objectives, justification, and details of the proposed closure/consolidation; (2) economic analyses and financial justification for the proposed closure; and (3) information on an alternative to the closure that was considered, analyzed, and eliminated.
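The DAR trigger described above reduces to a simple threshold test: investments of $5 million or more require a DAR. The function below is a hypothetical illustration of that rule, not actual Postal Service tooling:

```python
# The Investment Review and Approval Process threshold described above:
# planned capital and expense investments of $5 million or more require a DAR.
DAR_THRESHOLD_MILLIONS = 5

def requires_dar(investment_millions: float) -> bool:
    """Return True when a Decision Analysis Report (DAR) must be prepared."""
    return investment_millions >= DAR_THRESHOLD_MILLIONS

# The proposed San Mateo closure involves an $8 million investment,
# so a DAR was required; a smaller investment would not trigger one.
print(requires_dar(8.0))   # True
print(requires_dar(4.9))   # False
```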
According to the DAR, closing the San Mateo IT Center and consolidating all of its IT functions at other postal IT centers would be justified because (1) consolidating IT and support operations would provide the Service savings, over a 10-year period, of approximately $25 million in staffing, utilities, maintenance, and contractor systems software support and (2) consolidating IT functions would allow the Service to sell the San Mateo facility for approximately $49 million. The DAR also noted that the San Mateo area had become one of the highest cost-of-living areas in the nation, and the San Mateo IT Center was finding it increasingly difficult to retain its executives and the top technical staff needed to operate the center. The DAR further noted that the cost of contractors in the San Mateo area was higher than in other areas where postal IT centers were located. Before finalizing the DAR to close the San Mateo IT Center, the IT Department considered one alternative proposal, which consisted of the Service selling the San Mateo facility and leasing it back from the new owner, with all of the postal functions remaining in the leased facility. The Service rejected that alternative for two reasons. The IT Department determined that (1) the annual lease cost would exceed the current depreciation cost of the San Mateo facility and (2) the Service would not realize the savings and operational efficiencies that would accrue from consolidating San Mateo IT functions at other postal IT centers. According to postal officials, the IT Department, following the Service's Investment Review and Approval Process, prepared the San Mateo DAR and forwarded it to the Service's Finance Department for validation and to the Facilities Department for review and concurrence.
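The two justifications enumerated above carry the DAR's headline dollar figures, and tallying them reproduces the $74 million total savings estimate cited later in this section. This breakdown is our inferred reading of how the figures relate, not an explicit reconciliation from the DAR:

```python
# Components of the DAR's estimated benefit from closing San Mateo ($ millions).
operational_savings = 25   # 10-year savings: staffing, utilities, maintenance, software support
sale_proceeds = 49         # approximate expected sale price of the San Mateo facility

total = operational_savings + sale_proceeds
print(total)  # 74 -- matches the revised $74 million savings estimate

# The OIG's lease-cost concern trimmed the original $78 million estimate
# by $4 million, the low end of its $4 million to $8 million range.
original_estimate = 78
print(original_estimate - total)  # 4
```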
By early November 2000, the Finance Department had completed its validation of the accuracy and integrity of the San Mateo DAR’s assumptions and economic analyses; the Facilities Department had concurred with the DAR; and the sponsoring vice president, IT, had signed the DAR. Next, the senior vice president/chief technology officer and the chief financial officer/executive vice president reviewed and approved the DAR. In late November 2000, the chief financial officer/executive vice president prepared and signed a validation memorandum and executive summary, which was forwarded to the PMG, stating that the DAR proposing to close the San Mateo IT Center had been reviewed and validated. According to postal officials, before the San Mateo IT Center can be closed, approval must be obtained from the Headquarters (HQ) Capital Investment Committee and the PMG. The OIG has also reviewed the DAR. (See fig. 2 for an overview of the Service’s Investment Review and Approval Process.) Prior to our review, postal management asked the OIG to evaluate the potential benefits, costs, and operational issues surrounding the IT Department’s proposal to close the San Mateo IT Center and transfer its IT functions to other postal IT centers. The OIG did this evaluation between May and late November 2000. The OIG agreed with the IT Department that consolidating the San Mateo IT Center’s IT functions into the Eagan and other postal IT centers would produce savings in staffing, utilities, maintenance, and contractor systems software support. However, among other things, the OIG noted that the IT Department’s projected savings in the San Mateo DAR might be overstated by $4 to $8 million because the cost of leasing space for the Accounting Service Center in the Bay Area may have been underestimated. Also, the OIG questioned the Service’s support for $6.3 million in capital expenditures for hardware and software upgrades associated with the consolidation. 
The Service subsequently provided justification for the capital expenditures, which the OIG found to be generally responsive to its concerns. According to the OIG’s November 24, 2000, report, postal management revised its lease cost estimates for the Accounting Service Center. This revision reduced estimated savings from $78 million to the current estimate of $74 million. The OIG concluded that this and other actions the Postal Service was planning were responsive to the OIG’s concerns and concurred with the IT Department’s analyses that consolidating the San Mateo IT Center’s IT functions into other postal IT centers would result in savings. In 2000, when the Service’s IT Department prepared the DAR to support its proposal to close the San Mateo IT Center, general economic conditions were noticeably better than they are currently; and the Service had not yet made plans to automate and reengineer its field accounting activity, which involves closing its 85 district accounting offices and consolidating residual activity into its 3 Accounting Service Centers. The IT Department’s proposal to close the San Mateo IT Center was based on several major economic assumptions that have since changed. First, when the IT Department prepared the San Mateo DAR, it noted that the job market in the Bay Area was very good and that employees with IT skills were in great demand, thereby making it difficult to recruit and retain IT employees at the San Mateo IT Center. Since then, however, the general job market has declined significantly. For example, in the Bay Area, from 2000 to 2002, the unemployment rate increased from 2.2 percent to 5.4 percent. Even more dramatic was the reversal in employment growth. From 1999 to 2000, the number of Bay Area employees grew by 3.8 percent. In contrast, the number of these employees contracted by 5.9 percent from 2000 to 2002. 
Given the increase in unemployment and the contraction in employment in the Bay Area, current IT labor costs may not be as high as they were in 2000, thereby creating a more favorable recruiting environment for the Service. Second, the IT Department received an appraisal in 2000 indicating that it could reasonably expect to sell its San Mateo facility within 6 months for a minimum price of $49 million. However, according to real estate experts, office-building values in the San Mateo area are currently depressed. From the 4th quarter of 2000 to the 2nd quarter of 2002, average asking rents fell by about 60 percent, and office vacancy rates increased nearly sixfold, from less than 4 percent to 22 percent. As a result of this depressed real estate market, the time and value estimates that the IT Department received in 2000 for the San Mateo facility may be out of date. In view of what might be significant changes in some of the economic assumptions used in the San Mateo DAR, we asked the Service about updating (1) the estimate of the fair-market value of the San Mateo building and property and (2) the estimate of rental rates for office space in the Bay Area. The Service said that it was too costly to have a commercial real estate broker update this information before the Service was ready to make a decision about closing the San Mateo IT Center. However, the Service acknowledged that the commercial office market in the Bay Area has changed since the DAR was prepared in 2000. Postal officials said that the key assumptions used in preparing the DAR would be updated before the Service makes a decision about closing the San Mateo IT Center. Until these data are updated, it is uncertain how much the Service would save by closing the San Mateo IT Center. Another significant change indicating the need to update the San Mateo DAR is the Service's plan to revamp its accounting functions.
In 2001, the Service announced plans to automate and reengineer its field accounting activity. These changes will result in the Service closing its 85 district accounting offices; eliminating 1,063 field accounting technicians in these offices; and consolidating the residual activities into its 3 Accounting Service Centers located in Eagan, St. Louis, and San Mateo. Postal officials stated that this would result in about 350 more accounting technician and finance positions at these 3 centers. As noted earlier, the San Mateo DAR includes a plan to move the accounting functions and its 102 employees into leased space in the Bay Area. The DAR included $2.1 million as the 3-year estimated cost for this leased space. As a result of the planned consolidation of district accounting functions, the Service's estimated cost of leased space may no longer be accurate for its accounting functions in the Bay Area and could affect projected savings associated with closing the San Mateo IT Center. Given this possibility, before the Service makes its decision about closing the San Mateo IT Center, the DAR may also need to be reviewed and updated, if appropriate, to reflect the cost associated with obtaining the leased space necessary to accommodate about 40 additional accounting technician and finance positions. Relocation experts we interviewed reported that plant and facility closings generally have negative economic and social impacts on displaced employees and their families. Typically, some employees experience negative economic impacts, others experience negative social impacts, and some experience both when the plant or facility where they work closes. The experts told us that employees who do not relocate with their jobs typically experience economic impacts, such as lost income, loss of retirement benefits, and lower paying jobs upon reemployment.
Experts also noted that employees who relocate with their jobs are likely to encounter social impacts, such as marital stress, separations from family members, and the loss of social ties. In addition to the negative social impacts, the experts noted that dual-income families who relocate may face negative economic impacts when the trailing spouse has to give up employment in order to relocate. Expert opinion and research on plant and facility closures indicate that such events have a greater negative impact on older employees than on younger employees. Older employees are less likely than younger employees to be reemployed and are more likely to experience social difficulties when relocating. Research also indicates that an increasing percentage of employees are declining relocation offers because they want to avoid the social and economic impacts associated with moving to a new geographic location, particularly the disruptions to family ties that often ensue. Research further shows that elder care is playing an increasingly important role in employee relocation decisions. Notwithstanding the negative impacts closures can have on employees and their families, closures occur across the nation and are done for a variety of business reasons and purposes. Research also indicates that in the final analysis, not all affected employees view plant and facility closures negatively. Some employees may actually come to view their experience with a plant or facility closure as positive because it affords them the opportunity to redirect their careers, develop new competencies, or leave unsatisfying jobs. According to the relocation experts we interviewed, studies have shown that the economic impacts on employees displaced by a plant or facility closure include the loss of earnings due to periods of unemployment, loss of retirement benefits, and lower wages upon reemployment.
For example, a 1999 study on job displacement stated that about 20 percent of displaced employees who found employment did so at greatly reduced wages, earning one-half or less of their previous wages. The study also found that between one-fourth and one-third of displaced employees remained unemployed a year or more after displacement. Similarly, another study of displaced employees found that about 30 percent remained unemployed for a year or more. Studies also indicated that the longer displaced employees remained unemployed, the greater the drop in their wages when reemployed, with some displaced employees becoming so discouraged that they stopped looking for work altogether. Similar findings were echoed in a survey of employees displaced by a New York plant closure in 1998. Survey respondents reported several impacts they experienced as a result of the worksite closure, including earnings losses, declines in job quality, and financial difficulties. The survey disclosed that most displaced plant employees had held two or more jobs since the closure, with some holding as many as eight jobs in 3 years. The survey results, published in a 2001 report, indicated that as a result of the closure, employees experienced a 17 percent decline in median income and a 33 percent increase in commuting distances. The researcher also found precipitous declines in job quality, in terms of regular raises, sufficient income, promotions, and skill development. In addition, the report stated that many of the displaced employees experienced financial difficulties, such as having to sell their homes. Relocation experts we interviewed told us that families commonly experience social difficulties when relocating, including disruptions of their children’s education and social well-being, increased marital stress, and a sense of loss caused by separations from family members.
Relocation experts and available studies also point out that in addition to social adjustments, trailing spouses in dual-income households may face economic impacts due to the loss of employment and income. Research indicates that in over 50 percent of married households, both spouses are employed. This prevalence of dual-income households in the workforce is prompting companies to provide employment assistance to trailing spouses in order to reduce family stress during relocation transitions, according to studies and experts. For example, employers assist trailing spouses in a variety of ways, including paying a job-finder’s fee, assisting with finding employment either within or outside the company, and reimbursing the spouse for lost income while seeking employment at the new location. Examples of other types of assistance provided to trailing spouses include resume preparation and review, development of job search strategies, and career counseling. According to the experts we interviewed, until and unless a trailing spouse finds comparable employment at the new location, a dual-income household might lose 30 percent to 50 percent of its income when it relocates. The impacts of relocation on trailing spouses have also been known to include emotional and cultural adjustments. For example, relocation experts noted that trailing spouses might be natives of the originating city and could experience a sense of loss at the new location. According to experts, trailing spouses may experience difficulties in adjusting to less culturally diverse regions of the country where there may be fewer opportunities associated with their culture, religion, or language. Trailing spouses may also find it difficult to transition from large urban areas where they have access to ethnic activities, with which they have an affinity, to small rural areas where they may feel isolated.
Experts also stated that career development for trailing spouses who worked may be negatively affected, and they might have difficulty finding comparable employment. In addition, according to a 1998 study, relocation can lead to loss of continuity of training and skill development, as the new location may not have the same career opportunities. For these reasons, trailing spouses may be dissatisfied with their new jobs, according to experts. Experts also told us that the difficulty associated with finding comparable employment is compounded when people move from large urban areas, such as the Bay Area, to smaller communities that traditionally have fewer employment and career prospects. Research and relocation experts report that when a plant or facility closes, older employees tend to encounter greater economic and social difficulties than younger employees. While there is no consensus among studies and experts regarding the age at which individuals encounter greater difficulties due to job loss, there is general agreement that as workers age, the impacts of job displacement become increasingly severe. In particular, research and industry experts state that workers in their mid-40s begin to face economic impacts attributable to age because they face more obstacles than younger employees in obtaining new employment. Experts also noted that these older employees potentially experience greater economic losses as they are typically vested in their retirement systems and could potentially lose their retirement benefits due to job displacement. Additionally, according to relocation experts, workers in their mid-40s and older who relocate generally face greater social impacts than younger employees. In November 2001, we reported that although employees age 55 and older are not more likely to lose their jobs than younger workers, a job loss for these older employees potentially has more severe consequences. 
For example, we noted that employees age 55 and older may experience larger losses in earnings upon reemployment, compared with younger employees. We also reported that such employees were significantly less likely than younger employees to be reemployed. In addition, we stated that the potential loss of health care benefits following a job loss could be more problematic for employees who are age 55 and older because they tend to have more health problems than younger employees. Other studies on job loss reported similar findings regarding older workers. Similar to the greater economic impacts older employees may experience due to job displacement, relocation experts we interviewed also told us that employees in their mid-40s and older are more likely to face greater social difficulties in relocating than younger individuals. For example, according to experts, employees in their mid-40s and older are more likely to have elderly parents, grandchildren, or existing health conditions. Experts also told us that these employees are more likely to have high school or college-aged children who are often reluctant to relocate. Moreover, employees who are in their mid-40s and older are typically less inclined to move than younger individuals. For example, one relocation expert we contacted noted that older employees feel that relocating to a new community requires that they sacrifice social networks, long-time friendships, and family relationships that took years to establish. In addition, experts pointed out that trailing spouses of older employees might also experience more difficulties in finding new jobs because of age discrimination. Given the wide range of potential impacts from relocation, in particular the impacts on families, studies indicate that a growing number of employees are declining to relocate.
According to survey results published by Atlas Van Lines in 2001, the number of companies reporting that employees rejected relocation offers increased from 39 percent in 1999 to 50 percent in 2001. The most frequently cited reason for declining to relocate was “family ties” (81 percent). Other reasons cited included “personal reasons” (73 percent), “no desire to move” (67 percent), and “spouse employment” (48 percent). The Employee Relocation Council (ERC) reported in 2001 that family-related considerations, such as elder care, child care, and schools, played a crucial role in employees’ relocation decisions. Elder care was specifically identified as an area of increasing importance in employees’ relocation decisions. The report estimates that 80 percent of elderly persons live alone or with a family member, rather than in nursing homes. As a result, according to ERC, more employees and their families will be increasingly responsible for helping elderly relatives who need assistance with the activities of daily living, including shopping, transportation, personal care, paying bills, preparing meals, walking, and house cleaning. The ERC report also noted that employers offer various types of elder care assistance, such as paying to move elderly relatives, providing lists of elder care facilities and programs, and providing written materials on elder care needs. ERC further reported that similar services are provided for relocating families needing child care services or help in finding schools for their children. Plant and facility closures are the result of business decisions made for a variety of reasons and purposes. Notwithstanding their potential impact on employees and their families, business closures occur and affect thousands of employees annually. Research indicates that in the final analysis, not all affected employees view plant and facility closures negatively.
According to experts, closures, accompanied by employee displacements, are largely fueled by corporate downsizing. For example, the Bureau of Labor Statistics (BLS) reported that there were 1,253 worksite closures that resulted in nearly 380,000 employee layoffs throughout the United States in calendar year 2001. BLS data also show that in the Bay Area, where the San Mateo IT Center is located, there were 22 worksite closures that resulted in over 5,800 employee layoffs in calendar year 2001. According to studies and relocation experts, businesses make decisions to downsize, restructure, or close facilities for a wide variety of reasons, primarily driven by business needs. According to unemployment research, for example, plant relocations are organizational changes that are commonly taken to pursue company development or to solve financial and operational problems. Experts told us that companies often implement different reorganization strategies to increase profits and competitiveness and to respond to various business factors, such as the economy, the stock market, business competition, the availability of skilled workers, and the cost of labor. For example, according to one expert we contacted, companies typically move facilities to reduce costs, often relocating to regions of the country where labor costs are lower. Recent Bay Area worksite closures have resulted in employee layoffs. While specific data on the number of employee layoffs by Bay Area companies are not available, media reports indicate that several thousand employees have lost their jobs over the past year. For example, in May 2002, it was reported that a San Francisco-based financial services company implemented its first large-scale employee layoff since 1987, reducing its workforce by approximately 4,500 employees in order to keep the company’s total complement under 20,000.
In addition, another report indicated that as a result of a Bay Area-based corporate merger in the technology sector, an estimated 15,000 positions would be eliminated worldwide over the next 2 years, including many in the Bay Area. Another event that may affect the Bay Area economy was reported in June 2002, when a major air carrier announced that as a result of the September 11, 2001, terrorist attacks, it had applied for federal assistance to aid its recovery from these events. The air carrier also plans to increase its viability by reducing costs, such as through salary cuts from pilots, managers, and administrative employees. However, if these efforts are unsuccessful, the jobs of approximately 18,000 Bay Area employees may be affected. Experts also told us that companies make decisions to displace or relocate employees according to, among other things, business needs, economic conditions, competition, company size, acquisitions and mergers, and the number of employees on the payroll. For example, a relocation expert we interviewed told us her company, which employs approximately 2,800 employees, recently downsized by closing two of its office locations. While some employees were retained and relocated, others lost their jobs and their employee benefits. Despite the negative social and economic impacts that closures have on employees, unemployment researchers have suggested that a job loss sometimes creates opportunities for individuals to change careers and life directions. Additional research also indicates that after a period of time, some employees may actually come to view their experiences with a plant or facility closure as positive because it gave them the opportunity to redirect their careers, develop new competencies, or leave unsatisfying jobs. Research and relocation literature also indicates that families differ in their abilities to adapt to change.
Moreover, relocation experts stated that in many cases, families adapt well and make very successful transitions. Ninety-five displaced New York IT Center employees responded to our survey and reported economic impacts, such as diminished earnings; and social impacts, such as broken family ties. Of these 95 respondents, 46 remained with the Service in the New York City area and reported facing more economic impacts than social impacts. The remaining 49 employees relocated to postal IT centers in other geographic areas and reported more social impacts than economic impacts. Several options were available to displaced New York IT Center employees, including working for the Postal Service in the New York City area—though most likely not in an IT job; relocating to another postal IT center; retiring with a buyout; or leaving the Service. Employees most concerned about social issues, such as maintaining close family ties, tended to remain in the New York City area and deal with the resulting economic impacts, such as diminished earnings. Our survey results indicate that employees most concerned about economic issues, such as maintaining their level of earnings and career potential, relocated to other postal IT centers and dealt with the ensuing social impacts, such as diminished ties with family and friends left behind in the New York City area. Still other employees opted to retire or leave Postal Service employment altogether. Retiring or leaving the Service at the time of the New York IT Center closure may have been particularly appealing to some employees because as part of a nationwide restructuring occurring at that time, the Service was offering a monetary incentive of 6 months’ pay to all eligible employees who opted to retire or take an early out. In total, 104 of the 283 displaced New York IT Center employees chose to remain with the Service in the New York City area; 82 relocated to postal IT centers in other geographic areas. 
The remaining 97 displaced employees retired, separated from the Service, or found other jobs with the Service outside the New York City area. From August to November of 1992, the Postmaster General (PMG) restructured the Service. To minimize the impact on employees, an early-out option was offered to permit most employees to retire at age 50 with 20 years of service or at any age with 25 years of service. A monetary incentive was offered to encourage eligible employees to retire. This incentive was a lump-sum payment equal to 6 months’ pay. According to postal officials, when the Service closed its New York IT Center in 1993, the restructuring had resulted in numerous retirements and an abundance of non-IT jobs in the New York City area to offer displaced IT center employees. Postal officials said the Service was therefore in the position to offer displaced IT center employees—both EAS and bargaining unit—various options that included (1) taking an available Service job in the New York City area that had become vacant as a result of the restructuring, (2) relocating to another postal IT center in a different geographic area, or (3) retiring with a buyout—if eligible. Some employees chose none of these options and decided to independently find other Service employment outside the New York City area. Still others opted to leave Service employment altogether. EAS employees who opted to relocate to other postal IT centers were able to continue working in the IT field at their same grade and pay. EAS employees who decided to stay in the New York City area were initially offered saved-pay for 2 years, but the Service later made saved-pay permanent for these employees. According to some EAS survey respondents, the Service made its offer of permanent saved-pay for EAS employees after some EAS employees had already relocated to other postal IT centers.
Some EAS survey respondents reported that they based their decisions to relocate on their belief that if they stayed in the New York City area in a lesser graded position, they would receive saved-pay for only 2 years. Some EAS survey respondents said that if they had known the Service would make saved-pay protection permanent, they would have taken a non-IT job and stayed in the New York City area. Bargaining unit employees who took lower paying Service jobs in the New York City area were to receive saved-pay protection for 1 year. In many instances, however, the actual reduction in pay did not occur until about 2 years after the closure because the time that employees spent in a trial employment status did not count against the 1-year limit. Also, the Service experienced some delay in making the pay changes in its computerized payroll system. Figure 3 shows the number and percent of displaced New York IT Center employees who chose each option. For many displaced New York IT Center employees, the decision to accept other postal employment in the New York City area meant that they fared worse economically in the long run than those who chose to relocate to postal IT centers in other geographic areas. According to a postal official, when the Service closed the New York IT Center, the Service had numerous vacancies in the New York City area because it had recently gone through a major restructuring that saw many employees leave the Service through retirement or separation. As a result of the numerous vacancies in the New York City area, the Service was able to offer jobs to displaced New York IT Center employees who did not wish to relocate to postal IT centers in other geographic areas. However, the vacant postal jobs in the New York City area were frequently lower paying, non-IT positions. 
Using salary data on the 147 displaced New York IT Center employees still employed by the Service at other postal IT centers or postal facilities in the New York City area, we determined that the 68 employees who relocated (44 bargaining unit and 24 EAS employees) generally fared better economically than those who remained in the New York City area. Nine years after the closure, salary data provided by the Service showed that the average salary of employees who relocated had increased about 11 percent (in constant 2001 dollars); in comparison, the average salary of employees who remained with the Service in the New York City area had decreased about 1 percent (in constant 2001 dollars). The difference in salary outcomes between displaced employees who relocated and those who remained in the New York City area was greater for bargaining unit employees than for EAS employees. Our analysis of the Service’s salary data shows that the 44 displaced bargaining unit employees who relocated to other postal IT centers were, 9 years later, earning salaries that averaged about 14 percent more (in constant 2001 dollars) than they had been earning when the New York IT Center closed. In contrast, the 43 displaced bargaining unit employees who remained in the New York City area were earning salaries that averaged about 6 percent less (in constant 2001 dollars) than the salaries they were earning at the time of closure. Almost all of the 43 displaced bargaining unit employees who remained in the New York City area after the closure took lower paying mail clerk positions. No EAS employees took pay cuts (in actual dollars) as a result of the closure. Nine years after the New York IT Center closed, the 36 displaced EAS employees working in the New York City area were earning an average of about 2 percent more (in constant 2001 dollars) than they had been at the time the center closed.
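The constant 2001 dollar comparisons above can be illustrated with a short calculation. The CPI values below are approximate annual-average CPI-U figures, and the sample salaries are hypothetical; this is a sketch of the general method, not a reproduction of the Service's salary data or our analysis:

```python
# Illustrative sketch of a constant-dollar salary comparison.
# CPI_1993 and CPI_2001 are approximate annual-average CPI-U values
# (assumptions for illustration); the salaries are hypothetical.

CPI_1993 = 144.5
CPI_2001 = 177.1

def to_2001_dollars(salary_1993: float) -> float:
    """Restate a 1993 salary in constant 2001 dollars."""
    return salary_1993 * (CPI_2001 / CPI_1993)

def real_pct_change(salary_1993: float, salary_2001: float) -> float:
    """Percent change in salary after adjusting for inflation."""
    base = to_2001_dollars(salary_1993)
    return (salary_2001 - base) / base * 100

# A hypothetical employee earning $40,000 at the 1993 closure and
# $55,000 in 2001.
print(round(real_pct_change(40_000, 55_000), 1))  # prints 12.2
```

A nominal raise of 37.5 percent over those 9 years thus shrinks to roughly 12 percent once inflation is removed, which is why salary comparisons of this kind are expressed in constant 2001 dollars.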
In contrast, the 24 displaced EAS employees who relocated to other postal IT centers saw their salaries increase by an average of about 8 percent (in constant 2001 dollars). Displaced EAS employees who relocated, however, fared worse, on average, than displaced bargaining unit employees who relocated. Generally, survey respondents who took postal jobs in the New York City area indicated they did so because they were concerned about the social impacts they believed they would face if they left the New York City area. For example, 34 of the 46 survey respondents who remained in the New York City area believed it was very important to remain in the area in order to maintain family ties with parents and other relatives—ties that would otherwise be strained by relocating to a new geographic area. One respondent who took a postal job in the New York City area commented that he could not relocate because he was the only person able to provide care for his aged parents. Another respondent commented that she was unable to relocate to another postal IT center because her parents’ health was such that they would not be able to accompany her. A third respondent who took a postal job in the New York City area commented that she was unable to relocate for several reasons, one being that she could not relocate her mother, who lived close by and was about to undergo surgery. Respondents remaining in the New York City area were also concerned that relocating to another geographic area could alienate relationships with their children and grandchildren. Twenty-five of the 46 respondents remaining in the New York City area stated that it was very important that they remain in close proximity to their children and grandchildren. Respondents who remained with the Service in the New York City area indicated that concerns related to their spouses weighed heavily in their decisions not to relocate.
For example, 24 of the 46 respondents remaining in the New York City area said that a very important concern they had was that relocating would strain the relationship they had with their spouse. One EAS employee decided against relocating because his wife was pregnant at the time and did not want to sever the close relationship she had with her physician. Another EAS employee stated that he did not relocate primarily for family reasons—his spouse was working, and it would have been very hard to ask her to give up her job and pension and move across the country to a new location. Twenty-three of the 46 respondents remaining with the Service in the New York City area also indicated that in reaching their decisions not to relocate, it was very important to them not to lose social, community, and cultural ties to the New York City area. Given that so many of the 46 survey respondents who remained with the Service in the New York City area did so to minimize the social impacts they believed relocating would involve, it is not surprising that most had strong family and cultural ties to the area. Forty-one of the 46 respondents had parents, grandchildren, or relatives living in the New York City area. Thirty-six of the 46 respondents had a spouse/partner living with them, and 34 had children. Most of the respondents who remained in the New York City area also tended to be more senior employees with many years of postal service. Forty of the 46 respondents were 40 years old or older, and 30 of the 46 respondents had over 15 years of postal service at the time the New York IT Center closed. Several survey respondents who accepted postal jobs in the New York City area, rather than relocate, furnished written comments describing the economic impacts they experienced following the closure of the New York IT Center. In general, they believe that they suffered financially by staying with the Service in the New York City area. 
As indicated earlier, our analysis of the Service’s salary data confirmed that employees who relocated generally fared better economically than those who did not. Some said that their salaries would have been progressively higher, and their chances for advancement better, if they had relocated to another postal IT center. For many displaced New York IT Center employees, the decision to move to another postal IT center in another geographic area was accompanied by more social impacts than economic impacts. When the Service closed the New York IT Center in 1993, it offered displaced employees the opportunity to relocate to other postal IT centers, such as Eagan and San Mateo. By relocating to other postal IT centers, employees were able to continue working in the IT field with no pay cut. Eighty-two of the 283 displaced New York IT Center employees chose to relocate to a postal IT center in another geographic area. Survey respondents who relocated to postal IT centers in other geographic areas often reported encountering very significant social impacts as a result of their moves. For example, 23 of the 49 relocated postal employees who responded to our survey reported that they had found it very difficult to adjust to the new geographic area, work environment, and local culture. The same number of respondents also reported finding it very difficult to maintain or establish social, community, and cultural ties or find supporting communities where they could get involved. One respondent who relocated to the Eagan IT Center reported that her family, coming from New York City, found it difficult to assimilate into the midwestern culture. Another respondent found it difficult being away from family and friends and missing out on important family events. She noted that relocating had strained her marriage and forced her to give up a private business she had been running in New York City.
Many survey respondents who relocated reported that they had been greatly affected by concerns related to their spouse/partner. For example, 18 of the 49 respondents reported that since relocating, their spouse/partner had found it very difficult to maintain or secure a job/career, benefits, and retirement security. Nineteen respondents also indicated that the relocation had been very difficult because of the strains it had placed on relationships with spouses/partners. One respondent who relocated commented that it had been emotionally difficult for his wife to relocate and leave her family behind. Twenty-two of the 49 relocated respondents reported that relocating from the New York City area to another geographic area had put strains on family relationships; and 20 respondents reported that as a result of the relocation, it had been very difficult to keep their families together. One respondent noted that it had been difficult for him and his wife to leave the New York City area because they had children working and going to school there, and they were also leaving elderly parents behind. Twenty-three of the 49 respondents who relocated stated that as a result of their relocation, they had a very difficult time maintaining social ties with parents and other relatives. One respondent commented that the relocation meant not being there for his mother when she passed away and not being able to be with his relatives. Twenty-two respondents stated that as a result of relocating to another geographic area, they had a very difficult time assisting with the physical care of their parents or other relatives. One employee commented that relocating had placed tremendous stress on him as well as his wife, children, parents, and other relatives. Finally, survey respondents who relocated typically reported that they had experienced significant social impacts associated with the relocation itself. 
Twenty of the 49 relocated respondents reported that they had experienced a very difficult time dealing with the cost, time, and energy involved in relocating from the New York City area to their new homes in another geographic area. Fourteen respondents also noted that they had a very difficult time selling their homes in the New York City area or buying homes in their new geographic area. One respondent who was a bargaining unit employee stated that the Service did not help him sell his house in the New York City area or assist with the costs associated with the sale. Another survey respondent stated that he lost money relocating because he had just refinanced his home in the New York City area when the Service announced the closure of the New York IT Center. He believed it would have been better if postal officials had given employees more lead time to plan their affairs before the Service closed the New York IT Center. Another respondent believed that relocating workers needed more time than the Service provided to research housing options, schools, and medical facilities. He believed that because employees had insufficient time to research these things, it created a social hardship on them and their families that could have been avoided. According to postal officials, the Service provides bargaining unit employees about 7 months’ notice of planned closures. Most survey respondents who relocated to other postal IT centers indicated that they did so because they were concerned about the economic impacts they believed they would face if they remained in the New York City area. Forty-four of the 49 respondents who relocated reported that keeping their jobs for economic/financial well-being had been very important to them when deciding whether or not to relocate to another postal IT center. Forty of the 49 respondents said that concerns about maintaining their pension plan and retirement security had been a very important factor in deciding to relocate. 
One respondent indicated that he and his colleagues who relocated to Eagan felt that they had no real choice but to relocate if they wanted to maintain their financial well-being. He stated that the only option the Service offered him besides relocating to another postal IT center was to remain in the New York City area as a mail clerk or letter carrier with temporary saved-pay. Another respondent who relocated also stated that she did not believe she had any other viable choice besides relocating. She stated that she was too old to look for a nonpostal job and did not believe she could afford to remain with the Service in the New York City area in a non-IT position and take a reduction in pay when her saved pay expired. As previously noted, our analysis of the Service’s salary data for displaced New York IT Center employees confirmed that employees who relocated generally fared better economically than those who remained in the New York City area. Twenty-two of the 49 relocated survey respondents also reported that in deciding to relocate, they had considered it very important that they keep their IT jobs for career advancement purposes. Twenty respondents also reported that they had been very concerned that as older workers, they would have found it very difficult to find satisfactory jobs if they had not relocated to other postal IT centers and kept their IT jobs. A respondent who relocated to another postal IT center stated that he had no other choice but to relocate if he wanted to continue working with the Service in an IT position. Most of the 49 survey respondents who relocated to postal IT centers in other geographic areas tended to have fewer years of service than the 46 respondents who remained in the New York City area. Only 16 of the 49 respondents who relocated to another postal IT center had 15 or more years of service, whereas 30 of the 46 respondents who stayed with the Service in the New York City area had 15 years of service or more. 
Otherwise, the demographics of those who relocated to a postal IT center in another geographic area were similar to those of the respondents who stayed in the New York City area. Thirty-eight of the 49 relocated survey respondents were 40 years of age or older at the time the New York IT Center closed; 42 had parents, grandchildren, or relatives living in the New York City area; and 39 had a spouse/partner living with them. In contrast, 40 of the 46 respondents who remained with the Service in the New York City area were 40 years of age or older at the time the New York IT Center closed; 41 had parents, grandchildren, or relatives living in the New York City area; and 36 had a spouse/partner living with them. San Mateo IT Center employees responding to our survey anticipate more economic impacts than social impacts if they choose to remain in the Bay Area and more social impacts if they relocate with the Service to another postal IT center. If the Service closes the San Mateo IT Center, the options available to San Mateo IT employees to minimize their economic and social impacts vary, depending on individual circumstances and job status. Some affected employees will likely seek other employment in the Bay Area, some will likely retire, and others will likely relocate to another postal IT center in order to keep their IT jobs. On the basis of the best information they had available at the time, 167 (78 percent) of the 213 San Mateo IT employees responding to our survey indicated that they would likely stay in the Bay Area if the Postal Service closes the San Mateo IT Center. Under such circumstances, they said they would expect to face more economic impacts than social impacts. Many anticipate that their decision to stay in the Bay Area could result in them losing their Postal Service jobs. 
According to postal officials, affected San Mateo IT employees will be able to apply for vacant postal positions, provided they meet the minimum qualifications; however, the Service does not anticipate holding open vacant positions in the Bay Area for these employees. Furthermore, the Service no longer anticipates that it will have many job vacancies in the Bay Area. Thirty-six (17 percent) of the employees responding to our survey indicated that if the San Mateo IT Center closes, they would probably relocate to another postal IT center. Moreover, they anticipate facing more social impacts than economic impacts from their decisions. Survey respondents indicating they would likely relocate reported that they were doing so primarily for their financial well-being and retirement security. Many provided narrative comments in their survey responses describing how relocations would probably lead to family separations for at least a few years. Some of the married respondents indicated they would probably experience some economic impacts from their relocation decision because they would have trailing spouses who would be leaving their jobs in the Bay Area. The remaining 10 survey respondents (5 percent) indicated they were unsure what they would do if the Service closed the San Mateo IT Center. All San Mateo IT Center employees meeting the minimum age and service requirements have the option of retiring. According to postal officials, the Service anticipates offering early retirements to all eligible employees provided that the Office of Personnel Management approves the Service’s request, but the Service does not anticipate offering buy-outs. Other options available to San Mateo IT Center employees depend, in part, on whether they are bargaining unit or EAS employees. Bargaining unit employees are covered by a collective bargaining agreement with a no-layoff provision and are therefore guaranteed a job at another postal IT center. 
EAS employees are not guaranteed a job at another postal IT center. According to postal officials, about half of the EAS employees will be offered jobs at other postal IT centers. Bargaining unit and EAS employees who do not retire, relocate to another postal IT center, or find another postal job on their own will likely be separated from the Service. Two years ago, when the IT Department first announced its proposal to close the San Mateo IT Center, postal officials indicated that the Service could likely find postal jobs for bargaining unit employees who wanted to stay in the Bay Area rather than relocate to another postal IT center. For most bargaining unit employees, this would have involved a downgrade to a mail clerk position. Even so, according to postal officials, most affected employees would have retained their IT pay for up to 2 years, after which their pay would have been reduced to the pay of their new positions. However, in July 2002, postal officials told us that conditions had changed and that the Service was no longer in a position to accommodate bargaining unit employees who want to stay in the Bay Area. San Mateo IT Center employees can still apply for any postal job vacancies, provided they meet the minimum qualifications; however, few vacancies are anticipated for the Bay Area. Postal officials said that because of recent drops in mail volumes and advances in automated mail processing, the Service now has an excess of mail clerks in the Bay Area. Nevertheless, postal officials indicated that in keeping with the Service’s collective bargaining agreement with APWU, all bargaining unit employees would be offered jobs in one of its other postal IT centers—although most of the job offers would probably be for positions in its Eagan IT Center. The Service has indicated that it also plans to make available to all EAS employees the services of a private job search firm to help them find employment outside the Service. 
However, according to postal officials, the Service has no plans at this time to extend these services to its bargaining unit employees because of their no-layoff clause, which EAS employees do not have. Postal officials said that the benefits provided San Mateo bargaining unit employees are specifically governed by its collective bargaining agreement with the APWU. In appendix I, we present (1) demographic data for San Mateo IT Center employees and displaced New York IT Center employees who responded to our survey and (2) the likely options/opportunities for San Mateo employees and options/opportunities that were available for displaced New York employees. Essentially all of the 167 San Mateo survey respondents who indicated that they would likely stay in the Bay Area reported that they anticipate their decisions will result in more economic impacts than social impacts. According to postal officials, those staying in the Bay Area will likely lose their postal IT jobs and will not likely find other jobs with the Postal Service because of its efforts to downsize. Further, the respondents were concerned that because of the tight job market in the technology sector, they might be unable to find nonpostal IT jobs in the Bay Area. Of the 167 respondents, 140 (84 percent) anticipated difficulty finding jobs to adequately support their families. Finally, respondents said they were concerned that if they could not find satisfactory jobs with the Postal Service or the federal government, they would lose their ties to the federal retirement and health care systems. Respondents planning to stay in the Bay Area generally anticipate a range of outcomes that include retiring earlier than planned to adjusting to a reduced standard of living. 
San Mateo respondents who indicated that they planned to stay in the Bay Area cited a number of reasons for their decisions, including age, longevity in the Bay Area, extensive family and social ties in the area, and working spouses with substantial time invested in their careers. Three-fourths of the respondents planning to stay are over 45 years old; have lived in the Bay Area for more than 20 years; and indicated that they have extensive family ties, social ties, and other responsibilities in the area. These older respondents also typically have working spouses who are reluctant to leave their jobs and lose their pension benefits. For example, 79 of the 134 respondents (59 percent) who provided reasons for not relocating identified concerns with spouses’ careers as a primary motivator in their decision not to relocate. This is not surprising since 49 of the 79 respondents (62 percent) reported that their spouses contributed at least 50 percent to the household income. San Mateo respondents who indicated they would not likely relocate also reported having high school and college age children reluctant or unwilling to relocate and aging parents in need of assistance who would be left behind. Of the 60 respondents with high school and college age students, 58 (97 percent) said keeping their family together was a very important factor in deciding not to relocate. In addition, 78 percent of those deciding not to relocate cited maintaining social and community ties as an important factor in their decision. Over 70 percent also cited health issues and a reluctance to sell their homes as factors in deciding not to relocate. A similar percentage were also concerned that maintaining their health care coverage might become difficult should they lose their postal IT jobs, a particular concern for respondents whose family members have serious health conditions and for whom continuity of care and coverage for preexisting conditions is critical. 
Under the Service’s collective bargaining agreement with APWU, San Mateo IT Center bargaining unit employees are guaranteed jobs at other postal IT centers. Thus, these employees have the option of relocating with their jobs to other postal IT centers and continuing employment with the Service. Conversely, EAS employees are not guaranteed jobs, and some San Mateo EAS employees will likely not have the option of relocating and continuing employment with the Service. According to the Service, the option to relocate will likely be extended to about half of the EAS employees. One respondent who was an EAS employee, age 49 with 15 years of service, commented that not having the assurance of being able to relocate to another postal IT center has increased his anxiety about the potential closure. He reported that he is not eligible for retirement and doubts that he will be offered other employment with the Service. He indicated that he essentially has no option but to find a nonpostal job because he is the primary breadwinner and has many years to go on his mortgage payments. He indicated that the Service will pay him severance pay equal to 4 or 5 months’ salary, but he anticipates much difficulty finding another IT job in the Bay Area because of the tight labor market for technology positions. The majority of the 167 San Mateo respondents who anticipate staying in the Bay Area reported that they will likely seek whatever postal employment may be available in the Bay Area, although many indicated in their narrative responses that such job prospects do not look good. The remaining respondents who anticipate staying in the Bay Area reported that they would likely retire or seek nonpostal employment in the Bay Area. (See fig. 4 for details.) San Mateo respondents indicating that they would likely seek other employment in the Bay Area were skeptical about finding such employment. 
According to postal officials, finding other postal employment is unlikely because of Servicewide downsizing. Some respondents thought they might qualify for other postal management or administrative positions, but they were not optimistic about the availability of such positions. Both EAS and bargaining unit respondents seeking nonpostal employment anticipated difficulties because of the current slump in IT employment and the added difficulty faced by older workers in finding new employment. Additionally, those anticipating seeking new employment were concerned that their earnings would likely be less and that they would lose current health and retirement benefits. Some San Mateo respondents indicated they would be willing to take a non-IT job if it would mean continued employment with the Service in the Bay Area. For example, one younger respondent said she would be willing to transfer from her computer programmer/analyst job to other kinds of postal work in order to stay in the Bay Area because both her and her husband’s families live nearby, her mother cares for her baby while she is at work, and her husband’s parents are in frail health and need constant assistance. Other respondents indicated they wanted to stay in the Bay Area but did not want to take a non-IT job. For example, one senior systems analyst, making $67,000 per year, said he would rather take early retirement and try to find a nonpostal job in his field than take a mail clerk job at $40,000 per year. According to postal officials, employment opportunities may not be very good for individuals wishing to stay in the Bay Area. Postal officials said in July 2002 that the job market for IT positions in the Bay Area had changed significantly in the last 2 years. They said that 2 years ago, IT jobs were plentiful in the Bay Area and that the Service had difficulty attracting and keeping IT personnel. More recently, however, they reported that those conditions had changed. 
They said that in the current economic environment, IT positions in the Bay Area have become difficult to find. Additionally, postal officials indicated that they could no longer accommodate San Mateo bargaining unit employees who might want to stay in the Bay Area as mail clerks because the Service now has excess employees in the Bay Area. Fifty-two of the 167 San Mateo respondents who did not anticipate relocating to another postal IT center reported that they would likely exercise their option to retire. Forty-five of those respondents said their retirement would be earlier than planned, and 39 said they would face financial difficulties in retirement. Of the respondents indicating that they would likely retire, 22 said it would be very difficult to maintain or find adequate housing on their reduced incomes. For example, one respondent said that her reduced income would be insufficient to cover her health benefits, mortgage payments, and other expenses and that finding supplementary employment would be difficult at her age. One respondent who was an EAS employee, age 50 with 25 years of service, said he would likely take early retirement because he does not think that the Service will offer him another job. He indicated that he must stay in the Bay Area because his wife will not be eligible to retire for 5 years, his daughter attends a local college, and his aging parents need regular assistance. Because retirement would considerably reduce his income, he said he would have to find other work in order to meet his mortgage payments and other expenses. When we discussed the survey results with postal officials, they expressed surprise that such a large percentage of affected San Mateo IT Center employees had indicated they would likely stay in the Bay Area and compete for jobs in a tight labor market, rather than relocate and continue their employment with the Postal Service. 
The officials said they expected that more affected IT employees would eventually decide to relocate when they encountered difficulty finding other suitable jobs in the Bay Area. The officials also said that based on past experience, about one-third of all employees would likely relocate if the Service closes the San Mateo IT Center. As previously noted, the Service plans to offer all EAS employees the services of a private job search firm to help them find nonpostal jobs if they decide to seek outside employment. However, the Service has no current plans to extend these services to bargaining unit employees because they are guaranteed jobs at another postal IT center. Nearly all of the 36 San Mateo survey respondents who indicated that they would likely relocate to another postal IT center anticipate social impacts if they relocate. As a group, these respondents were most concerned about the impact that relocating would have on their spouses and/or other family members. Twenty-nine of the 36 San Mateo respondents (81 percent) who said they intend to relocate to other postal IT centers are over age 45, and many are concerned that the relocation would split up their families. Twenty-two of the 36 respondents have spouses; and 9 said that if they relocated, their spouses would probably not relocate with them. The nine respondents reported several reasons why they did not believe their spouses would relocate: spouse has lived in the Bay Area for 30 or more years, spouse has aging parents in the Bay Area who need care, spouse has children who would not be relocating, and spouse’s job pays well (about half the family income). Twelve of the 18 San Mateo respondents who had children and/or grandchildren living in their households reported that at least 1 family member would be left behind. One respondent commented that leaving his 7-year-old son with no father present could have a life-long negative impact. 
Another said relocation would separate him from his wife and three children for 5 to 6 years until he was eligible to retire. He stated that the economic impacts for him would be (1) the cost of maintaining two households, (2) travel costs between the Bay Area and his new job for family visits, and (3) higher telephone bills. Additionally, 30 of the 36 respondents reported having aging parents or relatives in the Bay Area, and 25 said relocation would make current or future care for them difficult. In some cases, the respondents reported that their parents relied on them to assist with doctors’ appointments, help them overcome limited English language skills, and be available in case of emergencies. San Mateo respondents likely to relocate also reported concerns with losing community and social ties and with adjusting to new communities, cultures, and work environments. These concerns were particularly common among ethnic minority respondents, who accounted for more than half of those likely to relocate. They expressed concerns about leaving their important cultural ties in the Asian and Hispanic communities of the Bay Area. For example, one respondent commented that his wife, of Asian descent and with limited English language skills, would have a difficult time leaving the Bay Area. Three-fourths of San Mateo respondents who anticipated relocating to another postal IT center reported that the relocation would strain their family relationships, and one commented that it would likely lead to divorce. Although respondents anticipated strained family relationships, several saw the relocation as temporary, until they were eligible to retire. Other San Mateo respondents who said they would likely relocate reported concerns about anticipated economic impacts associated with trailing spouses. Some reported concerns about the possible loss of spousal income and benefits if their working spouses were to relocate and be unable to find employment at the new location. 
For example, eight respondents reported having a trailing spouse who would be looking for employment after relocation, and seven of those thought employment would be difficult for their spouses to find. One respondent commented that relocation would pose a major problem for her spouse, a journalist who had already learned that prospects for such jobs at the new location are poor. Postal officials indicated that in cases where the trailing spouse is also a postal employee, the Service would work with the trailing spouse to find suitable postal employment at the new location. However, officials did not anticipate that the Postal Service would offer employment services to trailing spouses who are not postal employees. Nearly all of the 36 San Mateo respondents who indicated that they would likely relocate reported that the cost, time, and energy involved in moving would be difficult for them. Twenty-three respondents reported being homeowners who will face the prospect of selling and buying homes. While relocating to a lower housing cost area could provide them some financial advantage, several respondents commented that the differential costs and the loss of preferential property tax status would make it difficult for them to ever move back to the Bay Area. We have previously reported that when employees lose their jobs, it can be a traumatic experience; therefore, progressive organizations often work with employees to help them through such difficult times. We reported that not only does job loss disrupt employees’ personal lives and plans, but it can also cause stressful concerns about finding another job. Of 25 organizations we surveyed for our 1995 report on job loss issues, 23 had devised programs to help employees who lost their jobs. These programs included job placement assistance, employee and family counseling, relocation assistance, and training. 
Some of the organizations surveyed provided self-administered job placement assistance while others used outside job assistance companies. Also, we have previously reported that the Department of Defense was very successful in minimizing the impacts of maintenance depot closures on employees, primarily through a comprehensive outplacement effort. Our prior work on job loss issues further showed that, in general, organizations that offer job placement assistance to displaced employees also benefit. Providing job placement assistance helped sponsoring organizations (1) avoid lawsuits by displaced employees, (2) reduce unemployment costs, and (3) enhance their reputations in the community by demonstrating that they cared about their employees. Similar to other organizations that have helped displaced employees, the Service has indicated that it plans to provide assistance to San Mateo IT Center employees to minimize the impact, if it decides to close the center. In addition to providing the relocation and separation benefits mentioned earlier, the Service has indicated that it plans to hold a series of meetings with affected San Mateo employees and their families to provide them with needed information on available options and opportunities and to address employees’ questions about the closure. Additionally, as noted earlier, the Service plans to provide the services of a job search firm to San Mateo EAS employees, although bargaining unit employees will not receive similar services because such services are not provided for by the collective bargaining agreement that governs IT bargaining unit employees’ benefits. According to postal officials, services to be provided by the job search firm are to include, among other things, seminars on change management, skills development, resume writing, negotiation skills, how to network, how to find a job on the Internet, and group counseling and coaching. 
In 2000, when the IT Department proposed closing the San Mateo IT Center, it estimated that 33 percent of affected employees would likely relocate to other postal IT centers and possibly need some relocation assistance. The IT Department acknowledged that because it expected that only 33 percent of affected employees would relocate, it would need to rely on contractor support to cover for the loss of knowledgeable employees who would not be relocating from the San Mateo IT Center to other postal IT centers. Our survey of San Mateo employees, however, indicates that the IT Department’s estimate of relocating employees may be overstated and that its need for contractor support could, therefore, be greater than planned. According to our survey results, only 17 percent of San Mateo employees currently anticipate that they would likely relocate to another postal IT center if the Service closes the San Mateo IT Center. Fifty-nine percent of the respondents (who indicated they would likely stay in the Bay Area and provided reasons for not relocating) identified concerns with their spouses’ careers as a primary motivator in their decision not to relocate. Further, seven of the eight respondents relocating with trailing spouses who would be looking for jobs at the new location expressed concern that the spouses would have difficulty finding suitable employment. Relocation experts have noted that because of the prevalence of dual-income households in the workforce, employees who must make a relocation choice often base their decision on whether or not their trailing spouses can find suitable employment at the new location. Given this, relocation experts report that more companies are providing employment assistance to trailing spouses, such as resume preparation and review, paying job finder’s fees, assisting with finding employment, and reimbursing trailing spouses for lost income while they seek employment at the new location. 
When the IT Department first proposed closing the San Mateo IT Center in 2000, economic conditions and the employment outlook in the Bay Area were noticeably better than they are today, so much so that the Service anticipated that postal positions would be available in the Bay Area for many displaced employees who did not relocate. However, because of changed economic conditions, recent drops in mail volumes, and advances in automated mail processing, the Service no longer expects that it will have job openings to accommodate employees who do not relocate. As noted earlier, although the Service has indicated that it plans to make the services of a private job search firm available to San Mateo EAS employees, the Service has no plans at this time to extend similar services to San Mateo IT Center bargaining unit employees because their benefits are governed by the Service’s collective bargaining agreement with the APWU. Consequently, many bargaining unit employees will face a dilemma if the Service closes the San Mateo IT Center. If bargaining unit employees do not relocate, they will likely lose their postal employment and would not have the services of a private job search firm to help them find other employment. Additionally, bargaining unit and EAS employees with working spouses who are not postal employees face a dilemma concerning the impact a relocation would have on their trailing spouses’ careers and their families’ household incomes. That is, if these employees relocate, will their trailing spouses be able to find suitable employment at the new location, or will household incomes and spouses’ careers suffer? By not addressing employees’ concerns about trailing spouses, the Service may be missing an opportunity to entice more of its San Mateo IT Center employees to relocate to other postal IT centers, thereby exposing itself to higher than necessary contractor costs. 
If the Service decides to close the San Mateo IT Center, it is required under its agreement with APWU to notify the union of its decision and offer to meet with national level officials to discuss the closure’s impact on affected employees. Historically, discussions preceding previous closures have resulted in additional provisions for affected bargaining unit employees, which were specified in Memorandums of Understanding that modified existing collective bargaining agreements. According to postal officials, the Service expects that if it decides to close the San Mateo IT Center, the APWU will request a meeting to discuss, among other things, additional benefits for affected bargaining unit employees. The Service is following its Investment Review and Approval Process as it moves toward a decision about closing the San Mateo IT Center. However, economic conditions have changed significantly since 2000 when the IT Department prepared the DAR in support of closing the San Mateo IT Center. Also, the DAR does not reflect the impact that the Service’s announced plans to automate and reengineer its field accounting activity—which involves closing its 85 district accounting offices and consolidating the residual activities into its 3 Accounting Service Centers—could have on projected savings associated with the proposal to close the San Mateo IT Center. Therefore, the Service may be using dated information as it goes about making its closure decision regarding the San Mateo IT Center. Finally, the employment outlook is not nearly as encouraging now as it was in 2000 when employment conditions in the Bay Area were better and the Service indicated it would have postal positions available for bargaining unit employees who did not want to relocate. Given these changed conditions, bargaining unit employees who do not relocate might encounter difficulty in finding employment in the Bay Area. 
Further, some San Mateo IT employees may be reluctant to relocate because of concerns that their trailing spouses might have difficulty finding jobs at the new location. We recommend that before the Service makes its decision regarding whether to close the San Mateo IT Center, the PMG direct the IT Department to review and update the economic assumptions and analyses used in the San Mateo DAR and make revisions, if appropriate, to better reflect current economic conditions and recent plans to automate and reengineer its field accounting activity. If the Service decides to close the San Mateo IT Center, we recommend that the PMG consider:

- During discussions with APWU regarding the IT center’s closure, offering to help bargaining unit employees find jobs if they decide to remain in the Bay Area.
- During discussions with APWU regarding the IT center’s closure, offering some assistance—such as resume preparation and review services—to the trailing spouses of bargaining unit employees who decide to relocate to another postal IT center.
- Providing some assistance—such as resume preparation and review services—to the trailing spouses of EAS employees who decide to relocate to another postal IT center.

The Postal Service provided comments on a draft of this report in a letter from the chief financial officer and executive vice president dated December 19, 2002. These comments are summarized below and are reprinted in appendix II. Postal officials also provided technical and clarifying comments, which we have incorporated into the report where appropriate. Although the Service did not comment on our findings, it did agree with our recommendations. 
The Service reiterated that it has not yet made a decision regarding the proposed closure of the San Mateo IT Center and will reevaluate the proposed closure as part of its overall strategy to rationalize its administrative infrastructure and meet its data processing needs with the appropriate facilities, technologies, and staff. The Service indicated that it would implement our recommendation that the DAR be reviewed and updated before a decision is made about closing the San Mateo IT Center. Specifically, the Service said that before making any decisions regarding possible disposition of the building and property, it would update the information in the DAR. The Service further stated that it was aware that conditions in the commercial building market in the Bay Area have changed since the San Mateo DAR was submitted. The Service stated that it might need to revisit the proposal to sell the building in light of an updated assessment of the building’s fair market value, the viability of potential outlease or leaseback options, and the space needs of the expanded Accounting Service Center. In response to our other recommendations that the Service consider offering to (1) help bargaining unit employees find jobs if they remain in the Bay Area and (2) provide some assistance to help the trailing spouses of employees who relocate find jobs, the Service indicated that it would try to minimize the negative effects of relocation. The Service said that if it determines that closing the San Mateo IT Center and relocating its functions to other postal IT centers are still critical to the Service’s IT strategy, the Service will adhere to the provisions of its bargaining unit agreements. The Service further stated that to the extent possible, consistent with those agreements, it will attempt to mitigate the negative impacts that relocation may have on employees and their families. 
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days after the date of this letter. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members, Senate Committee on Governmental Affairs and its Subcommittee on International Security, Proliferation, and Federal Services; and to the Chairman, House Committee on Government Reform. We will also send copies of this report to the Postmaster General and Chief Executive Officer, U.S. Postal Service, and the President of the American Postal Workers Union. In addition, this report will be available at our Web site at http://www.gao.gov. Major contributors to this report included Gerald P. Barnes, Isidro L. Gomez, Stuart M. Kaufman, Roger L. Lively, Donald J. Porteous, and Charles F. Wicker. If you have any questions about this letter or the appendixes, please contact me or Mr. Barnes at (202) 512-2834 or at ungarb@gao.gov or barnesgp@gao.gov. According to the responses to our survey, San Mateo IT employees, on average, are older and have more years of service than displaced New York IT Center employees at the time of the center’s closure. The demographics of San Mateo IT Center employees indicating they would not likely relocate and displaced New York IT Center employees who did not relocate were more similar than the demographics of San Mateo IT Center employees who indicated they would likely relocate and displaced New York IT employees who relocated. Table 1 provides comparative demographic data for San Mateo and displaced New York IT employees. Data are shown for employees who relocated (or are likely to relocate in the case of San Mateo employees) and those who did not relocate (or are likely not to relocate in the case of San Mateo employees). Because our data for New York IT Center employees do not include all displaced New York employees, the data may not be fully representative.
For example, during the 9 years since the New York IT Center closed, some of its displaced employees may have retired and are therefore not reflected in our data. If data were available for this group, they would have tended to increase the average age and years of service of displaced New York IT Center employees at the time of closure. Additionally, displaced New York IT Center employees had more favorable options/opportunities to lessen the impact of the closure than San Mateo IT Center employees are likely to have. For example, all displaced New York IT Center employees had the option of relocating with their jobs to another postal IT center or continuing to work for the Service in the same geographic area (though not necessarily in an IT position), whereas San Mateo IT Center employees likely will not have this option. Table 2 displays the range of options/opportunities likely to be available to San Mateo IT Center employees and options/opportunities that were available to displaced New York IT Center employees. Options/opportunities are similar for early retirements, assistance programs to help EAS employees find employment outside the Service, and help for trailing spouses who are postal employees and need to find suitable employment at the new location. Options/opportunities are not similar with regard to buy-outs, ability to continue employment with the Service in some capacity, saved-pay protections, and the availability of local postal and nonpostal jobs.

While the U.S. Postal Service (USPS) rationalizes its infrastructure, it is weighing a proposal to close and sell its Information Technology (IT) center located in San Mateo, California. According to USPS, closing the IT center and selling the facility should save USPS about $74 million over the next 10 years and result in increased efficiency. All IT union employees and about half of IT management employees will be offered the opportunity to relocate with their jobs to other postal IT centers.
The San Mateo IT Center also houses an Accounting Service Center whose functions and staff are to be moved into leased space in the San Francisco Bay Area. GAO undertook this study to, among other things, identify the process USPS is following in making its decision about closing the IT center and determine the impact such a closure would have on IT employees at the center. USPS is following its Investment Review and Approval Process in reaching a decision about closing the San Mateo IT Center. To support the investment needed to close the IT center, the process requires--and USPS prepared--analyses based on prevailing economic and other conditions. However, these conditions have changed since USPS prepared the analyses in 2000. In 2001, USPS announced plans to automate and reengineer its field accounting activity, which will result in USPS closing its 85 district accounting offices and consolidating the residual activities into its 3 Accounting Service Centers. USPS has not updated its analyses to reflect the changed conditions, but said that it planned to do so. San Mateo IT employees anticipate mostly negative social impacts if they relocate and mostly negative economic impacts if they stay in the Bay Area. Of the 213 San Mateo IT employees who responded to our survey, 36 (17 percent) indicated they would likely relocate, although most would be offered jobs at other postal IT centers. In 2000, USPS' economic analyses included an assumption--and San Mateo IT employees believed--that local jobs would be available for those individuals who did not want to relocate. However, local postal jobs are no longer available, and nonpostal IT job opportunities have tightened considerably in the Bay Area. GAO has previously noted that progressive organizations that are restructuring often provide job placement assistance to employees faced with losing their jobs. USPS plans to offer job assistance to management employees seeking nonpostal jobs. 
However, USPS does not plan to offer job assistance to union employees because such assistance is not covered by their collective bargaining agreement. Because the employment outlook in the Bay Area has changed dramatically, union employees who decide not to relocate may encounter difficulty finding employment in the Bay Area.
In January 1999, INS issued its Interior Enforcement Strategy. This strategy focused resources on areas that would have the greatest impact on reducing the size and annual growth of the illegal resident population. Certain criteria were used to develop the priorities and activities of the strategy. The criteria focused on potential risks to U.S. communities and persons, costs, capacity to be effective, impact on communities, potential impact on reducing the size of the problem, and potential value for prevention and deterrence. The strategy established the following five areas in priority order:

1. Identify and remove criminal aliens and minimize recidivism. Under this strategic priority, INS was to identify and remove criminal aliens as they came out of the federal and state prison systems and those convicted of aggravated felonies currently in probation and parole status.

2. Deter, dismantle, and diminish smuggling or trafficking of aliens. This strategic priority called for INS to disrupt and dismantle the criminal infrastructure that encourages and benefits from illegal migration. INS efforts were to start in source and transit countries and continue inside the United States, focusing on smugglers, counterfeit document producers, transporters, and employers who exploit and benefit from illegal migration.

3. Respond to community reports and complaints about illegal immigration. In addition to responding to local law enforcement issues and needs, this strategic priority emphasized working with local communities to identify and address problems that arise from the impact of illegal immigration, based on local threat assessments.

4. Minimize immigration benefit fraud and other document abuse. Under this strategic priority, INS was to aggressively investigate and prosecute benefit fraud and document abuse to promote integrity of the legal immigration system.

5. Block and remove employers’ access to undocumented workers.
The strategy emphasized denying employers access to unauthorized workers by checking their compliance with the employment verification requirements in the Immigration Reform and Control Act of 1986. Coupled with its efforts to control smuggling activity, this effort could have a multiplier effect on employers’ access to illegal workers and on the overall number of illegal residents in the country. Figure 1 shows that INS had generally allocated its interior enforcement resources consistent with these priorities and that the workyears devoted to several of INS’s interior enforcement efforts had either declined or stayed about the same between fiscal years 1998 and 2002. Our work has shown that INS faced numerous daunting enforcement issues, as will BICE as it assumes responsibility for the strategy. For example, the potential pool of removable criminal aliens and fugitives numbers in the hundreds of thousands. Many are incarcerated in hundreds of federal, state, and local facilities, while others are fugitives at large across the country. The number of individuals smuggled into the United States has increased dramatically, and alien smuggling has become more sophisticated, complex, organized, and flexible. Each year, thousands of aliens illegally seek immigration benefits, such as work authorization and change of status, and some of these aliens use these benefits to enable them to conduct criminal activities. Hundreds of thousands of aliens unauthorized to work in the United States have used fraudulent documents to circumvent the process designed to prevent employers from hiring them. In many instances, employers are complicit in this activity. Given the nature, scope, and magnitude of these activities, BICE needs to ensure that it is making the best use of its limited enforcement resources.
We found that fundamental management challenges exist in several of the interior enforcement programs and that addressing them will require the high-level attention and concerted efforts of BICE. In several reports we noted that INS did not believe it had sufficient staff to reach its program goals. Having data on how to effectively allocate staff and placing sufficient staff in the right locations is important if BICE is to achieve program goals. Staff shortages had contributed to INS’s inability to promptly remove the majority of criminal aliens after they had completed their prison sentences. In 1997 INS did not place into removal proceedings 50 percent of potentially deportable criminal aliens who were released from federal prisons and from state prisons in 5 states. In 1999 we reported that, although the removal of criminal aliens was an INS management priority, INS faced the same staff shortage issues in 1997 as it had in 1995. In particular, agent attrition—about one-third of the workforce—continued to impede INS’s ability to meet its program goals. INS had told us that since 1997, the attrition rates of agents in this program had stabilized and that, in fiscal year 2003, the agents from this program would be reclassified as detention removal officers, which INS believed should further help reduce attrition. Even if INS had additional staff working in these program areas, it lacked good management information to determine how many staff it needed to meet its program goals and how best to allocate staff given the limited resources it did have. With respect to its program for removing incarcerated criminal aliens, INS told us that beginning in fiscal year 2002, the agency implemented our recommendation to use a workload analysis model. This was to help identify the resources the agency needed for its criminal alien program in order to achieve overall program goals and support its funding and staffing requests.
We have not reviewed this new model to ascertain its usefulness. With respect to alien smuggling, INS lacked field intelligence staff to collect and analyze information. Both the 1998 and 1999 INS Annual Performance Plan reports stated that the lack of intelligence personnel hampered the collection, reporting, and analysis of intelligence information. Although INS’s Intelligence Program proposed that each district office have an intelligence unit, as of January 2000, 21 of INS’s 33 districts did not have anyone assigned full-time to intelligence-related duties. Our ongoing work at land ports of entry shows this to be a continuing problem. The worksite enforcement program received a relatively small portion of INS’s staffing and budget. In fiscal year 1998, INS completed a total of 6,500 worksite investigations, which equated to about 3 percent of the estimated number of employers of unauthorized aliens. Given limited enforcement resources, BICE needs to assure that it targets those industries where employment of illegal aliens poses the greatest potential risk to national security. The program now has several initiatives underway that target sensitive industries. INS had long-standing difficulty developing and fielding information systems to support its program operations, and effectively using information technology remained a challenge. For example, in 2002 we reported that benefit fraud investigations had been hampered by a lack of integrated information systems. The operations units at the four INS service centers that investigated benefit fraud operated different information systems that did not interface with each other or with the units that investigated benefit fraud at INS district offices. As a result, sharing information about benefit applicants was difficult. The INS staff who adjudicated applications did not have routine access to INS’s National Automated Immigration Lookout System (NAILS).
Not having access to NAILS, or not using it, meant that officers may have been making decisions without significant information and that benefits may have been granted to individuals not entitled to receive them. Thus, INS was not in the best position to review numerous applications and detect patterns, trends, and potential schemes for benefit fraud. Further, in 2002 we reported that another INS database, the Forensic Automated Case and Evidence Tracking System (FACETS), did not contain sufficient data for managers to know the exact size and status of the laboratory’s pending workload or how much time was spent on each forensic case by priority category. As a result, managers were not in the best position to make fact-based decisions about case priorities, staffing, and budgetary resource needs. With respect to the criminal alien program, in 1999 we reported that INS lacked a nationwide data system containing the universe of foreign-born inmates for tracking the hearing status of each inmate. In response to our recommendation, INS developed a nationwide automated tracking system for the Bureau of Prisons and deployed the system to all federal institutional hearing program sites. INS said that it was working with the Florida Department of Corrections to integrate that state’s system with INS’s automated tracking system. INS also said that it planned to begin working with New York, New Jersey, and Texas to integrate their systems and then work with California, Illinois, and Massachusetts. We have not examined these new systems to determine whether they were completed as planned or to ascertain their effectiveness.
In 2000 we reported that INS lacked an agencywide automated case tracking and management system; as a result, antismuggling program managers could not monitor their ongoing investigations, determine if other antismuggling units were investigating the same target, or know if previous investigations had been conducted on a particular target. In response to our recommendation, INS deployed an automated case tracking and management system for all of its criminal investigations, including alien smuggling investigations. Again, we have not examined the new system to ascertain its effectiveness. Our review of the various program components of the interior enforcement strategy found that working-level guidance was sometimes lacking or nonexistent. INS had not established guidance for opening benefit fraud investigations or for prioritizing investigative leads. Without such criteria, INS could not ensure that the highest-priority cases were investigated and resources were used optimally. INS’s interior enforcement strategy did not define the criteria for opening investigations of employers suspected of criminal activities. In response to our recommendation, INS clarified the types of employer-related criminal activities that should be the focus of INS investigations. INS’s alien smuggling intelligence program had been impeded by a lack of understanding among field staff about how to report intelligence information. Staff were unclear about guidelines, procedures, and effective techniques for gathering, analyzing, and disseminating intelligence information. They said that training in this area was critically needed. INS had not established outcome-based performance measures that would have helped it assess the results of its interior enforcement strategy.
For example, in 2000 we reported that while INS had met its numeric goals for the number of smuggling cases presented for prosecution in its antismuggling program, it had not yet developed outcome-based measures that would indicate progress toward the strategy’s objective of identifying, deterring, disrupting, and dismantling alien smuggling. This was also the case for the INS intelligence program. INS had not developed outcome-based performance measures to gauge the success of the intelligence program in optimizing the collection, analysis, and dissemination of intelligence information. In 2002 we reported that INS had not yet established outcome-based performance measures that would help it assess the results of its benefit fraud investigations. Additionally, INS had not established goals or measurement criteria for the service center operations units that conduct fraud investigation activities. INS’s interior enforcement strategy did not clearly describe the specific measures INS would use to gauge its performance in worksite enforcement. For example, in 1999 we reported that the strategy stated that INS would evaluate its performance on the basis of such things as changes in the behavior or business practices of persons and organizations, but did not explain how it expected the behavior and practices to change. Although INS indicated that it would gauge effectiveness in the worksite area by measuring change in the wage scales of certain targeted industries, it left unanswered a number of questions about how it would do this. For example, INS did not specify how wage scales would be measured; what constituted a targeted industry; and how it would relate any changes found to its enforcement efforts or other immigration-related causes. The strategy stated that specific performance measurements would be developed in the annual performance plans required by the Government Performance and Results Act.
According to INS’s fiscal year 2003 budget submission, the events of September 11th required INS to reexamine strategies and approaches to ensure that INS efforts fully addressed threats to the United States by organizations engaging in national security crime. As a result, with regard to investigating employers who may be hiring undocumented workers, INS planned to target investigations of industries and businesses where there is a threat of harm to the public interest. However, INS had not set any performance measures for these types of worksite investigations. Since the attacks of September 11, 2001, and with the formation of DHS, a number of management challenges are evident. Some of the challenges discussed above carry over from the INS, such as the need for sound intelligence information, efficient use of resources and management of workloads, information systems that generate timely and reliable information, clear and current guidance, and appropriate performance measures. Other challenges are emerging. These include creating appropriate cooperation and collaboration mechanisms to assure effective program management, and reinforcing training and management controls to help assure compliance with DHS policies and procedures and the proper treatment of citizens and aliens. BICE will need to assure that appropriate cooperation and collaboration occurs between it and other DHS bureaus. For example, both the Border Patrol, now located in the Bureau of Customs and Border Protection (BCBP), and BICE’s immigration investigations program conducted alien smuggling investigations prior to the merger into DHS. These units operated through different chains of command with different reporting structures. As a result, INS’s antismuggling program lacked coordination, resulting in multiple antismuggling units overlapping in their jurisdictions, making inconsistent decisions about which cases to open, and functioning autonomously and without a single chain of command. 
It is unclear at this time how the antismuggling program will operate under DHS. Should both BCBP’s Border Patrol and BICE’s Investigations program continue to conduct alien smuggling investigations, Under Secretary Hutchinson will need to assure that coordination and collaboration exist to overcome previous program deficiencies. The Bureau of Citizenship and Immigration Services (BCIS) is responsible for administering services such as immigrant and nonimmigrant sponsorship, work authorization, naturalization of qualified applicants for U.S. citizenship, and asylum. Processing benefit applications is an important DHS function that should be done in a timely and consistent manner. Those who are eligible should receive benefits in a reasonable period of time. However, some try to obtain these benefits through fraud, and investigating fraud is the responsibility of BICE’s Immigration Investigations program. INS’s approach to addressing benefit fraud was fragmented and unfocused. INS’s interior enforcement strategy did not address how the different INS components that conducted benefit fraud investigations were to coordinate their investigations. Also, INS had not established guidance to ensure the highest-priority cases were investigated. Secretary Ridge will need to ensure the two bureaus work closely to assure timely adjudication for eligible applicants while identifying and investigating potential immigration benefit fraud cases. BICE’s Intelligence Program is responsible for collecting, analyzing, and disseminating immigration-related intelligence. Immigration-related intelligence is needed by other DHS components such as Border Patrol agents and inspectors within BCBP and personnel within BCIS adjudicating immigration benefits. BICE will need to develop an intelligence program structure to ensure intelligence information is disseminated to the appropriate components within DHS’s other bureaus.
Since the attacks of September 11, 2001, and with the formation of DHS, the linkages between immigration enforcement and national security have been brought to the fore. Immigration personnel have been tapped to perform many duties that previously were not part of their normal routine. For example, as part of a special registration program for visitors from selected foreign countries, immigration investigators have been fingerprinting, photographing, and interviewing aliens upon entry to the United States. Immigration investigators have also participated in anti-terrorism task forces across the country and helped interview thousands of nonimmigrant aliens to determine what knowledge they may have had about terrorists and terrorist activities. As part of its investigation of the attacks of September 11, the Justice Department detained aliens on immigration charges while investigating their potential connection with terrorism. An integrated Entry/Exit System, intended to enable the government to determine which aliens have entered and left the country, and which have overstayed their visas, is currently under development and will rely on BICE investigators to locate those who violate the terms of their entry visas. All of these efforts attest to the pivotal role of immigration interior enforcement in national security and the expanded roles of investigators in the fight against terrorism. It is important that BICE investigators receive training to perform these expanded duties and help assure that they effectively enforce immigration laws while recognizing the rights of citizens and aliens. It is also important that DHS reinforce its management controls to help assure compliance with DHS policies and procedures.

Implementation of the Department of Homeland Security's (DHS) Immigration Interior Enforcement Strategy is now the responsibility of the Bureau of Immigration and Customs Enforcement (BICE).
This strategy was originally created by the Immigration and Naturalization Service (INS). In the 1990s, INS developed a strategy to control illegal immigration across the U.S. border and a strategy to address enforcement priorities within the country's interior. In 1994, INS's Border Patrol issued a strategy to deter illegal entry. The strategy called for "prevention through deterrence"; that is, to raise illegal aliens' risk of being apprehended to the point where they would consider it futile to try to enter. The plan called for targeting resources in a phased approach, starting with the areas of greatest illegal activity. In 1999, INS issued its interior enforcement strategy designed to deter illegal immigration, prevent immigration-related crimes, and remove those illegally in the United States. Historically, Congress and INS have devoted over five times more resources, in terms of staff and budget, to border enforcement than to interior enforcement. INS's interior enforcement strategy was designed to address (1) the detention and removal of criminal aliens, (2) the dismantling and diminishing of alien smuggling operations, (3) community complaints about illegal immigration, (4) immigration benefit and document fraud, and (5) employers' access to undocumented workers. These components remain in the BICE strategy. INS faced numerous challenges in implementing the strategy. For example, INS lacked reliable data to determine staff needs, reliable information technology, clear and consistent guidelines and procedures for working-level staff, effective collaboration and coordination within INS and with other agencies, and appropriate performance measures to help assess program results. As BICE assumes responsibility for strategy implementation, it should consider how to address these challenges by improving resource allocation, information technology, program guidance, and performance measurement.
The creation of DHS has focused attention on other challenges to implementing the strategy. For example, BICE needs to coordinate and collaborate with the Bureau of Citizenship and Immigration Services (BCIS) for the timely and proper adjudication of benefit applications, and with the Bureau of Customs and Border Protection (BCBP) to assist in antismuggling investigations and sharing intelligence. In addition, BICE needs to assure that training and internal controls are sufficient to govern investigators' antiterrorism activities when dealing with citizens and aliens.
Securing transportation systems and facilities is complicated: security measures to address potential threats must be balanced against the need to facilitate the flow of people and goods. These systems and facilities are critical components of the U.S. economy and are necessary for supplying goods throughout the country and supporting international commerce. U.S. transportation systems and facilities move over 30 million tons of freight and provide approximately 1.1 billion passenger trips each day. The Ports of Los Angeles and Long Beach estimate that they alone handle about 43 percent of the nation’s oceangoing cargo. The importance of these systems and facilities also makes them attractive targets to terrorists. These systems and facilities are vulnerable and difficult to secure given their size, easy accessibility, large number of potential targets, and close proximity to urban areas. A terrorist attack on these systems and facilities could cause a tremendous loss of life and disruption to our society. An attack would also be costly. According to recent testimony by a Port of Los Angeles official, a 2002 labor dispute led to a 10-day shutdown of West Coast port operations, costing the nation’s economy an estimated $1.5 billion per day. A terrorist attack on a port facility could have a similar or greater impact. One potential security threat stems from those individuals who work in secure areas of the nation’s transportation system, including seaports, airports, railroad terminals, mass transit stations, and other transportation facilities. It is estimated that about 6 million workers, including longshoremen, mechanics, aviation and railroad employees, truck drivers, and others access secure areas of the nation’s estimated 4,000 transportation facilities each day while performing their jobs. Some of these workers, such as truck drivers, regularly access secure areas at multiple transportation facilities.
Ensuring that only workers who do not pose a terrorist threat are allowed unescorted access to secure areas is important in helping to prevent an attack. According to TSA and transportation industry stakeholders, many individuals who work in secure areas are currently not required to undergo a background check or a stringent identification process in order to access secure areas. For example, according to stakeholders at several ports, truck drivers need only present a driver’s license, which can be easily obtained or falsified, to access secure areas of the nation’s ports. In addition, without a standard credential that is recognized across modes of transportation and facilities, many workers must obtain multiple credentials to access each transportation facility they enter. For example, in Florida, truck drivers who deliver goods to multiple ports in the state must obtain credentials for as many as 13 individual ports. With so many different credentials in use, it may be difficult to verify the authenticity of all of them. In the aftermath of the September 11, 2001, terrorist attacks, the Aviation and Transportation Security Act (ATSA) was enacted in November 2001. Among other things, ATSA required TSA to work with airport operators to strengthen access control points in secure areas and consider using biometric access control systems to verify the identity of individuals who seek to enter a secure airport area. In response to ATSA, TSA established the TWIC program in December 2001 to mitigate the threat of terrorists and other unauthorized persons accessing secure areas of the entire transportation network, by creating a common identification credential that could be used by workers in all modes of transportation.
In November 2002, the Maritime Transportation Security Act of 2002 (MTSA) was enacted and required the Secretary of Homeland Security to issue a maritime worker identification card that uses biometrics, such as fingerprints, to control access to secure areas of seaports and vessels, among other things. The responsibility for securing the nation’s transportation system and facilities is shared by federal, state, and local governments, as well as the private sector. At the federal level, TSA, the agency responsible for the security of all modes of transportation, has taken the lead in developing the TWIC program, while the Coast Guard is responsible for developing maritime security regulations and ensuring that maritime facilities and vessels comply with these regulations. As a result, TSA and the Coast Guard are working together to implement TWIC in the maritime sector. According to TSA officials, TWIC is being implemented in the maritime sector first to meet MTSA requirements and because the aviation sector already has established systems to control access to secure areas. According to TSA, the agency is considering extending the program to other modes of transportation. Most seaports, airports, mass transit stations, and other transportation systems and facilities in the United States are owned and operated by state and local government authorities and private companies. As such, certain components of the TWIC program, such as installing access control systems (for example, card readers), will be the responsibility of these state and local governments and private industry stakeholders. For example, at most seaports, the private companies that operate the terminals are responsible for controlling access to secure areas, while at other ports, local governments handle this responsibility.
As a result, the responsibility for implementing certain components of the TWIC program at each facility will be shared between local governments and the private sector. TSA—through a private contractor—tested the TWIC program from August 2004 to June 2005 at 28 transportation facilities around the nation, including 22 port facilities, 2 airports, 1 rail facility, 1 maritime exchange, 1 truck stop, and 1 U.S. Postal Service facility. In August 2005, TSA and the testing contractor completed a report summarizing the results of the TWIC testing. TSA also hired an independent contractor to assess the performance of the TWIC testing contractor. Specifically, the independent contractor conducted its assessment from March 2005 to January 2006 and evaluated whether the testing contractor met the requirements of the testing contract. The independent contractor issued its final report on January 25, 2006. Since its creation, the TWIC program has received about $90 million in funding for program development and testing. Table 1 provides a summary of TWIC program funding since fiscal year 2003. In December 2004, we reported on the challenges TSA faced in implementing the TWIC program, such as developing regulations and a comprehensive plan for managing the program. We also reported on several factors that caused TSA to miss its initial August 2004 target date for issuing TWIC cards, including (1) difficulty obtaining approval from DHS to test the TWIC program; (2) delays in developing cost-benefit and alternatives analyses for the program; and (3) difficulty determining which TWIC card technologies were best suited for the port environment. We recommended that TSA employ industry best practices for project planning and management by developing a comprehensive project plan for managing the program and specific detailed plans for risk mitigation and cost-benefit and alternatives analyses.
DHS generally agreed with these recommendations and subsequently developed plans to help it manage the TWIC program, ensure quality, and assess and mitigate the risks to the program. According to TSA, the agency also developed a cost model to assist in developing program budget estimates. According to TSA, the TWIC program, under the proposed rule issued in May 2006, is to consist of key components designed to enhance security (see fig. 1). These include: Enrollment: Transportation workers are to be enrolled in the TWIC program at enrollment centers by providing personal information, such as a Social Security number and address, digital photographs, and fingerprints. Workers who are unable to provide quality fingerprints are to provide an alternate authentication mechanism, such as a digital photograph. Background checks: TSA is to conduct background checks on each worker to ensure that individuals do not pose a threat. These are to include several components. First, TSA is to conduct a security threat assessment to make sure that the worker is not listed in any terrorism databases or on a terrorism watch list, such as TSA’s No-Fly and Selectee lists. Second, a Federal Bureau of Investigation criminal history records check is to be conducted to determine whether the worker has any disqualifying criminal offenses. Third, workers’ immigration status is to be checked by U.S. Citizenship and Immigration Services. Workers are to have the opportunity to appeal the results of the background check or request a waiver if they do not pass the check. TWIC card production: After TSA determines that a worker has passed the background checks, the agency provides transportation worker information to a federal card production facility, where the TWIC card is to be personalized for the worker, manufactured, and then sent back to the enrollment center. Card issuance: Transportation workers are to be informed when their cards are ready to be picked up at enrollment centers.
Privilege granting: TWIC cards are to be activated at enrollment centers, and workers will choose a personal identification number. Transportation facility security officials will then grant workers access to secure areas on an individual basis. Workers are then to use their TWIC cards to match the card to the card holder when accessing secure areas through biometric access control systems. Card revocation: Local facilities can download or receive from TSA real-time lists of workers deemed to pose a threat or whose cards have been lost or stolen. Facilities can then remove these workers’ access privileges to secure areas. TWIC cards are to be renewed and background checks repeated every 5 years. Cards will be reissued to workers if lost or stolen. In May 2006, DHS issued a proposed rule that describes the requirements of the TWIC program that the owners and operators of maritime facilities and vessels would be required to implement. Table 2 provides an overview of the requirements in the TWIC proposed rule. In the TWIC proposed rule, TSA and the Coast Guard present cost estimates for implementing the TWIC program. According to the estimates, the cost of the TWIC program to the federal government and the maritime industry could range from about $777 million to $829 million over the next 10 years. About 40 percent of these costs—$355 million to $378 million—would be incurred in the initial program start-up. According to TSA and the Coast Guard’s cost estimate, about 48 percent of the total cost of the TWIC program will be incurred by the owners and operators of port facilities and vessels. TSA and the Coast Guard estimate that the total cost to these facility and vessel owners and operators will be about $467 million over 10 years, mostly for the installation of access control systems and other technology to operate these systems.
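The enrollment-through-issuance sequence described above can be illustrated as a simple sequential vetting pipeline. The sketch below is illustrative only; the function names, data structures, and sample identifiers are hypothetical and do not reflect TSA’s actual systems.

```python
# Illustrative sketch of the TWIC vetting sequence described above: a worker
# passes only if every background-check component clears. All function names,
# data structures, and identifiers are hypothetical, not TSA's actual systems.

def vet_worker(worker_id, watch_list, criminal_records, immigration_ok):
    """Return (approved, reason); each check mirrors one step in the rule."""
    if worker_id in watch_list:                   # security threat assessment
        return False, "listed in a terrorism database or watch list"
    if worker_id in criminal_records:             # FBI criminal history check
        return False, "disqualifying criminal offense"
    if not immigration_ok.get(worker_id, False):  # immigration status check
        return False, "immigration status not verified"
    return True, "approved for card production"

watch_list = {"W-002"}
criminal_records = {"W-003"}
immigration_ok = {"W-001": True, "W-002": True, "W-003": True}

print(vet_worker("W-001", watch_list, criminal_records, immigration_ok))
# (True, 'approved for card production')
```

In practice each check is performed by a different organization (TSA, the FBI, and U.S. Citizenship and Immigration Services), and a denied worker may appeal or request a waiver under the proposed rule.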
In addition to these costs, TSA and the Coast Guard estimate that they will charge a fee of $149 to produce and issue each TWIC card for the estimated 750,000 workers who will need to receive a card. According to TSA, this fee will cover the cost of the background checks and of card production and issuance. The fee is to be collected from the applicant at the enrollment center at the time of application. In August 2006, DHS decided that the TWIC program would be implemented in the maritime sector using two separate rules, one for enrolling workers and issuing cards and the second for implementing TWIC access control technologies, such as biometric card readers. DHS made the decision to use two separate rules in response to numerous maritime industry concerns about whether the access control technologies necessary to operate the TWIC program will work effectively in the maritime sector. DHS plans to finalize the first TWIC rule, which is expected to cover enrolling workers, conducting background checks, and issuing TWIC cards, by the end of calendar year 2006. TWIC access control technology requirements are expected to be addressed in a second TWIC proposed rule, to be issued after DHS finalizes the first TWIC rule. DHS and industry stakeholders face three major challenges in addressing problems identified during TWIC program testing and ensuring that key components of the TWIC program can work effectively. The first challenge is enrolling and issuing TWIC cards in a timely manner to a significantly larger population of workers than was done during testing of the TWIC program. The second challenge will be ensuring that the technology required to operate the TWIC program, such as biometric card readers, works effectively in the maritime sector.
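As rough, illustrative arithmetic on the fee estimate above (the worker count and per-card fee are TSA estimates from the proposed rule; everything else here is back-of-the-envelope):

```python
# Back-of-the-envelope arithmetic using the estimates quoted above; the
# worker count and fee come from the proposed rule, the rest is illustrative.

workers = 750_000   # estimated workers who will need a TWIC card
fee = 149           # estimated per-card fee, in dollars

fee_total = workers * fee
print(f"Total fee collections: ${fee_total:,}")   # $111,750,000

# Compare against the estimated 10-year program cost range.
program_low, program_high = 777_000_000, 829_000_000
print(f"Fees as a share of total cost: "
      f"{fee_total / program_high:.0%} to {fee_total / program_low:.0%}")
# 13% to 14%
```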
The third challenge DHS faces is balancing the security benefits of the TWIC program, which could prevent a terrorist attack and a resulting costly disruption in maritime commerce, against the impact that the program could have on the daily flow of maritime commerce. TSA and Coast Guard officials told us they are taking steps to improve the enrollment and card issuance process, and plan to obtain additional comments on the access control technology requirements for the TWIC program and the potential impact that the program could have on the flow of maritime commerce as part of a second rulemaking on the TWIC program. Given the large investment required by the federal government and maritime industry to implement the TWIC program, it is important that solutions to these problems be developed and tested prior to implementation to help ensure that the program meets its intended goals without further delays and that government and maritime industry resources are used efficiently. TSA had difficulty meeting its goals for enrolling workers and issuing TWIC cards during testing. Specifically, TSA’s goal was to enroll and issue TWIC cards to 75,000 workers at 28 transportation facilities. However, only about 12,900 workers were enrolled and only about 1,700 TWIC cards were issued to workers at 19 facilities. According to TSA officials and the testing contractor, these problems were caused by difficulties finding volunteers to enroll in the TWIC program during testing and by technical problems, such as difficulties collecting fingerprints from workers at certain testing locations and enrolling large numbers of workers at one time. TSA officials stated that during implementation the agency will use a faster and easier method of collecting fingerprints and will enroll workers individually.
While these actions should address the problems that occurred during testing, TSA faces the challenge during implementation of enrolling and issuing TWIC cards to 750,000 workers at 3,500 maritime facilities and 10,800 vessels—a significantly larger population of workers than was included in TWIC program testing. Another challenge TSA faces is ensuring that workers are not providing false information and counterfeit identification documents when they enroll in the TWIC program. This step is of critical importance in ensuring that a person being issued a TWIC card does not pose a security threat. Because Social Security cards, immigration documents, passports, and other forms of identification can be obtained from fraudulent document providers, the authenticity of these documents must be verified, and personnel who enroll workers must be trained to identify fraudulent documents. During TWIC testing, enrollment personnel were provided some training in identifying fraudulent documents. According to TSA, the TWIC enrollment process to be used during implementation will include using document scanning and verification software to help determine whether identification documents are fraudulent and training personnel to identify fraudulent documents. While it is important that the enrollment process include the capability to prevent workers from using fraudulent identification documents to obtain a TWIC card, details on the approach that TSA will use during implementation are not yet available. In addition, TSA is taking steps to address other problems encountered during testing regarding enrolling workers and issuing TWIC cards in a timely manner. Specifically, TSA has eliminated approaches used at certain locations to collect fingerprints and enroll large groups of workers at one time, which caused problems during testing, and kept approaches to enrolling workers and issuing cards that worked successfully at other locations.
While these actions appear to address these problems, TSA could not provide us with results showing how well these approaches worked at other testing locations. Figure 2 shows an example of an enrollment station used during testing of the TWIC program. The TWIC proposed rule would require each facility and vessel to (1) install and use biometric card readers in the maritime environment to control access to secure areas, (2) link these card readers to the individual facility or vessel access control system, or use handheld card readers, and (3) routinely connect to TSA’s national TWIC database and incorporate updates on TWIC cards that should be revoked because a worker poses a security threat or a TWIC card has been lost or stolen. Our analysis of the results of TWIC program testing and visits to 15 of the 28 testing sites, as well as the concerns expressed by industry stakeholders at public meetings on the TWIC proposed rule, suggest that it may be difficult to implement each of these steps. Furthermore, industry stakeholders are concerned about the cost of implementing and operating biometric card readers, linking the readers to their local access control systems, and connecting to TSA’s national TWIC database. TSA’s recent decision to implement the TWIC program by issuing two separate rules will give the agency more time to consider maritime industry concerns regarding the TWIC access control technology and to develop solutions that will help ensure that TWIC works effectively in the maritime environment. TSA is also working with the National Institute of Standards and Technology (NIST) to ensure that the biometric identification cards and card readers to be used for the TWIC program meet federal standards for identification and access controls.
Industry stakeholders will be required to install biometric TWIC card readers capable of reading a worker’s fingerprint and matching that fingerprint to the worker’s TWIC card in order for the worker to gain unescorted access to secure areas of a facility or vessel. While TSA was able to provide us with the total number of card readers installed at each testing location, it could not tell us which of these card readers were biometric and which were not. According to TWIC testing contractor officials, fewer than half of the 99 card readers installed during TWIC testing were biometric. In addition, only 8 of the 15 testing facilities that we visited tested biometric card readers, and officials at only 2 of these 8 facilities told us that their biometric card readers functioned effectively. For example, at one testing facility, six biometric card readers were installed but were never operational because the testing contractor had difficulty installing the infrastructure to provide electrical power and communications capability to the readers. As a result, the biometric card readers were never used by workers at this facility. According to TSA officials, the agency and the testing contractor did not have the authority or responsibility for installing or repairing facility access control systems and infrastructure during TWIC testing, other than what was agreed to in the initial memorandum of understanding with those facilities. In addition, TSA did not test the use of biometric card readers on vessels at all during testing of the TWIC program, although the TWIC proposed rule requires the use of biometric card readers on vessels during implementation of the program. An independent assessment of TWIC testing also found that 10 of the 18 TWIC testing sites visited encountered problems installing TWIC technologies.
Although the independent assessment does not specify the problems encountered, TSA and the TWIC testing contractor confirmed that some sites had problems installing the infrastructure necessary to operate the TWIC card readers and others had problems effectively interfacing card readers with existing facility access control systems. Figure 3 provides an example of biometric card readers used during testing of the TWIC program. In commenting on the TWIC proposed rule, industry stakeholders expressed concerns regarding TSA’s limited testing of biometric card readers and the challenges of using these readers in the harsh outdoor maritime environment. Stakeholders that have already installed biometric fingerprint-based card readers outdoors stated that these readers did not work effectively in the maritime environment, where they were often damaged or degraded by dirt, wind, salt, and water. Several stakeholders also commented on the design of TWIC card readers, seeking to ensure that the readers would be less susceptible to the elements, such as salt and water. In addition, the TWIC testing contractor recommended that contactless card readers be used during implementation of the TWIC program to more quickly process workers into secure areas and better withstand the harsh maritime environment. According to TSA, the agency will consider these and other industry stakeholder comments regarding TWIC access control technologies as part of the second rulemaking. Several industry stakeholders proposed that TSA conduct additional maritime testing of biometric card readers, including their use on vessels, to provide assurance that the TWIC program technology works effectively before it is implemented nationwide and to ensure that their investments in this technology and infrastructure would be worthwhile.
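The core operation at issue in these comments—a biometric card reader matching a live fingerprint against the credential presented—is a one-to-one verification rather than a database search. A minimal sketch follows; the similarity scoring and threshold are toy placeholders, not an actual fingerprint-matching algorithm:

```python
# Illustrative one-to-one (verification) match, as performed by a biometric
# card reader: the live scan is compared only against the template on the
# presented card, not searched against a database. The similarity scoring
# and threshold below are toy stand-ins for a real matching algorithm.

MATCH_THRESHOLD = 0.80  # hypothetical; real systems tune this to balance
                        # false accepts against false rejects

def similarity(template_a, template_b):
    """Toy similarity: overlap of minutiae-like features (Jaccard index)."""
    a, b = set(template_a), set(template_b)
    return len(a & b) / max(len(a | b), 1)

def verify(card_template, live_scan):
    """Grant only if the live scan matches the card's stored template."""
    return similarity(card_template, live_scan) >= MATCH_THRESHOLD

card = ["m1", "m2", "m3", "m4", "m5"]      # template stored on the card
good_scan = ["m1", "m2", "m3", "m4", "m5"]
worn_scan = ["m1", "m9"]                   # dirt or wear degrades the scan

print(verify(card, good_scan))   # True  -> local system may grant access
print(verify(card, worn_scan))   # False -> reject; worker must retry
```

The sketch makes the stakeholders’ concern concrete: the harsh conditions they describe (dirt, salt, water) degrade the live scan, pushing legitimate workers below the match threshold and producing false rejects.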
Stakeholders also suggested that TSA and the Coast Guard closely coordinate with maritime stakeholders that have implemented or are currently using biometric access control systems. For example, Florida is currently implementing a statewide uniform port access biometric credential program, similar to the TWIC program. Coordinating with Florida and other stakeholders could enable TSA and the Coast Guard to learn from these stakeholders’ experiences, potentially test key components of the TWIC program, and develop solutions to the various implementation challenges identified during testing. As discussed earlier, in August 2006, DHS decided that the TWIC program would be implemented using two separate rules, one for enrolling workers and issuing cards and the second for implementing TWIC access control technologies, such as biometric card readers. DHS made this decision following numerous maritime industry comments about whether the access control technologies necessary to operate the TWIC program will work effectively. According to TSA, the agency is working with NIST to ensure that the biometric identification cards and card readers to be used for the TWIC program meet federal standards for identification and access controls. We requested additional information from TSA on the time frames for the second TWIC rulemaking and how this rulemaking will ensure that TWIC access control technologies, such as biometric card readers, will work effectively in the maritime environment. TSA officials told us that they could not provide any details about the second rulemaking. As a result, it is not clear how TWIC cards will initially be used to permit workers to enter secure areas without requirements for TWIC access control technologies, such as biometric card readers.
Under the TWIC proposed rule, maritime facility and vessel owners and operators would be responsible for installing biometric card readers and linking them to individual facility or vessel access control systems, to ensure that only those with valid TWIC cards, who have been granted access rights by the facility, have unescorted access to secure areas. According to the TWIC testing contractor’s report, only 10 of the 28 TWIC testing facilities linked card readers to the local facility access control system. The report did not specifically discuss the effectiveness of the link between card readers and the facility access control system at these 10 locations. TSA said it was unable to identify the specific testing locations where card readers were linked to local access control systems or any additional results regarding the link between card readers and access control systems. According to TSA and the testing contractor, they encountered difficulties in linking card readers to access control systems during testing because many facilities lacked the infrastructure necessary to do so. For example, TSA and testing contractor officials told us that at most maritime facilities participating in testing, electrical power supplies and high-speed communications lines were not available at all of the access control points where card readers were needed, especially those far away from the facility’s central access control system. As a result, linking card readers to the access control system would have been too difficult and costly to perform during testing. In addition, because TSA did not install TWIC card readers on vessels during testing, the agency did not test the link between card readers and vessel access control systems. Industry stakeholders have expressed concern that TSA conducted only limited testing of the link between biometric card readers and local facility access control systems. 
In addition, the difficulties encountered by the TWIC testing contractor in establishing this link raise questions about how difficult doing so will be during TWIC implementation. For example, some stakeholders stated that they tried but were unable to link biometric card readers to the computers and computer software running their current access control systems. An official at one testing facility told us that his facility spent its own money to hire a technology integrator to link TWIC card readers to the facility access control system because TSA and the testing contractor did not do so during testing of the TWIC program. Stakeholders also expressed concerns that the new biometric TWIC card readers will not be compatible with their existing access control systems and that, as a result, they will incur additional costs if they are required to purchase new access control systems. According to TSA, while facility and vessel owners and operators will be required to install TWIC card readers, it is up to these facilities and vessels whether they want to link these card readers to their access control systems. TSA recently announced that requirements for purchasing and installing card readers will not be implemented until the public has had additional time to comment on that aspect of the TWIC program; the details of this approach are to be explained in the next rulemaking. A key security component of the TWIC program is the ability to quickly revoke a worker’s unescorted access privileges to secure areas if TSA identifies a worker as a security threat or if the worker’s TWIC card is lost or stolen.
This requires that (1) TSA identify that a worker is a security threat or that the worker’s card has been lost or stolen and invalidate the TWIC card in the national TWIC database; (2) TSA quickly communicate information to facilities regarding those workers whose TWIC cards have been invalidated; and (3) the facility remove the worker’s access privileges to secure areas from its local access control system. However, according to TSA, the testing contractor encountered problems in connecting the national TWIC database to local facilities’ access control systems during testing of the TWIC program. As a result, TSA did not test this connection at any of the 28 testing locations. Several TWIC testing facilities that we visited lacked the technology, such as computer systems and high-speed communications lines, to connect with TSA’s national TWIC database to obtain information on workers who may pose a potential threat or whose TWIC cards had been lost or stolen. An independent contractor’s assessment of the testing also found that TSA did not test the connection between the national TWIC database and local facility access control systems. The independent assessment characterized this as a critical failure because a worker posing a threat could access secure areas of a facility if that facility had not been informed that TSA revoked his or her TWIC card. TSA officials stated that, while they did not test the connection between the national TWIC database and facilities in the field, they tested this component in a laboratory. However, TSA officials said they were unable to provide any reports on this laboratory testing. According to TSA officials, under the TWIC proposed rule, this problem will be resolved because facilities and vessels can download updates from the national TWIC database on a regular basis regarding workers who pose a threat, as an alternative to directly connecting with the national database.
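The download alternative just described amounts to maintaining a local copy of an invalid-card ("hot") list and checking every presented card against it. A minimal sketch, with hypothetical names and data:

```python
# Illustrative sketch of the periodic-download alternative described above:
# a facility refreshes a local copy of TSA's invalid-card ("hot") list and
# checks every presented card against it. All names and data are hypothetical.

revoked_cards = set()   # local copy of the downloaded hot list

def refresh_hot_list(downloaded_entries):
    """Replace the local hot list with the latest download (e.g., daily)."""
    revoked_cards.clear()
    revoked_cards.update(downloaded_entries)

def allow_access(card_id, locally_authorized):
    """Grant entry only if the card is not revoked AND the facility has
    individually granted this worker access rights."""
    if card_id in revoked_cards:      # revoked: threat, lost, or stolen card
        return False
    return card_id in locally_authorized

refresh_hot_list(["TWIC-1002"])                # today's download from TSA
authorized = {"TWIC-1001", "TWIC-1002"}        # facility-granted privileges

print(allow_access("TWIC-1001", authorized))   # True
print(allow_access("TWIC-1002", authorized))   # False: on the hot list
```

The sketch also makes the trade-off visible: between downloads, a newly revoked card would still pass the local check, which is why the frequency of updates matters.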
Since this approach was not used during TWIC program testing, it is important that it be tested to ensure that it works effectively during implementation. The TWIC proposed rule requires that each facility and vessel have the capability to verify that a worker who has been issued a TWIC card has not subsequently been identified by TSA as a threat and that a TWIC card has not been lost or stolen. The proposed rule allows facilities and vessels the option of directly interfacing with TSA’s national TWIC database or routinely downloading a list of invalid TWIC cards from TSA through a secure Web site. In commenting on the TWIC proposed rule, numerous stakeholders expressed confusion about how to connect to TSA’s national TWIC database and what technology they will need to do so. Stakeholders participating in TWIC program testing also expressed concern that TSA did not test this connection at any of the TWIC testing locations. In addition, because TSA also did not test this connection, some stakeholders were concerned about how vessels at sea without Internet or satellite service would connect with the national TWIC database to get updates regarding workers who pose a threat or whose TWIC cards have been lost or stolen. According to TSA, these issues will be addressed as part of the second rulemaking on TWIC access control technologies. In addition to concerns about whether the access control technology will work effectively in the maritime environment, facility and vessel owners and operators are also concerned about the cost and security of the technology necessary to implement the TWIC program. TSA and the Coast Guard estimate that, on average, a maritime facility will spend $90,000 per facility to upgrade or install access control systems, including biometric card readers.
However, in commenting on the TWIC proposed rule, stakeholders stated that they believe that upgrading and installing access control systems at maritime facilities will cost much more than TSA and the Coast Guard estimate. For example, one port facility has 37 individual terminals, several of which could require 20 or more card readers for entry and exit lanes at a single terminal. Port officials estimated that it could cost up to $300,000 per terminal to install the necessary TWIC card readers. Several stakeholders are also concerned that TSA’s and the Coast Guard’s cost estimates do not take into account the facilities’ costs to maintain equipment and technology, such as card readers, or the cost to hire additional staff needed to perform such maintenance. Facility and vessel owners also stated that the cost of installing TWIC card readers and other equipment necessary to use TWIC may be a hardship for smaller facilities and vessel operators. We requested additional information on how TSA and the Coast Guard developed the cost estimates in the proposed rule; however, DHS could not provide this information. As a result, we were unable to determine whether these estimates were reasonable. Further, industry stakeholders are concerned about the security of the personal information given to TSA to conduct TWIC background checks. For example, stakeholders commenting on the TWIC proposed rule questioned how TSA will ensure the security of workers’ information in light of the fact that other government agencies have mishandled and lost private personal information. In an August 2006 report, the DHS Inspector General highlighted shortcomings in information security for the TWIC program.
According to the report, TSA faces numerous challenges in ensuring that security vulnerabilities—which could compromise the confidentiality, integrity, and availability of sensitive TWIC data—are remedied and that key program policies, regulatory processes, and other work are completed to support the full implementation of the TWIC program. The report also states that TSA agreed with these findings and plans to take steps to correct the security concerns identified. DHS officials acknowledged that there are challenges in ensuring that the TWIC technology works effectively in a maritime environment. Accordingly, DHS decided in August 2006 that it will not require maritime facilities and vessels to implement TWIC card readers and other TWIC access control technologies until the maritime industry has additional time to comment on these aspects of the program. However, TSA is not planning to conduct any additional testing of TWIC program technologies. TSA officials said that the agency is working with NIST to ensure that the biometric identification cards and card readers to be used for the TWIC program meet federal standards for identification and access controls. Specifically, these standards concern the use of biometric identification and access control systems for federal employees and contractors. According to TSA, although these standards are not specifically directed at the TWIC program, the agency believes it is important for the program to comply with them. However, NIST’s review of the TWIC program does not involve any actual testing of the TWIC program technology, such as the use of biometric card readers in a maritime environment. In addition to ensuring that key components of the TWIC program work effectively, another challenge DHS faces is balancing the added security benefits of the TWIC program against the risk that the program could slow the daily flow of maritime commerce.
If implemented effectively, the security benefits of the TWIC program in preventing a terrorist attack could save lives and avoid a costly disruption in maritime commerce. Alternatively, if key components of the TWIC program, such as biometric card readers, do not work effectively, the program could slow the daily flow of maritime commerce. Our discussions with industry stakeholders at facilities that participated in TWIC testing and stakeholder comments on the TWIC proposed rule identified four concerns about the potential impact of TWIC on maritime commerce. According to stakeholders, for the TWIC program to work effectively in the maritime environment without slowing commerce, TWIC cards must be issued within a few days after enrollment, or workers should be allowed interim access to secure areas to perform their job duties while they wait to receive a TWIC card. Several maritime facility officials stated that without quick issuance or interim access, they will have difficulty staffing and performing operations. Some passenger vessel owners and operators stated that waiting 30 to 60 days to receive a TWIC card could hinder operations because workers could not access secure areas to perform their job duties in the interim. According to the TWIC proposed rule, it could take 30 to 60 days for TSA to perform background checks, produce the TWIC cards, and issue these cards to workers. TSA said that it is considering adding a provision to the proposed rule to allow workers temporary access to secure areas while they wait to receive their TWIC cards. Adding such a provision to the rule would address maritime industry concerns. According to TSA officials, the agency hopes to issue TWIC cards sooner than 30 days after a worker enrolls.
According to several industry stakeholders, the use of biometric card readers could disrupt the flow of commerce entering and exiting a port if each person or vehicle is not processed within a few seconds or if the readers experience technical problems. Specifically, if a worker or truck driver has problems with fingerprint verification at a biometric card reader, a long queue could form, delaying the workers and trucks waiting in line behind them to enter secure areas of the port. According to the testing contractor's report, TWIC card readers rejected workers' access to secure areas in 4.8 percent of total access attempts during testing. These rejects were of two types. First, legitimate rejects occurred when workers were denied access to secure areas because they were not authorized to enter. Second, false rejects occurred when workers were denied access to secure areas even though they were authorized to enter. According to TSA officials, the testing contractor did not determine what portion of the total 4.8 percent reject rate consisted of legitimate rejects versus false rejects. In addition, neither the testing contractor's report nor TSA provided any information regarding wait times or delays caused by these rejects at access control points during TWIC testing. The TWIC testing contractor attributed the reject rates during testing to transportation workers having rougher fingerprints than the average population, making it more difficult for card readers to verify their fingerprints. However, neither TSA nor the testing contractor developed solutions to the reject-rate problem that could be used during implementation of the TWIC program. Several port officials we spoke with told us that delaying cargo entering and exiting a port could cost port terminal operators thousands of dollars in the short term and millions in the long term.
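The distinction between legitimate and false rejects can be illustrated with a short sketch. This is not TSA's or the testing contractor's methodology; all figures below are hypothetical except the 4.8 percent total reject rate reported by the testing contractor, and separating that total into its two components would require knowing, for each rejected attempt, whether the worker was in fact authorized.

```python
# Illustrative sketch only: breaking a card reader's total reject rate
# into legitimate rejects (unauthorized workers correctly denied) and
# false rejects (authorized workers incorrectly denied).

def reject_breakdown(attempts):
    """attempts: list of (rejected, authorized) boolean pairs, one per access attempt."""
    total = len(attempts)
    legitimate = sum(1 for rejected, authorized in attempts if rejected and not authorized)
    false_rej = sum(1 for rejected, authorized in attempts if rejected and authorized)
    return {
        "total_reject_rate": (legitimate + false_rej) / total,
        "legitimate_reject_rate": legitimate / total,
        "false_reject_rate": false_rej / total,
    }

# Hypothetical log of 1,000 access attempts: 952 accepted authorized workers,
# 18 unauthorized workers correctly rejected, 30 authorized workers falsely
# rejected -- chosen so the total matches the reported 4.8 percent.
log = ([(False, True)] * 952
       + [(True, False)] * 18
       + [(True, True)] * 30)
rates = reject_breakdown(log)
print(rates["total_reject_rate"])  # 0.048
```

As the sketch suggests, the same 4.8 percent total is consistent with very different mixes of the two reject types, which is why the contractor's failure to record the breakdown leaves the operational impact on authorized workers unknown.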
Stakeholders have suggested that TSA and the Coast Guard address concerns about delays by conducting additional testing of the TWIC program at a limited number of maritime facilities and vessels. Figure 4 shows a line of trucks transporting cargo into a large port facility through an access control point. TSA and Coast Guard officials stated that they recognize stakeholders' concerns regarding the potential impact of access control technology on the flow of commerce and, as a result, plan to obtain additional stakeholder input and comments as part of the second rulemaking to help address these concerns. We requested additional information from TSA on this rulemaking and how it would address concerns regarding the impact on commerce; however, TSA could not provide us with any details. Industry stakeholders have stated that they generally support the TWIC program and its requirement that background checks be conducted on workers with unescorted access to secure areas to help ensure that these individuals do not pose a security threat. However, the stakeholders have also expressed some concern that certain disqualifying offenses may be too stringent and could lead to workers unnecessarily losing their jobs. For example, stakeholders stated that the disqualifying offenses should be terrorism related and should not include the lesser felonies currently in the TWIC proposed rule, such as fraud. In addition, stakeholders expressed concern that, according to the TWIC proposed rule, being found guilty of certain disqualifying criminal offenses, such as racketeering, will disqualify a person from receiving a TWIC card for life, regardless of how long ago the worker committed the crime. The TWIC proposed rule would permit workers who do not pass the background check to appeal or request a waiver to obtain a TWIC card.
Under the TWIC proposed rule, all Maritime Transportation Security Act (MTSA) regulated facilities and vessels would be required to use a TWIC card to control unescorted access to secure areas. Some industry stakeholders, however, disagree with applying uniform standards to all facilities and vessels in the maritime sector, regardless of size. Small facility and vessel officials providing comments on the TWIC proposed rule stated that if they are required to implement these requirements, they will have to conduct unnecessary checks of workers entering secure areas. For example, smaller vessels may have crews of fewer than 10 people, and these officials argued that checking TWIC cards each time a person enters a secure area is not necessary. In addition, stakeholders suggested that the final TWIC rule should include flexibility to exempt smaller facilities and vessels from requirements more applicable to large facilities and vessels. TSA and Coast Guard officials acknowledge the difficulties in applying the TWIC regulation to the entire maritime sector and stated that they will obtain additional comments from stakeholders as part of the rulemaking process regarding the potential impact that the TWIC program could have on the flow of maritime commerce. TSA experienced problems in planning for and overseeing the contract to test the TWIC program. Specifically, poor planning for the contract to test the TWIC program resulted in significant contract changes shortly after TSA awarded the contract, which contributed to a doubling of contract costs. According to TSA officials, delays in program development and pressure to begin TWIC testing caused the agency to award the contract before it had sufficient time to plan for and identify all of the requirements necessary to test the TWIC program in the initial contract. In addition, while the contract required testing certain key components of the TWIC program, TSA did not ensure that these key components were tested by the contractor.
In addition to poor oversight, stakeholders told us that TSA did not effectively communicate and coordinate with them regarding problems that arose during testing at their facilities. TSA officials stated that the agency lacked adequate personnel to provide effective oversight of the contract to test the TWIC program and thus relied on the contractor to provide oversight of its own work and the work of its subcontractors. Our previous reports have identified similar contract planning and oversight problems at TSA that led to increased contract costs. Specifically, in reports issued in 2004 and 2005, we found that both TSA and DHS contract policies did not adequately ensure that contract requirements and deliverables were clearly defined and did not provide adequate oversight of contractor performance. Since TSA will rely heavily on a private contractor to implement the TWIC program, it is important that comprehensive and clearly defined requirements be included in the implementation contract and that contractor performance be closely monitored to help ensure effective and efficient accomplishment of contract purposes and to hold down costs. TSA awarded the contract to test key components of the TWIC program in August 2004 for about $12 million. By the end of the testing phase, the total cost of the TWIC testing contract had increased to over $27 million. According to the testing contractor, the cost increased because TSA added several key requirements that were necessary for testing the TWIC program to the contract after it was awarded. TSA officials confirmed that the addition of these key requirements caused the contract cost to increase. First, according to TSA and the testing contractor, although the initial contract did not stipulate a date to begin program testing, the parties initially agreed that the contractor should begin testing the TWIC program in April 2005. However, TSA officials moved up the start date to November 2004 to try to complete testing sooner.
According to TSA and the testing contractor, the contractor incurred additional costs to meet the accelerated schedule. Second, TSA's initial testing contract was amended to require the contractor to install the infrastructure necessary to test the TWIC program at transportation facilities. TSA added this requirement shortly after it awarded the contract because the agency learned that many testing facilities needed additional infrastructure to support testing the TWIC program and lacked the funding to pay for it. According to TSA and the testing contractor, requiring the contractor to install infrastructure further increased the cost of the contract. Lastly, TSA changed the requirements after it awarded the testing contract to facilitate the enrollment of all port workers who were already enrolled in Florida's uniform port access credential program. This required the testing contractor to use a different approach to enrolling workers in Florida than was used at other TWIC testing locations. TSA did not include this approach in the original contract. According to TSA officials, these modifications were not included in the initial TWIC testing contract because TSA officials were under pressure to begin TWIC testing and did not have sufficient time to ensure that the contract included comprehensive and clearly defined requirements. TSA officials also stated that they knew they could modify the contract after it was awarded. TSA is required to use the Federal Aviation Administration's (FAA) acquisition management system to guide government procurements, including contract planning and oversight, rather than the Federal Acquisition Regulation (FAR), which applies to most other federal agencies. Although TSA is not subject to the requirements of the FAR, the FAR's requirements are designed to help ensure adequate contract planning.
Specifically, the FAR states that government personnel should avoid issuing contract requirements on an urgent basis, as was done during the TWIC testing contract, because doing so can increase contract prices. In addition, best practices for contract planning include defining key contract requirements and making critical decisions before moving forward and committing funds or resources to a major system, or acquisition, such as the TWIC program. We have also previously reported that the development of any new system should follow a knowledge-based approach, including clearly defining system requirements through advance planning, to achieve successful outcomes. Adequate planning also includes making decisions before moving forward and taking action to prevent cost increases, schedule delays, and degradations in performance and quality. Although contract requirements are often amended or added after initial contracts are awarded, the failure to consider and include critical requirements necessary to fully test the TWIC program, and the cost increases that resulted, reflect poor contract planning. According to TSA, the agency is taking steps to address the contract planning problems experienced during TWIC testing. Specifically, TSA officials told us that the TWIC program office has hired additional certified program managers and staff with technical expertise to assist in developing comprehensive and clearly defined requirements for the future contract to implement the TWIC program. However, it is not clear to what extent these actions will ensure that the contract to implement the TWIC program will include comprehensive and clearly defined requirements. The TWIC testing contract required the contractor to test key components of the TWIC program and to detect and resolve weaknesses identified during testing. TSA was responsible for ensuring that the contractor met all contract requirements.
However, TSA did not effectively oversee the contractor’s performance to ensure that key components of the program were tested. For example, the contractor was required to test the capability of the TWIC program to communicate information from a central database, such as TWIC cards that should be revoked if a worker is identified as a threat to security, to local facilities. However, TSA did not ensure that the contractor tested this capability. The independent contractor’s assessment confirmed this component was not tested. The assessment also found that the testing contractor did not fulfill 25 percent of the TWIC operational and performance contract requirements, such as the requirement that lost or stolen TWIC cards be revoked prior to issuing a new card. The independent assessment characterized the failure to meet this requirement during testing as a critical problem, as a terrorist could potentially use the lost or stolen card to access secure areas. In addition, TSA officials did not perform certain tasks that are included in the agency’s guidelines for contract oversight. TSA officials acknowledged that these functions were not performed because they lacked the oversight resources necessary to perform all of these tasks. For example, TSA officials acknowledged that the agency did not follow its contract oversight guidance in the following areas: Performance and cost efficiency reporting. A contracting officer technical representative (COTR) is a federal employee with technical knowledge of a specific program appointed by the contracting officer to ensure that contract requirements are met and to monitor the performance of the contractor. TSA’s COTR guidelines state that one of the primary responsibilities of the COTR is to identify and report opportunities to improve contractor performance or cost efficiency to the contracting officer. 
However, according to TSA officials, no such performance reports were submitted by the COTR during the testing of the TWIC program. Quality assurance planning. The COTR guidelines require that the COTR follow a quality assurance plan for monitoring contractor performance. However, TSA officials stated that, although some limited monitoring and surveillance of the TWIC testing took place, they did not develop a quality assurance plan for the TWIC testing. Evaluating contractor performance. The COTR guidelines also state that the COTR is required to write an evaluation of the contractor's technical performance. However, more than 1 year after the completion of TWIC testing, TSA officials told us that an evaluation of the testing contractor's technical performance would not be completed until the contractor finishes its transitional tasks. According to TSA officials, the lack of TWIC program personnel, as well as an over-reliance on the testing contractor to provide oversight of its own work and that of subcontractors, resulted in inadequate oversight of the TWIC testing contract. The TWIC program office within TSA had seven individuals on staff and one person, the COTR, directly responsible for contract oversight. According to the COTR, more staff were needed to provide adequate oversight of nearly 30 TWIC testing locations and multiple testing subcontractors. The COTR also stated that the TWIC testing contract was just one of several contracts that she was responsible for overseeing. As a result, the COTR visited only one location during TWIC program testing. According to TSA officials, the agency is taking steps to improve its contract oversight practices. Specifically, TSA officials stated that the agency hired additional certified program managers, staff with technical expertise, and a new COTR to provide oversight of the future contract to implement the TWIC program.
In addition, these officials told us that TSA has established a special office dedicated to managing TWIC contracts. However, until TSA develops its plans for monitoring contractor performance, it is not clear to what extent these actions will ensure that contractor performance and costs will be closely monitored. In addition to oversight problems, stakeholders at all 15 TWIC testing locations we visited told us that TSA did not effectively communicate and coordinate with them regarding problems that arose during testing at their facilities. For example, at two maritime facilities we visited, officials told us that communication and coordination with TSA was the most significant problem they encountered during TWIC program testing. These officials stated that all communications from TSA and the testing contractor would stop for months during TWIC testing and that questions to TSA regarding the status of testing and various problems encountered often went unanswered. Another example of poor communication and coordination cited by stakeholders was that TSA never provided any results of the TWIC testing, including the final testing report, to the facilities that participated in the testing. According to TSA, the agency did not provide the final testing report to stakeholders because the report contained sensitive security information. Stakeholders stated that if TSA had had an effective stakeholder feedback mechanism in place, the agency might have learned of testing problems and contractor performance issues sooner. In addition, an independent contractor's assessment of the TWIC testing also identified communication and coordination problems during its site visits to 18 of the 28 TWIC testing locations. The independent contractor recommended that TSA develop procedures to provide more open and timely communication to stakeholders. TSA officials acknowledged that the agency could have better communicated with stakeholders at the TWIC testing locations.
We have previously highlighted the importance of effective communication and coordination between TSA and industry stakeholders to ensure that the agency is able to test and deliver programs that work effectively. As a result, we recommended that TSA better communicate and coordinate with industry stakeholders and create a formal mechanism to ensure this communication and coordination takes place. According to TSA officials, the agency recognizes that stakeholders involved in the TWIC testing should have been provided the results of testing at their facilities and acknowledges that the agency did not establish a means of communicating and coordinating with stakeholders as part of the oversight process. Another issue that arose during TWIC testing concerned TSA's decision to contract with the same company that was conducting the TWIC testing to provide the agency's TWIC program office with management support, technical expertise, and assistance in providing contract oversight. The program management contractor staff worked in TSA's TWIC program office and helped evaluate contract deliverables submitted by its own company, such as the final report summarizing the results and conclusions of the TWIC testing. Although TSA said that the two contracts involved separate teams from the same company, the conflict of interest concerns in this situation were serious enough that TSA required the contractor to address them in an organizational conflict of interest mitigation plan and paid an independent contractor to review the TWIC testing. Further, the independent assessment contractor found problems with the testing contractor's report, such as inaccurate and missing information. The assessment also stated that TSA did not adequately (1) define testing contract requirements, (2) develop a comprehensive implementation plan to secure adequate stakeholder involvement, or (3) monitor TWIC program schedules and costs.
As a result, the independent assessment recommended that the contractor's final report not be relied upon when making decisions about the implementation of TWIC until these problems were corrected. In previous reports, we identified problems with TSA's contracts and contractor oversight practices, including contracts without clearly defined requirements and inadequate oversight that caused initial TSA contract costs to increase. We have also reported on TSA's and DHS's lack of policies that provide clear guidance on defining contract requirements and on contract oversight. For example, these reports noted that clearly defining requirements allows more precise cost estimates for specific contracts as well as better approximations of the timelines for completion. In addition, inadequate oversight increases the risk of cost growth under a labor-hour and cost-reimbursement contract, such as the one used for TWIC testing. The TWIC program was established in response to congressional direction to mitigate the threat of terrorists and other unauthorized persons accessing the nation's ports and other transportation facilities. The maritime industry and other transportation stakeholders are generally supportive of the TWIC program as a means to strengthen access control security and establish a national standard for worker identification credentials. TSA tested the TWIC program at a select number of transportation facilities to identify problems, develop solutions to these problems, and help determine how TWIC can be effectively implemented across the nation. However, the TWIC testing fell short of meeting its goals. Specifically, during testing, TSA issued cards to only about 1,700 workers and tested card readers at 19 facilities, a much smaller population than planned, and TSA did not fully test all key components of the TWIC program, such as biometric card readers.
As a result, TSA faces the challenge of transitioning from this limited testing to successful implementation of the program on a much larger scale covering 750,000 workers at over 3,500 maritime facilities and 10,800 vessels. While TSA has taken some actions to address problems identified during TWIC program testing, the agency and the maritime industry still face key challenges in ensuring that the program will meet its intended goal of providing an effective means of preventing unauthorized access to secure areas. TSA has recently announced that it will use two separate rulemakings to implement the TWIC program. The first will provide the requirements for enrolling workers, conducting background checks, and issuing TWIC cards. A subsequent rule will include requirements for purchasing and installing TWIC access control technologies. Postponing the issuance of requirements for TWIC access control technologies will afford the maritime industry additional time to comment on these requirements. However, it is not clear what, if any, additional testing of the TWIC access control technologies will be conducted as part of this subsequent rulemaking to ensure that they work effectively. Moreover, TSA’s decision to issue two TWIC rules poses an additional challenge in that TSA will need to ensure that the TWIC cards issued to workers enrolled under the first rule will be compatible with the card reader technologies that will be part of the second rule. TSA’s decision to rapidly move forward with implementation of the TWIC program without developing and testing solutions to identified problems could lead to additional problems, increased costs, and further program delays without achieving the program’s intended goals. 
Considering the large investment that the federal government and maritime industry will be required to make to implement the TWIC program, it is particularly important that solutions to the problems and challenges facing the program be developed and tested before implementation to avoid wasting resources. We have found during prior work that, in a rush to implement programs quickly, TSA has not always followed a disciplined development process, including conducting appropriate systems testing, and has not always followed its own systems development guidance when developing programs. As a result, the agency experienced program delays and cost overruns and lacked assurance that the programs would meet their intended goals. TSA's lack of contract planning, oversight, and communication and coordination with stakeholders during testing of the TWIC program, and its past contract planning and oversight problems, raise questions about whether TSA can ensure that the contract to implement the TWIC program will include comprehensive and clearly defined requirements or that the agency will provide adequate oversight of contractor performance. TSA officials stated that the agency has taken steps to address these problems by hiring additional staff with technical and program management expertise to assist in developing contract requirements and providing oversight. While these actions may address problems that occurred during TWIC program testing, whether they will resolve all of the contract planning and oversight problems will not be clear until TSA develops and awards the contract to implement the TWIC program and develops plans for overseeing and evaluating contractor performance and for communicating and coordinating with maritime industry stakeholders.
To help ensure that the TWIC program can be implemented as efficiently and effectively as possible, we recommend that the Secretary of Homeland Security direct the Assistant Secretary of Homeland Security for the Transportation Security Administration, in close coordination with the Commandant of the U.S. Coast Guard, to take the following two actions:

1. Before TWIC is implemented in the maritime sector, develop and test solutions to the problems identified during TWIC program testing, and raised by stakeholders in commenting on the TWIC proposed rule, to ensure that all key components of the TWIC program work effectively. In developing and testing these solutions, TSA should:
- ensure that the TWIC program will be able to efficiently enroll and issue TWIC cards to large numbers of workers;
- ensure that the technology necessary to operate the TWIC program will be readily available to industry stakeholders and will function effectively in the maritime sector, including biometric card readers and the capability to link facility access control systems with the national TWIC database;
- ensure that the TWIC program balances the added security it provides with the potential effect that the program could have on the flow of maritime commerce; and
- closely coordinate with maritime industry stakeholders, particularly those that are currently implementing or using biometric access control systems, to learn from their experiences.

2. Strengthen contract planning and oversight practices before awarding the contract to implement the TWIC program to achieve the following purposes:
- ensure that the contract to implement the TWIC program contains comprehensive and clearly defined requirements;
- ensure that resources are available and measures are in place to provide effective government oversight of the contractor's performance; and
- establish a communication and coordination plan to capture and address the views and concerns of maritime industry stakeholders during implementation.
We provided a draft of this report to DHS for review and comment. On September 22, 2006, we received written comments on the draft report, which are reproduced in full in appendix II. DHS concurred with the findings and recommendations and stated that the report will help improve TSA's management of the TWIC program and strengthen oversight of contractor performance. DHS further stated that the report's recommendations will help facilitate the nationwide implementation of the TWIC card and that, accordingly, the agency has already taken steps to implement them. Regarding our recommendation to develop and test solutions to the problems identified during TWIC program testing, and raised by stakeholders in commenting on the TWIC proposed rule, DHS stated that it is taking a number of actions. Specifically, to ensure that the TWIC program will be able to efficiently enroll and issue TWIC cards to large numbers of workers, TSA is using experience gained during TWIC testing to improve the enrollment and card issuance process, which should address the problems encountered during testing. For example, TSA plans to use an easier and faster form of scanning to capture workers' fingerprints and is taking additional steps to ensure that the process for enrolling workers and issuing TWIC cards is efficient. In addition, according to DHS, TSA is seeking an experienced and capable contractor to enroll workers and operate the information technology systems necessary to support the program. Taking these steps should help TSA address the problems experienced during testing regarding enrollment and card issuance. Nevertheless, TSA will face the challenge of enrolling and issuing TWIC cards to a significantly larger population of workers than was enrolled during testing.
Concerning our recommendation that DHS ensure that the technology necessary to operate the TWIC program will be readily available to industry stakeholders and will function effectively in the maritime sector, including biometric card readers and the capability to link facility access control systems with the national TWIC database, DHS stated that TSA and the Coast Guard will not require maritime facilities and vessels to purchase or install card readers as part of the first rulemaking process. Instead, requirements for biometric card readers and access control technologies will be part of a subsequent rulemaking. According to DHS, the two-phased rulemaking process allows more time for maritime facility and vessel owners and operators to plan for the installation of biometric card readers and access control infrastructure and allows the public additional opportunity to comment on this aspect of the program. In addition, TSA is considering additional field testing of biometric card readers within the funding and schedule parameters of the TWIC program and has already solicited stakeholders' involvement in these tests. Furthermore, according to DHS, the General Services Administration (GSA) and NIST are currently testing products, including biometric card readers, for compliance with FIPS 201 standards. GSA is also developing a list of qualified access control technology products and vendors that will be available for purchase by maritime facilities and vessels to implement the TWIC program in the future. Obtaining additional comments from the public regarding TWIC access control technology requirements, conducting additional testing of TWIC program technologies in the maritime environment, and ensuring that access control technologies are compliant with FIPS 201 standards are important steps for ensuring that the TWIC program works effectively in the maritime environment.
In regard to linking facility access control systems with the national TWIC database, DHS stated that facilities and vessels will be provided secure web access to a list of TWIC cards that are lost, stolen, expired, or belong to individuals found to pose a threat to security. In addressing our recommendation that TSA and the Coast Guard ensure that the TWIC program balances the added security it provides with the potential effect that the program could have on the flow of maritime commerce, DHS stated that TSA and the Coast Guard have reviewed industry comments, are cognizant of stakeholder concerns, and acknowledge the potential impact that the TWIC program could have on the flow of maritime commerce. As a result, TSA and the Coast Guard plan to obtain additional comments on this issue from industry stakeholders in the second rulemaking pertaining to access control technology. Soliciting additional comments from maritime industry stakeholders should help TSA and the Coast Guard balance the added security of the TWIC program with the potential effects on the flow of maritime commerce. Conducting additional testing of TWIC in the maritime environment would further help TSA and the Coast Guard determine how to balance security and the flow of maritime commerce. With regard to our recommendation that DHS closely coordinate with maritime industry stakeholders—particularly those that are currently implementing or using biometric access control systems—to learn from their experiences, DHS stated that the TWIC program is considering field testing of biometric card reader technology to support the second phase of the TWIC program within the funding and schedule parameters of the program. According to DHS, multiple TWIC stakeholders have expressed an interest in participating in this field testing. In addition, TSA and the Coast Guard plan to hold a conference of TWIC qualified contractors and TWIC stakeholders to discuss experiences during TWIC testing.
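The revocation mechanism DHS describes, a downloadable list of canceled cards checked locally at each facility, can be sketched in a few lines. This is an illustrative sketch only: the identifiers, field names, and download mechanism below are hypothetical, not TSA's actual interface, and a real system would also handle escorted access, reader failures, and list freshness.

```python
# Illustrative sketch of a facility-side "canceled card list" check:
# the facility periodically downloads the list of lost, stolen, expired,
# or threat-associated TWIC card identifiers and denies unescorted
# access to any card on that list.

canceled_cards = set()

def refresh_canceled_list(downloaded_ids):
    """Replace the local copy with the latest list from the central TWIC database."""
    canceled_cards.clear()
    canceled_cards.update(downloaded_ids)

def allow_unescorted_access(card_id, biometric_match):
    """A card must both verify the holder's biometric and not be canceled."""
    return biometric_match and card_id not in canceled_cards

# Hypothetical identifiers: two cards reported lost or stolen.
refresh_canceled_list({"TWIC-0001", "TWIC-0042"})
print(allow_unescorted_access("TWIC-0042", True))   # denied: card is canceled
print(allow_unescorted_access("TWIC-0007", True))   # allowed
```

The design point this illustrates is why TSA's testing gap matters: if the link between the central database and local readers is never exercised, a facility's local copy of the canceled list can go stale, and a revoked card would still open the gate.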
DHS also stated that the agency has invited other stakeholders to provide feedback on the TWIC program. Taking action to better coordinate with maritime stakeholders is a step in the right direction and will be essential to effectively implementing the TWIC program. In response to our recommendation that TSA strengthen contract planning and oversight practices before awarding the contract to implement the TWIC program, DHS stated that it is taking several actions to implement this recommendation. Specifically, to ensure that the contract to implement the TWIC program contains comprehensive and clearly defined requirements, TSA has recently selected qualified contractors and released the request for proposal (RFP) to implement the TWIC program. The TWIC RFP includes a detailed requirements document that identifies the performance outcomes expected to be met by the contractor selected to implement the TWIC program. According to DHS, any future changes to the TWIC requirements will be managed under a formal change control process. If properly implemented, these actions should better position TSA to ensure that the TWIC implementation contract contains comprehensive and clearly defined requirements. Regarding our recommendation that TSA ensure that resources are available and measures are in place to provide effective government oversight of the contractor’s performance, DHS stated that the TWIC program has recently established a Program Control Office to help oversee contractor performance and deliverables. In addition, the TWIC program has developed a Quality Assurance and Surveillance Plan and acceptable quality levels of performance in the TWIC RFP to provide a foundation for contract management and oversight. TSA has also hired additional staff to provide better program management and improved oversight of TWIC contracts.
Allocating additional resources and taking steps to ensure that TSA provides effective oversight of the TWIC implementation contract are important steps toward improving contract oversight. If properly implemented, these actions should address the intent of this recommendation. Concerning our recommendation that TSA establish a communication and coordination plan to capture and address the views and concerns of maritime industry stakeholders during implementation, DHS stated that the TWIC program has increased its communication and coordination efforts with stakeholders during the TWIC rulemaking process and plans to continue these activities during implementation of the program. According to DHS, the TWIC program office has developed a communication strategy and plan and the TWIC RFP requires the TWIC implementation contractor to establish a communications plan to provide information to stakeholders and address their concerns during implementation. Developing plans to better communicate and coordinate with stakeholders will be key to the success of the TWIC program. DHS also offered technical comments and clarifications, which we have considered and incorporated where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 21 days after its issue date. At that time, we will provide copies of this report to the Secretary of Homeland Security, Assistant Secretary of the Transportation Security Administration, Commandant of the U.S. Coast Guard, and other interested congressional committees as appropriate. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3404 or at berrickc@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to answer the following questions: (1) What problems, if any, did testing of the TWIC program identify and what challenges, if any, do DHS and industry stakeholders face in implementing the program? and (2) To what extent, if at all, did TSA experience problems in planning for and overseeing the contract to test the TWIC program? To address our first objective, to identify the problems, if any, during testing of the TWIC program and the challenges, if any, DHS and industry stakeholders face in implementing the program, we interviewed TSA and Coast Guard officials regarding the development of the TWIC program, results of TWIC program testing, and challenges identified with implementing the program. To determine the status of the TWIC program, goals, and requirements of TWIC testing and testing results, we obtained and analyzed TWIC program documents, including program management plans, the final report on TWIC testing, an independent contractor’s assessment of TWIC testing, the TWIC proposed rule, and the TWIC regulatory impact analysis. We also reviewed applicable laws, regulations, policies, and procedures to determine the requirements for implementing the TWIC program. We attended public meetings held by TSA and the Coast Guard in Newark, New Jersey; Tampa, Florida; and Long Beach, California; to obtain industry comments on the TWIC proposed rule. We also reviewed stakeholder comments submitted to TSA and the Coast Guard during the rulemaking process. 
We conducted site visits to 15 of the 28 facilities that participated in testing the TWIC program in California, Delaware, Florida, New Jersey, New York, and Pennsylvania to obtain information on stakeholder experiences regarding the TWIC testing, observe the operation of the TWIC program at these facilities, and discuss any challenges associated with implementing TWIC. We visited testing facilities in each of the three testing regions—East Coast, West Coast, and Florida—as well as locations representing the maritime, aviation, and rail modes of transportation. We selected the 15 facilities based on geographic location, mode of transportation, and diversity of facility size and area of business operations. Table 3 lists the 15 facilities we visited that participated in TWIC testing. To address our second objective, to determine to what extent, if at all, TSA experienced problems in planning for and overseeing the contract to test the TWIC program that should be addressed before implementing the program, we interviewed TSA officials regarding the planning for and oversight of the contract to test the TWIC program. We obtained and analyzed TWIC program documents, including the TWIC testing contract and report, an independent contractor’s assessment of TWIC testing, and TSA’s internal contract planning and oversight guidance. We interviewed TWIC contractor officials regarding contract requirements, testing results, and TSA’s planning for and oversight of the testing contract. We also interviewed officials from the independent contractor that assessed the TWIC testing to discuss the results of this assessment. Further, we reviewed the methodology of the independent contractor’s assessment by examining documents, interviewing contractor officials, and performing internal analyses to help ensure data reliability. Our work was also informed by our prior reports and testimony related to TWIC, maritime and transportation security, and TSA and DHS contracting practices.
We conducted our work from August 2005 through September 2006 in accordance with generally accepted government auditing standards. In addition to the contact above, John Hansen, Assistant Director, Chris Currie, Nicholas Larson, Michele Mackin, Geoff Hamilton, Katherine Davis, Chuck Bausell, Michele Fejfar, Richard Hung, and Pille Anvelt made key contributions to this report.

The Transportation Security Administration (TSA) is developing the Transportation Worker Identification Credential (TWIC) to ensure that only workers who do not pose a terrorist threat are allowed to enter secure areas of transportation facilities. TSA completed TWIC program testing in June 2005 and is moving forward with implementing the program in the maritime sector by the end of this year. To evaluate the status of the TWIC program, GAO examined (1) what problems, if any, were identified during TWIC program testing and what key challenges, if any, do the Department of Homeland Security (DHS) and industry stakeholders face in implementing the program; and (2) to what extent, if at all, did TSA experience problems in planning for and overseeing the contract to test the TWIC program. To address these issues, GAO interviewed DHS officials and industry stakeholders, reviewed documentation regarding TWIC testing, and conducted site visits to testing locations. DHS and industry stakeholders face three major challenges in addressing problems identified during TWIC program testing and ensuring that key components of the TWIC program can work effectively in the maritime sector. First, enrolling workers and issuing TWIC cards in a timely manner to a significantly larger population of workers than was done during testing of the TWIC program. Second, ensuring that the TWIC technology, such as biometric card readers, works effectively in the maritime sector.
TSA has obtained limited information on the use of biometric readers in the maritime sector because most facilities that tested the TWIC program did not use these types of readers. Third, balancing the added security components of the TWIC program with the potential impact that the program could have on the flow of maritime commerce. An independent contractor's assessment found deficiencies with TWIC program testing and recommended that additional testing be conducted to determine its effectiveness. TSA has acknowledged that there are challenges to implementing the TWIC program and has taken some actions to address these issues, including allowing more time to consider requirements for installing TWIC access control technologies. However, TSA plans no additional testing of the TWIC program. Rapidly moving forward with implementation of the TWIC program without developing and testing solutions to identified problems to ensure that they work effectively could lead to further problems, increased costs, and program delays without achieving the program's intended goals. TSA experienced problems in planning for and overseeing the contract to test the TWIC program. Specifically, TSA made a number of changes to contract requirements after the contract was awarded, contributing to a doubling of contract costs, and TSA did not ensure that all key components of the program were tested. TSA has acknowledged that problems with contractor oversight occurred because the agency did not have sufficient personnel to monitor contractor performance. TSA has taken some actions to address this problem.
However, until TSA issues the contract for TWIC implementation and develops its plans for monitoring contractor performance, it is not clear to what extent these actions will ensure that the contract to implement the TWIC program will include comprehensive and clearly defined requirements and that contractor performance will be closely monitored to ensure that the program is implemented successfully and costs are controlled.
The Comptroller General convened this expert panel from the U.S. and abroad to advance a national dialogue on strengthening the use of risk management principles to better manage homeland security programs. The forum brought together a diverse array of experts from the public and private sectors, including, from the public sector, a former governor, a former DHS under secretary, a U.S. Coast Guard Admiral, and senior executives from DHS, the U.S. Army, and the National Intelligence Council, as well as state and local officials with homeland security responsibilities. From the private sector, participants included executives from leading multinational corporations such as Swiss Re, Westfield Group, JPMorgan Chase, and Wal-Mart. In addition, several of the world’s leading scholars from major universities, the National Research Council, and the RAND Corporation participated in the forum. (See app. I for a list of participants.) Recognizing that risk management helps policymakers make informed decisions, Congress and the administration have charged federal agencies to use a risk-based approach to prioritize resource investments. Nevertheless, federal agencies often lack comprehensive risk management strategies that are well integrated with program, budget, and investment decisions. To provide a basis for analyzing these strategies, GAO has developed a risk management framework based on industry best practices and other criteria. This framework, shown in figure 1, divides risk management into five major phases: (1) setting strategic goals and objectives, and determining constraints; (2) assessing risks; (3) evaluating alternatives for addressing these risks; (4) selecting the appropriate alternatives; and (5) implementing the alternatives and monitoring the progress made and results achieved. Our work has indicated that while DHS is making progress in applying risk management principles to guide its operational and resource allocation decisions, challenges remain. 
GAO has assessed DHS’s risk management efforts across a number of mission areas—including transportation security, port security, border security, critical infrastructure protection, and immigration enforcement—and found that risk management principles have been considered and applied to varying degrees. For example, in June 2005 we reported that the Coast Guard had developed security plans for seaports, facilities, and vessels based on risk assessments. However, other components had not always utilized such an approach. As we reported in August 2007, while the Transportation Security Administration has developed tools and processes to assess risk within and across transportation modes, it had not fully implemented these efforts to drive resource allocation decisions. Moreover, in February 2007, we reported that DHS faced substantial challenges related to strengthening its efforts to use information on risk to inform strategies and investment decisions, for example, by integrating a consideration of risk into annual budget and program review cycles. We also reported that while integrating a risk management approach into decision-making processes is challenging for any organization, it is particularly difficult for DHS given its diverse set of responsibilities. The department is responsible for dealing with all-hazards homeland security risks—ranging from natural disasters to industrial accidents and terrorist attacks. The history of natural disasters has provided experts with extensive historical data that are used to assess risks. By contrast, data about terrorist attacks are comparatively limited, and risk management is complicated by the asymmetric and adaptive nature of our enemies. 
In addition to helping federal agencies like DHS focus their efforts, risk management principles can help state and local governments and the private sector—which owns over 85 percent of the nation’s critical infrastructure—prioritize their efforts to improve the resiliency of our critical infrastructure and make it easier for the nation to rebound after a catastrophic event. Congress has recognized state and local governments and the private sector as important stakeholders in a national homeland security enterprise and has directed federal agencies to foster better information sharing with these partners. Without effective partnerships, the federal government alone will be unable to meet its responsibilities in protecting and securing the homeland. A shared national approach— among federal, state, and local governments as well as between public and private sectors—is needed to manage homeland security risk. Participants discussed effective risk management practices used in the public and private sector. For example, they discussed the concept of a chief risk officer but did not reach consensus on how to apply the concept to the public sector. The participants also identified examples of public sector organizations that effectively integrated risk management into their operations and compared and contrasted public and private sector risk management practices. Participants said that private sector organizations have established the position of the chief risk officer, an executive responsible for focusing on understanding information about risks and reporting this information to senior executives. One key practice for creating an effective chief risk officer, participants said, was defining reporting relationships within the organization in a way that provides sufficient authority and autonomy for a chief risk officer to report to the highest levels of the organization. 
However, participants did not reach consensus on how to apply the concept of the chief risk officer to the public sector. Some participants stated that the U.S. government needs a single risk manager. One participant suggested that this lack of central leadership has resulted in distributed responsibility for risk management within the administration and Congress and has contributed to a lack of coordination on spending decisions. Another participant stated that the Secretary of DHS fills the chief risk officer role. Participants identified various challenges associated with appointing a chief risk officer within the public sector, including (1) balancing the responsibilities for protection against seizing opportunities for long-range risk reduction, (2) creating a champion but not another silo that is not integrated with other components of the organization, and (3) generating leadership support for the position. Participants identified examples of organizations that effectively integrated risk management into the operations of public sector organizations, including the U.S. Coast Guard, the U.S. Army Corps of Engineers, and the Port Authority of New York and New Jersey. Participants stated that the Coast Guard uses risk management principles to allocate resources, balance competing needs of security with the efficient flow of commerce, and implement risk initiatives with its private sector partners, for example, through Area Maritime Security Committees. According to another participant, the Army Corps developed flood risk management practices that he saw as notable because this information was used to digest and share critical information with the public. One participant noted that the Port Authority of New York and New Jersey developed and implemented a risk assessment program that guided the agency’s management in setting priorities for a 5-year, $500 million security capital investment program.
According to this participant, this methodology has since been applied to over 30 other transportation and port agencies across the country, and the Port Authority has moved from conducting individual risk assessments to implementing an ongoing program of risk management. Participants observed that while, in some instances, the public and private sector should apply risk management principles in similar ways, in other instances, the public and private sectors manage risk differently. One participant stated that, in both the public and private sectors, the risk management process should include the systematic identification and assessment of risks through scientific efforts; efforts to mitigate risks; and risk adaptation to address financial consequences or to allow for effective transfer of risk. However, participants noted that the private and public sectors also manage risk differently. One participant said the private sector manages risk by “pre-funding” and diversifying risk through insurance. In addition, the private sector creates incentives for individuals to lower the risks they face from, for example, a car accident or a natural disaster, by offering to reduce insurance premiums if the policy holder takes certain steps to mitigate these risks. Similarly, the public sector also plays a unique role in managing risk, for instance, regulating land use and establishing building codes; organizing disaster protection, response, and recovery measures; setting regulatory frameworks; and supplementing the insurance industry. In addition, participants noted that the private sector organizations have more flexibility than the public sector to select which risks to manage. For instance, participants stated that the private sector could avoid risks in cases where the costs of insuring against these risks are too high.
Additionally, a participant noted that the private sector tends to naturally consider opportunity analysis—or the process of identifying and exploring situations to better position an organization to realize desirable objectives—as an important part of risk management. In contrast, participants observed, public sector organizations have less flexibility to select which risks to address through protective measures. Like the private sector, the government has to make choices about which risks to protect against—since it cannot protect the nation against all hazards. Unlike the private sector, the government has a wide responsibility for preparing for, responding to, and recovering from all acts of terrorism and natural or manmade disasters and is accountable to the public for the investment decisions it makes. Participants identified three key challenges to strengthening the use of risk management in homeland security—risk communication, political obstacles to making risk-based investments, and a lack of strategic thinking. Participants also recommended ways to address them. Many participants, 35 percent, agreed that improving risk communication posed the single greatest challenge to using risk management principles (see fig. 2 below). Further, 19 percent of participants stated political obstacles to risk-based resource allocation was the single most critical challenge, and the same proportion of participants, 19 percent, said the single most critical challenge was a lack of strategic thinking. The remaining participants identified other key challenges, for example, technical issues such as the difficult but necessary task of analyzing threat, vulnerability, and consequences of a terrorist attack in order to assess and measure risk reduction; and partnership and coordination challenges.
Participants identified several risk communication challenges and recommended actions to address them as follows: Educate the public about risks and engage in public discourse to reach consensus on an acceptable level of risk. Participants said that the public lacks a fact-based understanding of what homeland security risks the nation faces. Participants attributed these problems to media coverage that undermines a fact-based public discussion of risk by sensationalizing acts of terrorism that have dramatic consequences but may be unlikely to occur. In addition, participants stated that even though it is not possible to prevent all disasters and catastrophes, public officials need to engage the public in defining an acceptable level of risk of a terrorist attack or natural disaster in order to make logical, risk-based resource allocation decisions. To communicate with the public about risks in a meaningful way, participants recommended educating the public on how risk is defined, providing fact-based information on what risks we face and the probability they might occur, and explaining how risk informs decision-making. One expert recommended the government communicate about risks through public outreach in ways that calm the public’s fears while raising awareness of risks. Another participant recommended that the country engage in a national public discourse to reach consensus on an acceptable level of risk. Educate policymakers and establish a common lexicon for discussing risk. Participants emphasized the importance of educating elected officials on risk management. Several participants believed that the distinction between risk assessment—involving scientific analysis and modeling—and risk management—involving risk reduction and evaluation—is not widely understood by policymakers. In addition, one expert also noted that the nation should do more to train a cadre of the next generation of risk management professionals.
Given differences in education and levels of understanding about risk management, the participants felt it would be important to develop a common lexicon that can be used for dialogue with both the layman and the subject matter expert. Without a common, shared understanding of risk management terms, communicating about risks is challenging. Some members of our expert panel recommended focusing specifically on educating elected officials and the next generation of policymakers about risk management. One participant pointed out that a new administration and Congress will soon enter office with a new set of policy objectives, and it will be important to highlight the importance of risk management to incoming policymakers and to persuade them to discuss it. Panelists also recommended creating a common vocabulary or lexicon that defines common risk management terms. Develop new risk communication practices to alert the public during emergencies. Participants said that government officials lack an understanding of what information to share and how to communicate with the public during an emergency. Participants said that risk analysis, including predictive modeling, tends to neglect a consideration of how the public’s expectations and emotions can impact the effectiveness of response efforts and affect the likelihood the public will respond as predicted or directed by government officials during an emergency. According to one participant, Hurricane Katrina demonstrated that the efficacy of emergency response efforts depends on how the public behaves, as some people chose to shelter in place while others followed directions to evacuate. Participants recommended that governments consider what information should be communicated to the public during a crisis and how best to communicate that information. 
For instance, one participant suggested that experts look at existing risk communication systems, such as the National Weather Service, that could be used as models for a homeland security risk communication system. The participant noted that the service provides both national and local weather information, looks at overall risks, and effectively provides actionable information to be used by both the public and private sectors. Participants criticized the current color-coded DHS Homeland Security Advisory System as being too general, suggesting that the public does not understand what is meant by the recommended actions such as being vigilant. Participants said political obstacles pose challenges to allocating homeland security resources based on risk. Participants identified the reluctance of politicians and others to make risk-based funding decisions. Participants noted that elected officials’ investment priorities are informed by the public’s beliefs about which risks should be given the highest priority, beliefs that are often based on incomplete information. As a result, participants stated that there is less incentive for officials to invest in long-term opportunities to reduce risk, such as investing in transportation infrastructure, when the public does not view these investments as addressing a perceived risk. To better allocate resources based on risk, participants recommended that public officials and organizations consider investing in protective measures that yield long- term benefits. Participants agreed that a lack of strategic thinking was a key challenge to incorporating risk-based principles in homeland security investments. In particular, participants noted that challenges existed in these areas: A national strategic planning process is needed to guide federal investments in homeland security. Participants said there is a lack of a national strategic planning process to guide federal investments in homeland security. 
Balancing the security concerns of various federal government agencies that have diverse missions in areas other than security, such as public safety and maintaining the flow of commerce, poses a significant strategic challenge, some participants stated. One participant stated that the President had developed a strategy to guide, organize, and unify the nation’s homeland security efforts in the October 2007 National Strategy for Homeland Security. However, several other participants said that a better process is needed for strategic planning. For example, to think strategically about risk they recommended that stakeholders discuss trade-offs, such as whether more resources should be spent to protect against risks from a conventional bomb, nuclear attack, biological attack, or a hurricane. Another participant noted that the purpose of risk assessment is to help answer these strategic questions. One participant also recommended that the short-term goal for a national strategic planning process should be identifying the big problems that strategic planning needs to address, such as measuring the direct and indirect costs of reducing risk. Fragmented approaches to managing security risk within and across the federal government could be addressed by developing governmentwide risk management guidance. Some participants agreed that approaches to risk management were fragmented within and across the federal government. For example, one participant said that each of the Department of Defense combatant commands has its own perspective on risk. According to this participant, this lack of consistency requires recalculations and adjustments as each command operates without coordinating efforts or approaches. Three participants also said that there is a lack of governmentwide guidance on using risk management principles to manage programs. To address this problem, participants said governmentwide guidance should be developed. 
Two participants suggested that OMB or another government agency should play a lead role in outlining goals and general principles of risk assessment and getting agencies to implement these principles. Participants agreed that risk management should be viewed as the responsibility of both the public and private sector. They identified challenges related to public-private collaboration: Private sector should be more involved in public risk assessments. Participants said that public-private partnerships are important and should be strengthened. One reason partnerships may not be as strong as they could be is that the private sector may not be appropriately involved in the public sector’s risk assessments or risk-based decision-making. Participants agreed that the private sector should be involved in developing risk assessments because when these stakeholders are not sufficiently involved they lose faith in government announcements and requirements related to new risks and threats. To this end, DHS has established coordinating councils for critical infrastructure protection that allow for the involvement of representatives from all levels of government and the private sector, so that collaboration and information sharing can occur to assess events accurately, formulate risk assessments, and determine appropriate protective measures. Increase the involvement of state and local practitioners and experts. Participants observed that intergovernmental partnerships—between federal, state, local, and tribal governments—are important for effective homeland security risk management. They recommended that more state and local practitioners and experts become involved in applying risk management principles to homeland security. This concludes my prepared statement. I would be pleased to answer any questions you and the Subcommittee Members may have. 
Forum participants included the following:

Deputy Secretary for Public Safety, State of New York
Director, Group Communications, and Head of Issue Management & Messages, Swiss Re
Howard Heinz University Professor, Department of Social and Decision Sciences and Department of Engineering and Public Policy, Carnegie Mellon University
President, Highland Risk & Crisis Solutions, Ltd.
Kenneth L. Knight, Jr., National Intelligence Officer for Warning, National Intelligence Council, Office of the Director of National Intelligence
Cecilia Yen Koo Professor, Department of Decision Sciences and Public Policy, Wharton School, University of Pennsylvania, and Co-Director, Wharton Risk Management and Decision Processes Center
Group Managing Director, Westfield Group
Director of the Center for Economics, U.S. Government Accountability Office
Chief Economist, U.S. Government Accountability Office
Director, Emergency Management and Security, Port Authority of New York and New Jersey
Senior Security Consultant, Talisman, LLC
Director, International Center for Enterprise Preparedness, New York University
Managing Director, Head of Corporate Operational Risk, JPMorgan Chase
Senior Vice President for Global Security, Aviation and Travel, Wal-Mart Stores, Inc.
William F. Vedra, Jr., Executive Director, Ohio Homeland Security
Professor, Industrial and Systems Engineering, Viterbi School of Engineering, University of Southern California; Professor of Public Policy and Management, School of Policy Planning; and Director, Center for Risk and Economic Analysis of Terrorism Events, University of Southern California
Director, Board on Mathematical Sciences and Their Applications, National Research Council

In addition to the contacts named above, Anne Laffoon, Assistant Director; Tony Cheesebrough; Jason Barnosky; David Messman; and Maylin Jue managed all aspects of the work, and Susanna Kuebler and Adam Vogt made important contributions to producing this report.
Aviation Security: Transportation Security Administration Has Strengthened Planning to Guide Investments in Key Aviation Security Programs, but More Work Remains. GAO-08-456T. Washington, D.C.: February 28, 2008.
Transportation Security: Efforts to Strengthen Aviation and Surface Transportation Security are Under Way, but Challenges Remain. GAO-08-140T. Washington, D.C.: October 16, 2007.
Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: August 17, 2007.
Homeland Security: Applying Risk Management Principles to Guide Federal Investments. GAO-07-386T. Washington, D.C.: February 7, 2007.
Homeland Security Grants: Observations on Process DHS Used to Allocate Funds to Selected Urban Areas. GAO-07-381R. Washington, D.C.: February 7, 2007.
Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts. GAO-07-225T. Washington, D.C.: January 18, 2007.
Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors’ Characteristics. GAO-07-39. Washington, D.C.: October 16, 2006.
Interagency Contracting: Improved Guidance, Planning, and Oversight Would Enable the Department of Homeland Security to Address Risks. GAO-06-996. Washington, D.C.: September 27, 2006.
Border Security: Stronger Actions Needed to Assess and Mitigate Risks of the Visa Waiver Program. GAO-06-1090T. Washington, D.C.: September 7, 2006.
Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006.
Aviation Security: TSA Oversight of Checked Baggage Screening Procedures Could Be Strengthened. GAO-06-869. Washington, D.C.: July 28, 2006.
Border Security: Stronger Actions Needed to Assess and Mitigate Risks of the Visa Waiver Program. GAO-06-854.
Washington, D.C.: July 28, 2006.
Passenger Rail Security: Evaluating Foreign Security Practices and Risk Can Help Guide Security Efforts. GAO-06-557T. Washington, D.C.: March 29, 2006.
Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006.
Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005.
Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts. GAO-05-851. Washington, D.C.: September 9, 2005.
Strategic Budgeting: Risk Management Principles Can Help DHS Allocate Resources to Highest Priorities. GAO-05-824T. Washington, D.C.: June 29, 2005.
Protection of Chemical and Water Infrastructure: Federal Requirements, Actions of Selected Facilities, and Remaining Challenges. GAO-05-327. Washington, D.C.: March 28, 2005.
Transportation Security: Systematic Planning Needed to Optimize Resources. GAO-05-357T. Washington, D.C.: February 15, 2005.
Homeland Security: Agency Plans, Implementation, and Challenges Regarding the National Strategy for Homeland Security. GAO-05-33. Washington, D.C.: January 14, 2005.
Homeland Security: Observations on the National Strategies Related to Terrorism. GAO-04-1075T. Washington, D.C.: September 22, 2004.
9/11 Commission Report: Reorganization, Transformation, and Information Sharing. GAO-04-1033T. Washington, D.C.: August 3, 2004.
Critical Infrastructure Protection: Improving Information Sharing with Infrastructure Sectors. GAO-04-780. Washington, D.C.: July 9, 2004.
Homeland Security: Communication Protocols and Risk Communication Principles Can Assist in Refining the Advisory System. GAO-04-682. Washington, D.C.: June 25, 2004.
Critical Infrastructure Protection: Establishing Effective Information Sharing with Infrastructure Sectors. GAO-04-699T.
Washington, D.C.: April 21, 2004.
Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspections. GAO-04-557T. Washington, D.C.: March 31, 2004.
Rail Security: Some Actions Taken to Enhance Passenger and Freight Rail Security, but Significant Challenges Remain. GAO-04-598T. Washington, D.C.: March 23, 2004.
Homeland Security: Risk Communication Principles May Assist in Refinement of the Homeland Security Advisory System. GAO-04-538T. Washington, D.C.: March 16, 2004.
Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism. GAO-04-408T. Washington, D.C.: February 3, 2004.
Catastrophe Insurance Risks: Status of Efforts to Securitize Natural Catastrophe and Terrorism Risk. GAO-03-1033. Washington, D.C.: September 24, 2003.
Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues. GAO-03-1165T. Washington, D.C.: September 17, 2003.
Homeland Security: Efforts to Improve Information Sharing Need to Be Strengthened. GAO-03-760. Washington, D.C.: August 27, 2003.
Homeland Security: Information Sharing Responsibilities, Challenges, and Key Management Issues. GAO-03-715T. Washington, D.C.: May 8, 2003.
Transportation Security Research: Coordination Needed in Selecting and Implementing Infrastructure Vulnerability Assessments. GAO-03-502. Washington, D.C.: May 1, 2003.
Information Technology: Terrorist Watch Lists Should Be Consolidated to Promote Better Integration and Sharing. GAO-03-322. Washington, D.C.: April 15, 2003.
Homeland Security: Voluntary Initiatives Are Under Way at Chemical Facilities, but the Extent of Security Preparedness Is Unknown. GAO-03-439. Washington, D.C.: March 14, 2003.
Critical Infrastructure Protection: Challenges for Selected Agencies and Industry Sectors. GAO-03-233. Washington, D.C.: February 28, 2003.
Major Management Challenges and Program Risks: Department of Homeland Security. GAO-03-102.
Washington, D.C.: January 30, 2003.
Critical Infrastructure Protection: Efforts of the Financial Services Sector to Address Cyber Threats. GAO-03-173. Washington, D.C.: January 30, 2003.
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001.
Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001.
Homeland Security: A Framework for Addressing the Nation’s Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001.
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999.
Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

From the terrorist attacks of September 11, 2001, to Hurricane Katrina, homeland security risks vary widely. The nation can neither achieve total security nor afford to protect everything against all risks. Managing these risks is especially difficult in today's environment of globalization, increasing security interdependence, and growing fiscal challenges for the federal government. Broadly defined, risk management is a process that helps policymakers assess risk, strategically allocate finite resources, and take actions under conditions of uncertainty. GAO convened a forum of 25 national and international experts on October 25, 2007, to advance a national dialogue on applying risk management to homeland security.
Participants included federal, state, and local officials and risk management experts from the private sector and academia. Forum participants identified (1) what they considered to be effective risk management practices used by organizations from the private and public sectors and (2) key challenges to applying risk management to homeland security and actions that could be taken to address them. Comments from the proceedings do not necessarily represent the views of all participants, the organizations of the participants, or GAO. Participants reviewed a draft of this report and their comments were incorporated, as appropriate. Forum participants identified what they considered to be effective public and private sector risk management practices. For example, participants discussed the private sector use of a chief risk officer, though they did not reach consensus on how to apply the concept of the chief risk officer to the public sector. One key practice for creating an effective chief risk officer, participants said, was defining reporting relationships within the organization in a way that provides sufficient authority and autonomy for a chief risk officer to report to the highest levels of the organization. Participants stated that the U.S. government lacks, and needs, a single risk manager. One participant suggested that this lack of central leadership has resulted in distributed responsibility for risk management within the administration and Congress and has contributed to a lack of coordination on spending decisions. Participants also discussed examples of public sector organizations that have effectively integrated risk management practices into their operations, such as the U.S. Coast Guard, and compared and contrasted public and private sector risk management practices.
According to the participants at our forum, three key challenges exist to applying risk management to homeland security: risk communication, political obstacles to risk-based resource allocation, and a lack of strategic thinking about managing homeland security risks. Many participants agreed that improving risk communication posed the single greatest challenge to using risk management principles. To address this challenge, participants recommended educating the public and policymakers about the risks we face and the value of using risk management to establish priorities and allocate resources; engaging in a national discussion to reach a public consensus on an acceptable level of risk; and developing new communication practices and systems to alert the public during an emergency. In addition, to address strategic thinking challenges, participants recommended the government develop a national strategic planning process for homeland security and governmentwide risk management guidance. To improve public-private sector coordination, forum participants recommended that the private sector be more involved in the public sector's efforts to assess risks and that more state and local practitioners and experts be involved through intergovernmental partnerships.
The FBI, DHS, and DOD are responsible for managing and maintaining the following major biometric systems:

(1) FBI’s Integrated Automated Fingerprint Identification System (IAFIS). Established in July 1999 and managed by the FBI’s Criminal Justice Information Services division, IAFIS is a national fingerprint and criminal history system that stores, searches, matches, and shares fingerprints. The FBI is currently in the process of transitioning from IAFIS to the Next Generation Identification system, which will include an expansion to biometrics storage and search capabilities for fingerprints; scars, marks, and tattoos; faces; irises; and palms. The Next Generation Identification system is a multiyear effort with six increments that is expected to be completed by 2014.

(2) DHS’s Automated Biometric Identification System (IDENT). Established in 1994 and managed by the United States Visitor and Immigrant Status Indicator Technology program, which falls under the purview of the National Protection and Programs Directorate within DHS, IDENT is used by DHS and State for many purposes, including border security, information on persons undergoing naturalization and visa processes, and the agencies’ counterterrorism efforts. IDENT stores, searches, matches, and shares fingerprints. According to DHS officials, the department is beginning to look at the collection of irises and has a goal to begin collecting iris images and facial biometrics by 2013.

(3) DOD’s Automated Biometric Identification System (ABIS). Established in July 2004 and managed by the Biometrics Identity Management Agency (BIMA, formerly the Biometric Task Force)—which falls under the purview of the Army—ABIS information is used by DOD to identify and verify non-U.S. persons as friend, foe, or neutral, and to help determine if the individual poses a threat or potential threat to national security.
BIMA updated ABIS to the Next Generation ABIS in January 2009, which stores, searches, matches, and shares face, fingerprint, iris, palm, and latent fingerprint biometrics. Several DOD organizations are involved in the management of the biometrics program and in developing guidance on the collection and sharing of biometric information. In July 2000, Congress designated the Secretary of the Army as the Executive Agent for Defense Biometrics. Subsequently, the Secretary of the Army designated the Director of the Army’s Biometrics Task Force as the Executive Manager for Biometrics, making this office responsible for developing guidance for collecting and processing biometric information. In March 2010, the Biometric Task Force’s name was changed and it became the Biometrics Identity Management Agency. Additionally, DOD appointed the Director, Defense Research and Engineering, as the Principal Staff Assistant for Biometrics. In February 2008, DOD issued a biometrics directive identifying organizational roles and authorities for managing biometrics. Within DOD, biometric capabilities were initially used in the late 1990s as a tool to protect U.S. forces in Korea, and in Kosovo as an intelligence tool. Since the September 11, 2001, terrorist attacks, DOD’s mission has included military operations in both Iraq and Afghanistan—where a biometric system was used to protect U.S. soldiers and allies from an unidentified enemy by screening and vetting non-U.S. persons. DOD collects biometric information from persons seeking access to U.S. installations in Iraq and Afghanistan, detainees, and persons encountered by U.S. forces during military operations. (See fig. 1 below.) In January 2007, DOD issued a memorandum stating that DOD would immediately adopt the practice of sharing unclassified DOD biometric information collected from non-U.S. persons with other U.S. departments and agencies having a counterterrorism mission.
DOD collects biometric information to meet a variety of mission needs, such as counterintelligence screening and detainee management and interrogation, and to support business operations in a combat environment, such as base access control to verify Common Access Card credentials. However, DOD’s reasons to collect biometric data continuously change as DOD’s role evolves wherever military operations are under way, whether in a desert environment fighting insurgents or on the high seas fighting piracy. DOD’s directive that describes the purpose, scope, policy, and responsibilities for the biometrics program uses terms defined by the National Science and Technology Council Subcommittee on Biometrics Glossary. Included in the list of terms and their respective definitions are the following:

Collect—capture biometric and related contextual data from an individual, with or without his or her knowledge. Create and transmit a standardized, high-quality biometric file consisting of a biometric sample and contextual data to a data source for matching.

Match—for the purpose of DOD’s Directive on biometrics, the process of accurately identifying or verifying the identity of an individual by comparing a standardized biometric file to an existing source of standardized biometric data. Matching consists of either one-to-one (verification) or one-to-many (identification) searches.

Share—exchange standardized biometric files and match results among approved DOD, interagency, and multinational partners in accordance with applicable law and policy.

Store—the process of enrolling, maintaining, and updating biometric files to make available standardized, current biometric information on individuals when and where required.

To achieve interoperability, policies and implementation guidance on the collection, storage, and sharing of information should be created to ensure compatible implementation of systems based on standards.
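The two search modes in the Match definition above, one-to-one verification and one-to-many identification, can be illustrated with a minimal sketch. The template format, similarity measure, threshold, and subject identifiers below are hypothetical simplifications for illustration only; they are not part of any DOD or FBI system.

```python
# Minimal sketch of one-to-one (verification) vs. one-to-many
# (identification) biometric matching. Templates are toy feature
# vectors; real matchers are far more sophisticated.

def similarity(a, b):
    """Toy similarity: fraction of positions where two templates agree."""
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

def verify(probe, claimed_template, threshold=0.8):
    """One-to-one: does the probe match the identity the subject claims?"""
    return similarity(probe, claimed_template) >= threshold

def identify(probe, gallery, threshold=0.8):
    """One-to-many: search the whole gallery; return the best match
    above the threshold, or None if no enrolled record matches."""
    best_id, best_score = None, 0.0
    for subject_id, template in gallery.items():
        score = similarity(probe, template)
        if score >= threshold and score > best_score:
            best_id, best_score = subject_id, score
    return best_id

# Hypothetical enrolled gallery and a newly captured probe.
gallery = {
    "subject-001": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    "subject-002": [0, 1, 0, 0, 1, 0, 1, 0, 0, 1],
}
probe = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0]

print(verify(probe, gallery["subject-001"]))  # True: claimed identity confirmed
print(identify(probe, gallery))               # subject-001: best gallery match
```

The sketch also shows why standardized files matter: both modes assume the probe and the enrolled templates are encoded the same way, which is precisely what transmission standards such as DOD EBTS are meant to guarantee across systems.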
Standards are developed by Standards Development Organizations, including the National Institute of Standards and Technology, to provide rules and guidelines to promote interoperability among various systems, including biometric systems. Standards Development Organizations also provide rules and guidelines for testing biometrics and for testing conformance to biometric standards. Standards are generally developed through a consensus process that includes input from stakeholders in various sectors, such as government, academia, and industry. Federal agencies, such as DOD, adopt standards developed by Standards Development Organizations. For example, DOD used standards recommended by the American National Standards Institute and the National Institute of Standards and Technology as a basis to develop DOD’s Electronic Biometric Transmission Specification (DOD EBTS). DOD has adopted standards for collection of biometric information to facilitate sharing of that information with other federal agencies. DOD recognized the importance of such interoperability and directed adherence to internationally accepted biometric standards. Moreover, DOD has applied the standards to some of its collection devices. However, DOD has not applied the adopted standards to the Army’s primary handheld collection device used in Iraq and Afghanistan. As a result, DOD is unable to automatically transmit information collected by this device, which accounts for about 13 percent of the approximately 4.8 million biometric records maintained by DOD, to federal agencies such as the FBI. Further, DOD has not taken certain actions that would help ensure its collection devices meet new and updated standards. First, DOD does not have an effective process, procedure, or timeline for implementing updated standards. Second, DOD does not routinely test devices at sufficient levels of detail for conformance to these standards.
Third, DOD has not fully defined roles and responsibilities that specify the accountability needed to ensure its collection devices meet new or updated standards. DOD adopted a standard—DOD EBTS—to facilitate the collection of biometrics and to enhance the interoperability of biometrics collected by DOD with other federal agencies’ biometric systems. The first version, DOD EBTS version 1.0, was published on August 19, 2005, and the standard has since been updated three times, with the most recent update, DOD EBTS version 2.0, adopted for use by DOD in April 2010. (See fig. 2 for a timeline of DOD’s biometric standard.) These DOD standards are based on recommended standards from the American National Standards Institute and the National Institute of Standards and Technology; these standards are also used by the FBI as the basis for its mission-specific requirements. The conformance of biometric collection devices to standards promotes their interoperability with biometric systems within DOD and with other federal agencies, though it does not guarantee interoperability. Prior to adopting DOD EBTS in 2005, DOD had recognized the importance of interoperability and directed adherence to internationally accepted biometric standards. According to a February 2004 memorandum from DOD’s Chief Information Officer on DOD compliance with international standards, standardization and interoperability are important for success in fighting terrorism. Success, the memorandum continued, could be enhanced with systems that communicate and share fingerprint data on “red force” personnel, such as detainees, enemy combatants, and foreign persons of interest as national security threats, with other U.S. government systems. Further, DOD’s Chief Information Officer directed that all new and upgraded DOD biometric collection devices used to collect certain data must conform to the FBI’s mission-specific requirements and that the devices must be certified as interoperable with the FBI’s biometric systems.
In November 2005, the Army’s Chief Information Officer reiterated the importance of standardization and interoperability of DOD’s biometric systems in fighting terrorism and stated that conformance to standards strengthens DOD’s abilities to fulfill its missions. The memorandum further stated that all new or updated DOD collection devices must meet the DOD EBTS standard and be interoperable with DOD’s biometric system, ABIS. Consistent with the Army’s position on interoperability, the DOD Directive on Biometrics, issued in February 2008, stated that the collection and transmission of biometric information shall be controlled through the use of DOD adopted standards to enhance the consistency and interoperability of biometric information. A 2009 Joint Interoperability report, which reviewed selected biometric systems that interfaced with DOD’s ABIS and analyzed data collected by these systems for conformance issues that have an impact on interoperability, stated that several DOD biometric collection devices meet DOD adopted standards. For example, the Guardian, Fusion, and Secure Electronic Enrollment Kit for Identification all meet the EBTS standard that was current at the time of the report, specifically EBTS version 1.2. DOD has not taken certain actions necessary to help ensure that its collection devices adhere to new and updated standards, including not having an effective process, procedure, or timeline for implementing updated standards, not routinely testing collection devices at sufficient levels of detail for conformance to these standards, and not fully defining roles and responsibilities to ensure accountability. For example, a collection device that the Army acquired to meet an urgent need in 2005, and that is still in use in Iraq and Afghanistan, did not meet the standard current at the time of the 2009 Joint Interoperability report and, according to DOD officials, still does not adhere to DOD EBTS version 1.2 or the more current version 2.0.
As of late 2009, this collection device, known as the Handheld Interagency Identity Detection Equipment, or HIIDE, continued to be purchased by DOD. According to DOD officials, DOD continues to use the HIIDE because it meets DOD’s mission needs, and because it was developed to meet an urgent mission need for Central Command to collect and authenticate the identity of individuals, it does not have to adhere to DOD’s information technology standards. Those standards are included in the DOD Information Technology Standards Registry, the central repository for DOD-approved information technology standards, and are mandated for programs of record for biometric technologies, which are considered permanent capabilities. Therefore, devices acquired to meet urgent needs do not have to adhere to DOD adopted standards. According to information provided by BIMA about the composition of ABIS as of September 2010, the HIIDE device is responsible for the collection of 13 percent of the biometric records in ABIS, the largest number of submissions by a handheld device. Because the HIIDE device does not conform to standards, DOD cannot seamlessly share biometric information from this device with other federal agencies. For example, of the approximately 4.8 million biometric records maintained by DOD, approximately 630,000 HIIDE biometric records cannot be searched automatically against the approximately 94 million biometric records in the FBI’s system. Further, if the biometric information collected by the HIIDE is not stored in the FBI’s IAFIS, DHS loses the benefit of searching its 119 million biometric records against HIIDE information as well. Both DOD and DHS access the FBI’s IAFIS in order to share information. Therefore, if the FBI does not have access to DOD information, such as HIIDE biometric records, then neither does DHS when it searches against IAFIS.
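The approximate record counts cited above are internally consistent; a quick arithmetic check using the report's rounded figures reproduces the 13 percent share attributed to the HIIDE:

```python
# Sanity-check the approximate record counts cited in the report.
dod_total = 4_800_000    # biometric records maintained by DOD (approx.)
hiide_records = 630_000  # HIIDE-collected records in ABIS (approx.)

hiide_share = hiide_records / dod_total
print(f"HIIDE share of DOD records: {hiide_share:.0%}")
# prints: HIIDE share of DOD records: 13%
```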
However, according to DHS and DOD officials, DOD manually provides biometric records of individuals on its watch list, which can include HIIDE-collected biometric information. These records are then manually added to DHS’s IDENT. Without biometric collection devices that conform to DOD adopted standards, DOD limits its and its federal partners’ ability to identify potential criminals or terrorists who have biometric records in other federal agencies’ biometric systems. In April 2010, DOD adopted DOD EBTS version 2.0, which is included in the DOD Information Technology Standards Registry, the central repository for DOD-approved information technology standards, as the biometric standard for use in all collection devices. According to BIMA, additional guidance was not necessary for the current update to the DOD EBTS 2.0 standard because biometric stakeholders knew about the update, since DOD EBTS version 2.0 was an emerging standard. BIMA also stated that emerging standards are provided to help military services plan for updates to DOD adopted standards, and that an emerging standard should become a DOD adopted standard within 3 years. However, without timely guidance that documents and communicates a process, procedure, or timeline for updating biometric capabilities from one version of a standard to another, the military services may continue to lack accurate information that is necessary to implement new or updated standards during the acquisition process. Specifically, military services may not have information on when an emerging DOD standard will become mandated within the 3-year time frame, but must ensure that collection devices being developed conform to the DOD mandated standard, not the emerging standard.
The Army established the Biometrics Standards Working Group based on the 2008 biometric directive, which stated that, among other activities, the group should provide guidance for consistent standards implementation; however, the 2009 DOD joint interoperability assessment found that DOD lacked a process beyond the Working Group to address the impact of changes to the DOD adopted standards. Further, absent such a process, procedure, or timeline to manage the update to new standards, the military services may also face increased costs in developing biometric collection devices when time frames for the update of standards are not documented or managed. Service officials said that the Navy’s collection device would have to be updated to the new version of EBTS at the next major development milestone, incurring an additional cost for the development of the collection device. Navy officials estimate that the service will incur $3.4 million in additional costs because of the delay. DOD tests collection devices for conformance to adopted standards, but testing efforts have not always been at a sufficient level of detail or integrated to facilitate interoperability across DOD and federal agencies. The National Science and Technology Council’s policy for enabling the development, adoption, and use of biometric standards acknowledges that the capability to share biometric information will be dependent on rigorous conformance testing. BIMA conducts standards conformance testing to evaluate the conformance of collection devices to DOD adopted standards, but the 2009 joint interoperability assessment found that conformance testing efforts have not been integrated and formalized into the biometric enterprise’s processes and procedures that are necessary to facilitate interoperability across DOD and with interagency partners. In addition, a BIMA official told us that the conformance testing done at BIMA is not sufficiently detailed to ensure that collection devices conform to DOD adopted standards.
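The kind of field-level conformance testing discussed above can be sketched as a simple validator that checks a transmission record against the required fields of a versioned standard. The field names, version labels, and required-field sets below are hypothetical placeholders for illustration; they are not the actual DOD EBTS specification.

```python
# Hypothetical sketch of field-level conformance testing of a biometric
# transmission record against a versioned standard. The required-field
# sets are illustrative placeholders, NOT the real DOD EBTS contents.

REQUIRED_FIELDS = {
    "EBTS-1.2": {"record_type", "capture_date", "fingerprint_image"},
    "EBTS-2.0": {"record_type", "capture_date", "fingerprint_image",
                 "iris_image", "device_id"},
}

def check_conformance(record, version):
    """Return the sorted list of required fields the record is missing."""
    required = REQUIRED_FIELDS[version]
    return sorted(required - record.keys())

# A record from an older device may conform to one version of the
# standard yet fall out of conformance when the standard is updated,
# which is why routine retesting against the current version matters.
legacy_record = {"record_type": "14", "capture_date": "2009-10-01",
                 "fingerprint_image": b"..."}
print(check_conformance(legacy_record, "EBTS-1.2"))  # []
print(check_conformance(legacy_record, "EBTS-2.0"))  # ['device_id', 'iris_image']
```

In practice, conformance testing at a "sufficient level of detail" would go well beyond presence checks to validate field encodings, value ranges, and image formats; this sketch only shows why testing must be repeated against each adopted standard version.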
Since certain DOD collection devices were acquired to meet urgent needs, DOD may have relied on vendors to provide devices that purport to, but may not, conform to DOD adopted standards. Without an integrated and formalized process for sufficiently detailed conformance testing, DOD has no mechanism to hold vendors accountable for ensuring that biometric collection devices meet DOD adopted standards. DOD issued a biometrics program directive in February 2008, and a companion draft instruction could provide some guidelines, including on the testing of biometric collection devices for conformance to standards and interoperability. Based on our review of the draft instruction, though, it is unclear whether it will provide guidance on a process that holds DOD biometric stakeholders accountable for collection devices that conform to standards. Without a process that ensures collection devices are tested at a sufficiently detailed level for conformance to DOD adopted standards and that holds DOD biometric stakeholders accountable for device conformance, DOD limits its ability to collect biometric information that is interoperable with other federal agencies’ systems. DOD has a biometrics program directive but could more fully define the roles and responsibilities of DOD entities with the intention of instilling accountability for ensuring its collection devices meet new or updated standards. The Office of Management and Budget guidance on establishing internal controls emphasizes that agencies should ensure accountability for results, and our work on internal controls states that defined roles and responsibilities are needed to achieve an organization’s mission. DOD’s February 2008 biometrics program directive assigned some roles and responsibilities to DOD biometric stakeholders, such as designating the Office of the Director of Defense Research and Engineering as the Principal Staff Assistant responsible for oversight of DOD biometrics programs and policies.
However, based on our review of the directive and according to agency officials, DOD has not fully clarified the differing responsibilities that each DOD biometric stakeholder has in ensuring that collection devices conform to adopted standards. In addition, according to DOD officials, DOD has not clarified roles and responsibilities for DOD biometrics, and this has caused confusion related to overlapping responsibilities and accountability within Army entities, such as whether BIMA can send requirements for acquiring biometrics capabilities directly to the program manager or whether such requirements should be provided by Army officers and staff responsible for operational requirements. The Office of Management and Budget’s guidance on establishing internal controls emphasizes that agencies should design management structures for programs to help ensure accountability for results. Moreover, GAO’s Standards for Internal Control in the Federal Government states that management structures should establish and document roles and responsibilities needed to achieve an organization’s mission and objectives, and that such documentation should be approved, current, and binding on all appropriate stakeholders. DOD recognized that further guidance may be needed to implement the biometrics directive and began developing a draft instruction that would clarify the roles and responsibilities of DOD biometric stakeholders. However, the instruction has been in draft since 2008 and remains in draft as of February 2011. A DOD official told us that the instruction is being updated to include a larger oversight role for the Office of the Director of Defense Research and Engineering, especially for oversight of the Army’s role as DOD’s biometrics Executive Agent. It is not clear that DOD’s draft instruction, when completed, will improve stakeholders’ understanding of roles and responsibilities for DOD biometric activities.
For example, with the March 2010 DOD change of the Biometrics Task Force to BIMA, it is unclear whether the new instruction will include redefined roles and responsibilities associated with BIMA. DOD officials told us that the only documentation they received about the change of the Biometrics Task Force to BIMA was a memorandum in March 2010 that simply stated the name change but contained no additional information on roles and responsibilities. Further, DOD documents that could provide some clarity on roles and responsibilities by assigning specific actions to DOD biometric stakeholders, such as the Biometric Enterprise Strategic Plan 2008-2015 and the corresponding Implementation Plan, have not been updated to reflect the change. According to BIMA officials, both the Biometric Enterprise Strategic Plan and its corresponding Implementation Plan are currently being revised. DOD has an opportunity to further clarify roles and responsibilities through its implementing instruction to help ensure that collection devices are interoperable with other federal agencies’ systems. DOD is sharing its biometric information and has an agreement with DOJ that allows for direct connectivity and the automated sharing of biometric information between their biometric systems. However, DOD’s ability to optimize sharing is limited by not having a finalized sharing agreement with DHS and by its capacity to process biometric information. Currently, DOD and DHS do not have a finalized agreement in place to allow direct connectivity between their biometric systems, due to the need for additional reviews of the proposed agreement by certain DHS officials, among others. DOD is working with DHS to develop a memorandum of understanding to share biometric information, now scheduled for completion in May 2011; however, without the agreement, it is unclear whether direct connectivity will be established between DOD and DHS, which affects response times to search queries.
In addition, agencies’ biometric systems have varying capacities based on their mission needs, which affects their ability to process each other’s queries for biometric information. Moreover, the advancements other agencies make in their biometric systems may continue to outpace DOD’s efforts as it works to identify its long-term biometric system capability needs and associated costs. DOD is sharing its biometric information with DOJ under an agreement that allows for direct connectivity and automated sharing between their biometric systems. Specifically, DOD and the FBI (a component of DOJ) have an agreement in place that allows for direct connectivity and the automated sharing of unclassified biometric information between their biometric systems. Until DOD and DHS establish direct connectivity between their two biometric systems, they have the option to use the FBI’s biometric system as an indirect link to share limited biometric information (see fig. 3 below). Additionally, as mentioned earlier, according to DOD and DHS officials, DOD manually provides DHS with biometric records on watch-listed individuals. In support of national directives and laws directing federal agencies to share information, the DOD directive on biometrics directs the development of interagency agreements for biometrics activities, as appropriate, to maximize effectiveness. According to officials from the Office of the Under Secretary of Defense for Policy, in 2003 the FBI formally requested that DOD share biometric information, and from that point, the agencies established data sharing with each other. DOD and the FBI finalized the memorandum of understanding in 2009 to provide for the sharing of, among other things, unclassified biometric information, as part of the agencies’ efforts to comply with National Security Presidential Directive-59/Homeland Security Presidential Directive-24.
As part of the memorandum, DOD and the FBI agree to share their biometric information with each other in a timely manner when their respective missions require access to such data. In addition to DOD and the FBI’s agreement to share biometric information, DHS, State, and DOJ have agreements in place that allow for direct connectivity and the automated sharing of biometric information among their biometric systems—capabilities that support the collection, storage, use, and sharing of biometric data. Specifically, DHS and State established a memorandum of understanding in 2005 to facilitate interagency cooperation and sharing of, among other things, biometric information on visa applicants and biometric information stored on DHS’s biometric system, to enhance border security and facilitate legitimate travel. State uses DHS’s biometric system for storing and sharing copies of their biometric information. Additionally, DHS, DOJ, and State established a memorandum of understanding in July 2008 to improve information sharing among the three agencies for the purposes of such missions as national security, law enforcement, immigration, and border management. The July 2008 memorandum included an agreement to share, among other things, biometric information through interoperability between the agencies’ biometric systems. According to FBI officials, the FBI initiated the interoperability agreement in 2005 to exchange biometric information between DOJ’s and DHS’s biometric systems and gained access to DHS’s full biometric system in 2008. However, according to DHS officials, initial sharing of DHS high priority biometric information with DOJ’s biometric system began in 2006, such as information on individuals expedited for removal and those denied visas. 
DOD and DHS currently do not have an agreement in place that allows for direct connectivity between their biometric systems; however, DOD is currently in the process of working with DHS to develop a memorandum of agreement to share biometric information. DOD also does not have an agreement in place to directly share information with State; however, according to DOD officials, State sharing requirements will be covered in the agreement between DOD and DHS. According to the draft memorandum, the intent of the document is to formalize the ongoing relationship between DOD and DHS and to clarify their commitment to permitting the maximum amount of biometric information sharing permitted by law. Among other delays, in July 2010, DOD officials informed us that the draft memorandum was undergoing a subsequent review at DHS because some individuals at DHS had been inadvertently left off the initial review. As of January 2011, DOD and DHS have not signed an agreement that allows for direct connectivity between their biometric systems. We reported in 2008 that DHS officials acknowledged that establishing a sharing agreement with DOD would increase sharing of biometric information between the agencies and close any gaps. According to DHS officials, having such an agreement in place would allow DOD and DHS to access each other’s biometric systems when needed for reasons such as detainee screening and airport passenger screening. Direct access would reduce response times to search queries because currently DOD and DHS biometric systems do not have direct connectivity and therefore do not have automated search capabilities so the response times vary. We recognize that developing an agreement to share information takes time; for example, it took over 5 years to develop the memorandum of understanding between DOD and the FBI. 
DOD and DHS officials stated they had hoped to have the memorandum completed by the end of 2010; however, as of January 2011 the agreement had not yet been completed. Throughout our review, DOD officials provided us several completion dates and reasons for delay of the memorandum between DOD and DHS. In December 2010, DOD anticipated completing a signed agreement with DHS no later than May 31, 2011. According to DOD and DHS officials, some sharing of information is occurring among DOD, DHS, and State, even though DOD and DHS do not have a finalized sharing agreement. We reported in 2008 that DOD and DHS had not established direct connectivity between their two biometric systems and relied on the FBI’s biometric system as an indirect link between DOD and DHS. At the time, while limited occasional sharing of DOD and DHS biometrics occurred, it did not happen on a regular basis. According to DOD, DHS, and FBI officials, the indirect sharing arrangement through the FBI’s biometric system is still in place, as shown in figure 3. The FBI maintains an Interim Data Sharing Model, which consists of two parts: the FBI provides a set of data to DHS for DHS stakeholders to access, and DHS provides a set of data, including biometric information on individuals with expedited removals and individuals who were denied visas, to the FBI for FBI stakeholders, including DOD, to access. Furthermore, the FBI retains on its IAFIS some biometric information from DOD on non-U.S. persons, such as those who have criminal records, which allows DHS and State to access limited information from DOD through the FBI’s biometric system. However, both DOD and FBI officials noted that the FBI may terminate its Interim Data Sharing Model as the FBI transitions to its new biometric system.
In March 2011, FBI officials reported that DOD searches of the portion of the Interim Data Sharing Model containing information on expedited removals and individuals who were denied visas were discontinued on January 20, 2011. However, FBI’s IAFIS will continue to facilitate searches of DHS information for DOD until a direct connection has been established between DHS’s and DOD’s biometric systems, according to FBI officials. Since we reported in 2008, DOD and DHS have established a manual process for sharing information on at least a daily basis—once every 24 hours—through the use of a secured website. DOD manually inputs to this website copies of critical DOD biometric information that DHS can manually access and place onto its own biometric system. The State Department can access this information once it is stored on DHS’s biometric system. However, DHS and State may not be able to take immediate action should they have a query prior to DOD’s once-a-day update. In addition, as noted in our 2008 report, if DHS and State do not have access to DOD biometric information on individuals trying to enter the United States, then they may not be able to determine whether those individuals should be denied entry, and potential harm could come to U.S. interests from individuals inadvertently allowed into the United States. Officials from DOD, DHS, and the FBI have discussed the goal of direct connectivity among their biometric systems to better enable automated sharing of biometric information (see fig. 4). However, as noted earlier, without a finalized agreement between DOD and DHS, it remains unclear when or whether direct connectivity will be established between DOD’s and DHS’s biometric systems.
To enable agencies to meet the demand for searching stored biometric information on their systems, agencies’ biometric systems have varying capacities based on their mission needs, which affects their ability to process each other’s queries for biometric information. As noted previously, the FBI’s IAFIS is a national fingerprint and criminal history system, while DHS’s IDENT is used for many purposes, including border security and visa and naturalization processing. DOD’s Next Generation ABIS is used to identify and verify non-U.S. persons and helps determine whether an individual poses a threat or potential threat to national security. DOD’s Next Generation ABIS is currently capable of handling 8,000 transactions per day. In contrast, according to FBI officials, the FBI’s IAFIS currently performs 100,000 to 200,000 search queries a day, while DHS manages over 160,000 search queries a day, according to DHS officials. DOD plans to increase capacity to 22,000 transactions per day in the third quarter of fiscal year 2011, with later upgrades to bring capacity up to 45,000 transactions per day, according to DOD officials. DOD officials do not believe that they need to match other agencies’ biometric system capacities because they do not anticipate receiving the same number of queries, given differences in mission. However, DOD and other agency officials have expressed concern that DOD’s biometric system is limited in its ability to maximize sharing of biometric information. The FBI has reported that DOD is currently meeting the FBI’s needs by supporting a capacity of 3,000 to 4,000 transactions per day that the FBI can submit as search queries against DOD’s Next Generation ABIS. However, FBI officials told us that they are concerned with DOD’s capacity because the Next Generation ABIS is not capable of handling all of the queries that the FBI receives.
FBI officials noted that DOD does not want the FBI to send every search query it receives through DOD’s biometric system. At this time, the FBI and DOD are working to define a targeted set of search queries for the FBI to send through Next Generation ABIS, according to FBI officials. However, a maximum transaction capacity has not yet been set for FBI submissions to DOD. Additionally, DHS officials believe DOD will need more capacity to handle search queries in order for direct interoperability between DOD and DHS to occur. DHS reported in November 2010 that when it establishes direct interconnectivity with DOD, it plans to send 13,000 search queries per day in 2011 and 14,000 search queries per day in 2012 to DOD’s Next Generation ABIS. DHS noted in January 2011 that transaction volumes for search queries from DHS to DOD’s biometric system are currently in flux and have not been finalized. However, DOD officials have acknowledged that their current system’s transaction capacity is limited for sharing because the number of queries from other federal agencies currently exceeds their biometric system capacity of 8,000 transactions per day. The advancements other agencies continue to make in their biometric systems may outpace DOD’s efforts as it works to identify its long-term biometric system capability needs and associated costs. At the same time that DOD carries out these expansion efforts, other agencies continue to make advancements in their biometric systems and will continue to do so for various reasons, including the addition of new technology and biometric modalities as emerging technologies and modalities are identified and matured. For example, as previously mentioned, DHS is considering iris and facial biometrics for future incorporation into its biometric system. In addition, the FBI is moving to an enhanced biometric system that will incorporate scars, marks, tattoos, face, iris, and palm biometrics.
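The capacity figures discussed above can be summarized in a simple back-of-the-envelope comparison. The sketch below uses only the daily transaction figures reported in this section; the combined-demand assumption (FBI-supported load plus DHS projections) and the function names are illustrative, not an official DOD sizing analysis.

```python
# Back-of-the-envelope check of whether DOD's Next Generation ABIS
# transaction capacity covers the daily query volumes reported in this
# section. Figures come from the report; the analysis is illustrative.

ABIS_CAPACITY_PER_DAY = {
    "current": 8_000,
    "planned (3Q FY2011)": 22_000,
    "planned (later upgrade)": 45_000,
}

# Assumed combined partner demand per day, by year: FBI-supported load
# (upper end of the 3,000-4,000 range) plus DHS's stated projections.
DEMAND_PER_DAY = {
    2011: 4_000 + 13_000,   # 17,000
    2012: 4_000 + 14_000,   # 18,000
}

def shortfall(capacity: int, demand: int) -> int:
    """Daily transactions by which demand exceeds capacity (0 if covered)."""
    return max(0, demand - capacity)

for year, demand in DEMAND_PER_DAY.items():
    for label, cap in ABIS_CAPACITY_PER_DAY.items():
        gap = shortfall(cap, demand)
        note = f"short by {gap:,}" if gap else "covered"
        print(f"{year}: {label} capacity {cap:,} vs demand {demand:,} -> {note}")
```

Under these assumptions, the current 8,000-transaction capacity falls well below combined partner demand, while the planned upgrades would cover it, which is consistent with the concerns agency officials expressed above.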
Such agency biometric system advancements could exceed DOD’s biometric system’s capability to respond. In light of this, DOD may not be able to facilitate sharing of biometric information across federal agencies in a timely and efficient manner, in accordance with DOD policies. Specifically, DOD’s biometric directive requires that biometric systems be interoperable with other identity management capabilities and systems both internal and external to DOD, to maximize effectiveness, as well as information-sharing efforts. Furthermore, DOD’s biometrics strategic plan outlines as a primary objective that DOD operate and maintain biometric systems that enable sharing with other biometric systems as part of DOD’s goal to meet the warfighters’ needs in a timely manner. National security challenges from multiple sources continue to increase, making it critical that federal agencies find effective ways to collaborate and share information, particularly biometric information, on those who would threaten the United States. DOD has taken steps to adopt biometric standards that could improve the quality of biometric information collected and has increased its efforts to share biometric information with key federal agencies. However, DOD could take certain actions to help improve its ability to collect and share biometric information with other federal agencies. For example, DOD has adopted standards for the collection of biometrics to enhance interoperability with other key federal agencies’ biometric systems, but at least one DOD device, a handheld biometric collection device used by the Army that is responsible for the collection of over 600,000 biometric records, does not meet DOD adopted standards.
DOD can take steps to improve conformance to DOD adopted standards by implementing a process for updating biometric collection devices in the acquisition process to new standards, testing devices more thoroughly for conformance to adopted standards to better facilitate interoperability with federal agencies, and more fully defining the roles and responsibilities of DOD entities to ensure its collection devices meet DOD adopted standards. Without these steps, DOD limits its ability to identify potential criminals or terrorists who have biometric records in other federal agencies’ biometric systems, and the military services may incur delays and additional costs if they find they have acquired a device that is no longer acceptable to DOD. In addition, DOD has agreements in place with key federal agencies such as DOJ to help facilitate direct connectivity between their biometric systems, but it has not finalized an agreement with DHS and, by extension, the State Department, which affects timely interoperability. Finally, the system capacities at these key federal agencies exceed that of DOD to the extent that agencies have expressed concern that DOD’s biometric system may be unable to meet the search demands from their own biometric systems within useful response time frames. Without efforts to address these issues, the quality and process of collecting and sharing biometrics may continue to limit DOD’s ability to identify, in a timely manner, potential criminals or terrorists who have biometric records in other federal agencies’ biometric systems, and ultimately these challenges to interoperability may place U.S. national security at greater risk.
To improve DOD’s ability to collect biometric information and to help ensure that federal agencies are sharing biometric information on individuals who pose a threat to national security to the fullest extent possible, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, as the Principal Staff Assistant responsible for the oversight of DOD biometrics, to take the following five actions in collaboration with other key federal agencies and internal DOD stakeholders, including BIMA, U.S. Army, U.S. Navy, U.S. Marine Corps, and U.S. Air Force:

- Implement a process for updating collection devices to adopted standards to help ensure that all DOD systems related to biometrics, including collection devices, conform to adopted standards.
- Implement a process for testing collection devices at a sufficiently detailed level to help ensure that all DOD systems related to biometrics, including collection devices, conform to adopted standards.
- More fully define and further clarify the roles and responsibilities needed to achieve DOD’s biometric program and objectives for all stakeholders, including ensuring that collection devices conform to adopted standards.
- Complete the memorandum of agreement with the Department of Homeland Security regarding the sharing of biometric information, as appropriate and consistent with U.S. laws and regulations and international agreements, as well as information-sharing environment efforts.
- Identify DOD’s long-term biometric system capability needs, including the technological capacity and associated costs needed to support the warfighter and to facilitate sharing of biometric information across federal agencies, and take steps to meet those capability needs, as appropriate and consistent with U.S. laws and regulations, international agreements, and available resources.

In written comments on a draft of this report, DOD agreed with all of our recommendations.
DOD’s comments appear in their entirety in appendix III. DHS, DOJ, State, and the Department of Commerce’s National Institute of Standards and Technology also reviewed a draft of this report. We received technical comments from DHS and DOJ, which we have incorporated as appropriate. DOD agreed with our recommendation to implement a process for updating collection devices to adopted standards to help ensure that all DOD systems related to biometrics, including collection devices, conform to adopted standards. In its response, DOD noted that the legacy HIIDE devices are near the end of their service life and are being retired. DOD intends to procure an updated handheld device to replace the HIIDE that complies with the mandated data standard (EBTS 1.2 at the time the solicitation was developed and published), as required by DOD Directive 8521.01E for all new acquisitions. DOD expects to award this contract in April 2011, with fielding in August 2011. DOD further stated that its Biometrics Standards Conformity Assessment Test Program plans to verify compliance of the updated handheld devices before deployment, and DOD plans additional engineering efforts to update devices to the recently adopted EBTS 2.0 standard to ensure compatibility with interagency partners. DOD agreed with our recommendation to implement a process for testing collection devices at a sufficiently detailed level to help ensure that all DOD systems related to biometrics, including collection devices, conform to adopted standards. In its response, DOD stated that it has established a Biometrics Standards Conformity Assessment Test Program, accredited in January 2011 as part of the National Institute of Standards and Technology’s (NIST) National Voluntary Laboratory Accreditation Program (NVLAP) for biometric testing.
Relevant tests include conformance tests to DOD EBTS and the FBI Electronic Fingerprint Transmission Specification, as well as evaluations and assessments of biometric-enabled devices and systems that interoperate with the authoritative biometrics database and other repositories of biometric data. DOD added that the current DODD 8521.01E already requires such compliance testing for new biometrics acquisitions, but DOD noted, and we agree, that the directive does not fully address quick reaction capabilities such as the HIIDE. DOD further added that it plans to work with the FBI to develop a co-sharing arrangement to leverage existing standards compliance testing at the FBI Biometric Center of Excellence to strengthen interagency interoperability. DOD stated that it plans to include these requirements in the biometric DOD directive no later than September 2011. We agree that incorporating into the biometric DOD directive the requirements for conformance testing of biometric systems through the newly established Biometrics Standards Conformity Assessment Test Program, conformance testing for all biometric devices, and co-sharing arrangements with the FBI Biometric Center of Excellence would be beneficial. DOD agreed with our recommendation to more fully define and further clarify the roles and responsibilities needed to achieve DOD’s biometric program and objectives for all stakeholders, including ensuring that collection devices conform to adopted standards. In its response, DOD indicated that it is updating DOD Directive 8521.01E, “Defense Biometrics,” which establishes policy, assigns responsibilities, and describes procedures for DOD biometrics. DOD further noted that the update to the DOD biometrics directive will more fully define and clarify the roles and responsibilities of biometrics stakeholders, including responsibilities for testing collection devices for compliance with adopted standards. According to DOD, the biometric directive will be completed by September 2011.
DOD agreed with our recommendation to complete the memorandum of agreement with the Department of Homeland Security regarding the sharing of biometric information as appropriate and consistent with U.S. laws and regulations and international agreements, as well as information-sharing environment efforts. On February 14, 2011, we provided DOD a draft of this report for review and comment. In response to our draft recommendation, and while the report was under review, DOD finalized an agreement with DHS regarding biometric sharing on March 3, 2011. DOD agreed with our recommendation to identify its long-term biometric system capability needs, including the technological capacity and associated costs needed to support the warfighter and to facilitate sharing of biometric information across federal agencies, and take steps to meet those capability needs, as appropriate and consistent with U.S. laws and regulations, international agreements, and available resources. In its response, DOD noted that ABIS is currently meeting all the sharing transactions required by DHS and the FBI, and DOD has expansion plans in place to increase ABIS’s capability to over 40,000 daily transactions, which according to DOD will continue to meet the 14,000 daily biometrics transaction rate articulated by DHS for 2012. Further, DOD stated that it continues to work closely with the interagency Interoperability Executive Steering Committee to ensure DOD has visibility as new interagency requirements coalesce and can modify ABIS expansion plans to be responsive to its interagency sharing responsibilities. According to DOD, it expects to have an updated ABIS sizing plan to support the projected future DOD and interagency transaction requirements by July 2011. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date.
At that time, we will send copies to the appropriate congressional committees; the Secretary of Defense; the Secretary of State; the Attorney General; the Secretary of Commerce; the Secretary of Homeland Security; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or at dagostinod@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report addresses the extent to which DOD (1) adopted standards and has taken actions to facilitate the collection of biometrics that are interoperable with other key federal agencies, and (2) shares biometric information across key federal agencies. To address our objectives, we reviewed prior GAO reports related to the collection, storage, use, sharing, and management of biometric information and interagency sharing of information for national security purposes. We also analyzed a number of Presidential Directives, Executive Orders and Memorandums, and laws that affect the collection and sharing of biometric and biographic information. For example, we analyzed National Security Presidential Directive-59/Homeland Security Presidential Directive-24 and the companion action plan for Biometrics for Identification and Screening to Enhance National Security, which establish a framework to ensure that federal executive departments and agencies use compatible methods and procedures for the collection and sharing of identity information across federal departments and agencies.
In addition, we reviewed national strategies focused on information sharing and national security to gain an understanding of how biometrics collection and sharing plays a part in achieving national goals of gathering and sharing information to protect the United States. We contacted and obtained information from officials and entities associated with the collection, storage, use, and sharing of biometric information across the Department of Defense (DOD), as well as other key federal agencies, including the Department of Justice (DOJ)/Federal Bureau of Investigation (FBI), Department of State (State), and the Department of Homeland Security (DHS). Further, we conducted an interview with officials of the National Science and Technology Council to determine the role and interests that the White House has in biometrics. We conducted site visits to a selection of facilities that analyze, store, and share biometric information, including the Army’s National Ground Intelligence Center, in Charlottesville, Virginia; the Army’s Biometric Identity Management Agency; and the FBI’s Criminal Justice Information Services complex, both located in Clarksburg, West Virginia; to discuss the use of applicable standards, federal agency biometric systems interoperability, and to gain perspective on the sharing of biometric information between federal agencies. We met with U.S. Central Command and U.S. Special Operations Command officials to obtain their views on how these two combatant commands had operationalized the collection of biometric information. More detailed information on the federal agencies and officials we obtained information from on the collection, use, storage, and sharing of biometric information during our review appears below in table 1. 
To determine the extent to which DOD adopted standards and has taken actions to facilitate the collection of biometrics that are interoperable with other key federal agencies, we interviewed DOD officials and reviewed key DOD memoranda, directives, and guidance, such as the DOD Directive on Biometrics. In addition, we interviewed officials from DHS, State, and DOJ/FBI to gain their perspective on the collection and sharing of comparable biometric information among federal agencies. We reviewed national standards and requirements for the electronic formatting of biometric information to see whether key federal agencies follow a common set of standards for the collection of biometric information. For example, we reviewed DOD’s Electronic Biometric Transmission Specification, which is based on recommended standards from the American National Standards Institute and the National Institute of Standards and Technology. We interviewed officials from the National Institute of Standards and Technology in order to obtain their perspective on the use of standards for the consistent collection of biometric information and how these standards are adopted by federal agencies to help ensure interoperability of the devices used to collect biometric information. We reviewed a DOD interoperability assessment report on the Automated Biometric Identification System and Army evaluations of DOD’s Handheld Interagency Identity Detection Equipment to identify interoperability and conformance to standards within these systems. We did not evaluate the technical performance of collection devices used to gather identity information. We discussed with federal agency officials the potential impact of adopted standards on collection devices and systems that do not conform to the standards and on agencies’ ability to collect comparable biometric information. In addition, we reviewed key DOD biometric documentation to determine management practices related to the collection of DOD biometrics and interviewed key officials from DOD responsible for the management of the collection of biometrics (see table 1 above). Specifically, using criteria on internal control and program management from the Office of Management and Budget and the Project Management Institute’s The Standard for Program Management, we analyzed DOD guidance on the collection of biometrics to determine whether any internal control or program management weakness may reduce its ability to collect biometric information and meet biometric mission objectives. To gather the perspective of DOD biometric program management, we interviewed DOD biometric stakeholders such as the military services, the Biometric Identity Management Agency, and combatant commands. In addition, we interviewed agency officials from the FBI and DHS to gather their perspectives on DOD’s management practices related to the collection of biometrics. To determine the extent to which biometric information is shared and has the system capacity needed to facilitate biometric sharing across key federal agencies, including DOD, we interviewed officials from DOD, DHS, State, and the FBI on the policies, governance processes, and systems in place for sharing biometric information—DOD’s Automated Biometric Identification System (ABIS), DHS’s Automated Biometric Identification System (IDENT), and the FBI’s Integrated Automated Fingerprint Identification System (IAFIS). We analyzed the formal and draft agreements for sharing biometric information between agencies to better understand the scope of the biometric information shared, as well as any limitations, and the degree to which they help facilitate direct connectivity between the biometric systems to promote automated sharing.
In ctivity addition, we collected and reviewed federal policies, guidance, and othe r documentation that covered the sharing of biometric information and the current and planned systems that support biometric information s haring. For example, we reviewed DHS’s IDENT Data Response Sharing Policy, which reinforces the DHS agreement with State and DOJ/FBI on shar ing biometric information. We reviewed information provided by the FBI on IAFIS and their planned changes to the Next Generation Identification system that would expand their biometric capabilities from fingerprints include the collection, matching, storage, and sharing of other biometrics to such as facial and iris images. In order to confirm information provided by agency officials in interviews on the three primary biometric systems, we developed a structured questionnaire that was pre-tested and provided to key agency officials responsible for each of the three biometric systems. We conducted this performance audit from December 2009 through 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit t obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Based on the figures provided by DOD as of November 2010, about $3.5 billion has been or will be spent to fund its biometrics programs from fiscal year 2007 through fiscal year 2015. DOD reports that almost two- thirds of the funding for its biometric program from fiscal year 2007 through fiscal year 2015 is drawn from the supplemental budget, which is in excess of DOD’s base defense budget. 
Specifically, DOD reports that for fiscal years 2007 through 2011, supplemental funding accounts for over $2.0 billion for DOD’s biometric programs, with less than $500 million from defense base funding (see table 2). In contrast, for fiscal years 2012 through 2015, DOD estimates base funding at more than $600 million, with no funding from supplementals (see table 3). The change in funding, from supplemental support to base funding, is due in part to efforts to make DOD’s biometric systems a permanent program of record. DOD has begun to establish a more formal biometric program by identifying the requirements needed by the warfighter, assessing gaps in warfighting capabilities, and recommending solutions to resolve those gaps. DOD officials explain that as biometric technologies and systems become programs of record, funding should be built into base defense funding rather than supplemental funding. As shown, table 2 covers fiscal years 2007 through 2011 and identifies biometric program base and supplemental funding, while table 3 sets out fiscal years 2012 through 2015, for which it is currently unknown whether supplemental funding for the biometrics program will be requested. We have previously recommended that DOD shift certain contingency costs into the annual base budget to allow for prioritization and trade-offs among its needs and to enhance visibility in defense spending. With regard to its biometric program, DOD’s fiscal year 2012 through fiscal year 2015 budget plans shift funding into the base defense budget; however, DOD officials told us they anticipate a continued need for supplemental funding to support the war efforts, but were unable to provide an estimate. As DOD identifies the warfighter needs related to developing future biometric capabilities, these requirements will likely affect its future budget requests.
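The funding split described above can be checked with simple arithmetic. The sketch below uses only the rounded figures stated in this appendix ("over," "less than," and "more than" bounds); the exact amounts appear in tables 2 and 3.

```python
# Rounded DOD biometric program funding figures from the text, in $ billions.
supp_fy07_11 = 2.0   # supplemental, FY2007-2011 ("over $2.0 billion")
base_fy07_11 = 0.5   # base, FY2007-2011 ("less than $500 million")
base_fy12_15 = 0.6   # base, FY2012-2015 ("more than $600 million"); no supplemental planned

approx_total = supp_fy07_11 + base_fy07_11 + base_fy12_15
supp_share = supp_fy07_11 / approx_total

# The rounded bounds sum to about $3.1 billion; the reported total of about
# $3.5 billion reflects the exact amounts behind the "over"/"more than" bounds.
print(f"approximate total FY2007-2015: ${approx_total:.1f} billion")
print(f"supplemental share: {supp_share:.0%}")  # roughly two-thirds, as DOD reports
```
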
In addition to the contact named above, Penney Harwell Caramia, Assistant Director; Rebekah Boone; John Clary; Grace Coleman; Michele Fejfar; Lori Kmetz; Katherine Lenane; Amber Lopez Roberts; Greg Marchand; Jennifer Neer; Maria Stattel; Amie Steele; and Sonja Ware made key contributions to this report. Homeland Security: Key US-VISIT Components at Varying Stages of Completion, but Integrated and Reliable Schedule Needed. GAO-10-13. Washington, D.C.: November 19, 2009. Defense Management: DOD Can Establish More Guidance for Biometric Collection and Explore Broader Data Sharing. GAO-09-49. Washington, D.C.: October 15, 2008. Defense Management: DOD Needs to Establish Clear Goals and Objectives, Guidance, and a Designated Budget to Manage Its Biometrics Activities. GAO-08-1065. Washington, D.C.: September 26, 2008. Information Sharing Environment: Definition of the Results to Be Achieved in Improving Terrorism-Related Information Sharing Is Needed to Guide Implementation and Assess Progress. GAO-08-492. Washington, D.C.: June 25, 2008. Homeland Security: Strategic Solution for US-VISIT Program Needs to Be Better Defined, Justified, and Coordinated. GAO-08-361. Washington, D.C.: February 29, 2008. GAO Management Letter to the Secretary of Defense. Washington, D.C.: December 13, 2007. Terrorist Watch List Screening: Opportunities Exist to Enhance Management Oversight, Reduce Vulnerabilities in Agency Screening Processes, and Expand Use of the List. GAO-08-110. Washington, D.C.: October 11, 2007. Border Security: Security of New Passports and Visas Enhanced, but More Needs to Be Done to Prevent Their Fraudulent Use. GAO-07-1006. Washington, D.C.: July 31, 2007. Border Security: Strengthened Visa Process Would Benefit from Improvements in Staffing and Information Sharing. GAO-05-859. Washington, D.C.: September 13, 2005. Port Security: Better Planning Needed to Develop and Operate Maritime Worker Identification Card Program. GAO-05-106.
Washington, D.C.: December 10, 2004. Border Security: Joint, Coordinated Actions by State and DHS Needed to Guide Biometric Visas and Related Programs. GAO-04-1080T. Washington, D.C.: September 9, 2004. Border Security: State Department Rollout of Biometric Visas on Schedule, but Guidance Is Lagging. GAO-04-1001. Washington, D.C.: September 9, 2004. Technology Assessment: Using Biometrics for Border Security. GAO-03-174. Washington, D.C.: November 15, 2002.

Biometrics technologies that collect and facilitate the sharing of fingerprint records and other identity data are important to national security, and federal agencies recognize the need to share such information. The Department of Defense (DOD) plans to spend $3.5 billion for fiscal years 2007 to 2015 on biometrics. GAO was asked to examine the extent to which DOD has (1) adopted standards and taken actions to facilitate the collection of biometrics that are interoperable with other key federal agencies, and (2) shares biometric information across key federal agencies. To address these objectives, GAO reviewed documents including those related to standards for collection, storage, and sharing of biometrics; visited selected facilities that analyze and store such information; and interviewed key federal officials. DOD has adopted a standard for the collection of biometric information to facilitate sharing of that information with other federal agencies. DOD recognized the importance of interoperability and directed adherence to internationally accepted biometric standards. DOD applied adopted standards in some but not all of its collection devices. Specifically, a collection device used primarily by the Army does not meet DOD adopted standards. As a result, DOD is unable to automatically transmit biometric information collected to federal agencies, such as the Federal Bureau of Investigation (FBI).
For example, this device is responsible for 13 percent of the records maintained by DOD--the largest number of submissions collected by a handheld device, according to DOD. Further, this constitutes approximately 630,000 DOD biometric records that cannot be searched automatically against FBI's approximately 94 million. DOD has not taken certain actions that would likely improve its adherence to standards, all of which are based on criteria from the Standard for Program Management, the National Science and Technology Council, and the Office of Management and Budget guidance, respectively. First, DOD does not have an effective process, procedure, or timeline for implementing updated standards. Second, DOD does not routinely test at sufficient levels of detail for conformance to these standards. Third, DOD has not fully defined roles and responsibilities specifying accountability needed to ensure its collection devices meet new and updated standards. DOD is sharing its biometric information and has an agreement to share biometric information with the Department of Justice, which allows for direct connectivity and the automated sharing of biometric information between their biometric systems. DOD's ability to optimize sharing is limited by not having a finalized sharing agreement with DHS, and its capacity to process biometric information. Currently, DOD and DHS do not have a finalized agreement in place to allow direct connectivity between their biometric systems. DOD is working with DHS to develop a memorandum of understanding to share biometric information now scheduled for completion in May 2011; however, without the agreement, it is unclear whether direct connectivity will be established between DOD and DHS, which affects response times to search queries. Further, agencies' biometric systems have varying system capacities based on their mission needs, which affects their ability to similarly process each other's queries for biometric information. 
As a result, DOD and other agency officials have expressed concern that DOD's biometric system may be unable to meet the search demands from other agencies' biometric systems over the long term. DOD officials do not believe that they need to match other agencies' biometric system capacities because they do not anticipate receiving the same number of queries given differences in mission. However, the advancements other agencies make in their biometric systems may continue to overwhelm DOD's efforts as it works to identify its long-term biometric system capability needs and associated costs. To improve DOD's ability to collect and share information, GAO recommends that DOD implement processes for updating and testing biometric collection devices to adopted standards; fully define and clarify the roles and responsibilities for all biometric stakeholders; finalize an agreement with the Department of Homeland Security (DHS); and identify its long-term biometric system capability needs. DOD agreed with all of GAO's recommendations.
Definitions vary somewhat, but for the purpose of this report, we define alternative jet fuels as drop-in liquid fuels that are derived from non-petroleum feedstocks, including renewable biomass (such as crop and tree residues, algae, or separated municipal solid waste) and some nonrenewable sources (such as natural gas or coal). Because they are “drop-in,” alternative jet fuels can replace conventional petroleum-based jet fuel (i.e., conventional jet fuel) without the need to modify aircraft engines and fuel distribution infrastructure. This definition means that alternative jet fuels are a substitute for conventional jet fuel and would directly compete with it on the commercial market. Most development activities—both federal and private industry activities—are focused on alternative jet fuels derived from renewable sources, in part, because of the prominence of environmental sustainability in national strategies and a federal requirement that any alternative jet fuel procured by any federal agency for operational use must have lifecycle greenhouse gas emissions less than or equal to conventional fuel of the same purpose. Alternative jet fuels are generally derived from the same feedstocks and processes as other advanced fuels (e.g., renewable diesel) and even some industrial products (e.g., plastics). As a result, alternative jet fuel producers have some flexibility in producing a variety of fuels and other products, seeking to maximize their profit relative to demand and costs. Figure 1 describes the six segments of the supply chain for the development and use of alternative jet fuels. The first three segments of the alternative jet-fuel supply chain—feedstock production, feedstock logistics, and fuel production—are common to all alternative fuels, and the first two segments are generally independent of the end-use product or co-products that are produced.
For example, the camelina crop can be processed, converted, and refined into multiple end-use products, including renewable gasoline, diesel, and jet fuel, with the relative quantities of each of these end-use products determined by the fuel refiner. The activities associated with producing the camelina (e.g., growing the crop) and the logistical activities of collecting, storing, and transporting the camelina feedstock before it is converted to fuel are the same regardless of the relative quantities of renewable diesel or jet fuel produced. In the third segment of the supply chain, alternative jet fuels are generally co-produced with other end-use products, although fuel producers do have some flexibility in the mix of products they make. The remaining three segments of the supply chain—fuel testing and approval, fuel distribution, and end use—include activities that are specific to jet fuel. For example, this supply chain includes a segment for fuel testing and approval because before any alternative jet fuel can be approved for commercial or military use, it must meet unique safety and performance standards that are more rigorous than standards for other alternative transportation fuels. Standards for alternative jet fuels are set out in applicable standards controlled by ASTM International and the appropriate military department within DOD (the Navy and the Air Force). The requirements for fuel testing and approval under ASTM International and military standards vary by the characteristics of the fuel and the feedstock and production process used, but generally the requirements include a significant amount of fuel, engine, and aircraft testing. Initially, a fuel may be tested in a laboratory using small quantities of fuel (as little as 500 milliliters), but as it progresses through the approval process, fuel quantity requirements could reach as much as 225,000 gallons if extensive engine testing is required. 
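The jump in fuel volumes across the testing and approval process is easier to appreciate as a single ratio. The sketch below is illustrative arithmetic only, converting the two quantities quoted above into a scale-up factor using the standard U.S. gallon.

```python
# Fuel volumes quoted in the text for the testing-and-approval process.
LITERS_PER_GALLON = 3.785411784  # U.S. gallon

lab_sample_liters = 0.5                            # "as little as 500 milliliters" in the laboratory
engine_test_liters = 225_000 * LITERS_PER_GALLON   # "as much as 225,000 gallons" for extensive engine testing

scale_up = engine_test_liters / lab_sample_liters
print(f"required testing volume can grow by a factor of about {scale_up:,.0f}")
```
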
The last two segments of the alternative jet fuel supply chain—fuel distribution and end use—also include activities specific to jet fuels, but not specific to alternative jet fuels relative to conventional jet fuel. Specifically, according to the applicable ASTM International and military standards, once an alternative jet fuel is produced, certified, and released under the applicable standard, it also meets the standards for conventional jet fuel. Accordingly, alternative jet fuels can be seamlessly integrated into the existing jet fuel distribution system and onto aircraft without changes to any infrastructure. Although no alternative jet fuels are currently available in the United States at a competitive price, as discussed later in this report, two alternative jet fuel production processes—referred to as the Fischer-Tropsch process and the HEFA process—are approved for use by commercial and military aviation. Under the previously mentioned ASTM International and military standards, fuels produced through these two processes are approved for up to a 50 percent blend with conventional jet fuel. As with conventional jet fuel production processes, both of these processes produce multiple end-use products, including, for example, diesel and jet fuel. The fuel producer, to a limited extent, determines the relative quantities of jet fuel, diesel, and other products to be produced. For many feedstocks used in the HEFA process, however, changing the product ratio to produce more jet fuel and less diesel is more costly because it requires additional processing and increases the proportion of output that is composed of less-valuable co-products, such as liquefied petroleum gas. Seven other production processes for alternative jet fuels, such as converting alcohols to jet fuel, are undergoing review in the ASTM International testing and approval process and in some military departments’ testing and approval processes.
The White House has developed broad national strategies that promote the development of alternative fuels to help secure energy independence, foster economic development, and reduce greenhouse gas emissions. For example, the Blueprint for a Secure Energy Future (March 2011), the National Bioeconomy Blueprint (April 2012), and the President’s Climate Action Plan (June 2013), all describe how supporting the development of alternative fuels can contribute to achieving these broad national goals. However, these strategies also note that alternative fuels are only part of a wide variety of other complementary activities working toward the same goals. For example, according to the President’s Climate Action Plan, meeting U.S. greenhouse gas-emission reduction goals depends not only upon the development and use of alternative fuels, but also on numerous other activities such as increasing fuel economy standards, expanding and modernizing the electric grid, and improving the energy efficiency of homes and businesses. Some initiatives led by one or more of our five selected federal agencies involved in the development or use of alternative jet fuel help support these broad national strategies and are often similarly broad—focusing on issues that are common to a variety of alternative fuels, including alternative jet fuels. For example, to support the development of technologies and processes necessary for the commercial production of biofuels at prices that are competitive with conventional fuels, USDA and DOE jointly administer the Biomass Research and Development Initiative, which assists in developing these technologies through research, development, and demonstration projects. According to USDA and DOE officials, most of the initiative’s projects do not target alternative jet fuels specifically. 
However, officials from these agencies noted that general scientific advancements that broad initiatives identify can also advance alternative jet fuels development specifically. For example, research that improves the efficiency of a particular fuel-conversion process often supports the development of all fuels that the process could produce. The Energy Independence and Security Act (EISA) of 2007 expanded the Renewable Fuel Standard (RFS) to cover most surface transportation fuels, such as fuels for use in motor vehicles and engines and nonroad vehicles and engines, but not jet fuel. Overall, jet fuel is a fraction of the total transportation fuel consumed in the United States. The expanded RFS generally required that covered transportation fuels contain 9 billion gallons of renewable fuels in 2008, with renewable fuels’ volumes increasing annually to 36 billion gallons in 2022. To demonstrate compliance with the RFS, fuel producers or importers use renewable identification numbers (RINs). Fuel producers or importers can obtain RINs by purchasing and blending renewable fuels themselves, or they can purchase RINs from renewable fuel producers, importers, blenders, or other parties. In this way, the renewable fuel program has created a market for RIN credits. While jet fuel is not used to calculate a fuel producer’s or importer’s renewable fuel obligation, EPA determined in its March 2010 final rule for the expanded renewable fuel program that some feedstocks and conversion processes for renewable jet fuel qualify as “advanced biofuels,” one of the new categories of renewable fuel established by EISA. In addition, through regulations issued in March 2013, EPA clarified that some renewable diesel processes that had been previously evaluated included jet fuel and also approved additional jet-fuel pathways.
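The compliance mechanism described above can be illustrated with a simplified sketch. The percentage standard and refiner volumes below are hypothetical illustrations, not actual EPA figures, and real RIN accounting (fuel categories, credit vintages, and rollover limits) is considerably more involved.

```python
# Simplified sketch of RFS compliance accounting for an obligated party.
# All numeric inputs below are hypothetical, for illustration only.

def renewable_obligation(gallons_sold: float, pct_standard: float) -> float:
    """Renewable volume obligation, in gallons (one RIN per gallon here)."""
    return gallons_sold * pct_standard

def rins_to_purchase(obligation: float, rins_from_own_blending: float) -> float:
    """RINs the party must buy on the market after crediting its own blending."""
    return max(0.0, obligation - rins_from_own_blending)

# Hypothetical refiner: 100 million gallons sold, a 9% standard, and
# 5 million RINs generated by blending renewable fuel itself.
obligation = renewable_obligation(100e6, 0.09)   # about 9 million RINs owed
shortfall = rins_to_purchase(obligation, 5e6)    # about 4 million RINs to buy
print(f"obligation: {obligation/1e6:.1f}M RINs; to purchase: {shortfall/1e6:.1f}M RINs")
```
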
If alternative jet fuels produced through these approved processes generate RIN credits—which the fuel producer may then sell to others—their sale may help subsidize the cost of producing qualifying alternative jet fuels. Four of the selected agencies—FAA, DOD, USDA, and DOE—support initiatives that target alternative jet-fuel development or use specifically. FAA and DOD have established specific goals for using alternative jet fuels in commercial and military aircraft and support research and development (R&D) activities—such as testing to approve new alternative jet fuels—to help them achieve these goals. DOD, USDA, and DOE also have initiatives that provide direct financial support for future alternative jet-fuel production on a commercial scale. All of these agencies coordinate their alternative jet fuel-related efforts with industry and other stakeholders through partnerships and agreements. FAA and DOD have established usage goals specifically for alternative jet fuels. In fiscal year 2012, FAA set a goal for the U.S. aviation industry (including commercial and military aircraft) to use 1 billion gallons of alternative jet fuels annually by 2018 with the intent of encouraging commercial production. According to FAA officials, this represents about 5 percent of the predicted jet fuel consumption for domestic airlines and the military in 2018. Achieving this goal, however, will depend on a variety of factors, including support from other federal agencies and industry stakeholders. USDA and other nongovernmental stakeholders have stated their intent to help enable commercial production of alternative jet fuels in support of FAA’s goal through existing programs and expanded collaboration. Two of DOD’s military departments—the Navy and the Air Force—have also established usage goals for alternative fuels, including alternative jet fuels.
To support these usage goals, the Navy and Air Force are willing to purchase alternative fuels that meet specific criteria, including availability at a price that is competitive with conventional fuels. The Navy’s 2010 A Navy Energy Vision for the 21st Century states that increasing its use of alternative energy—including alternative jet fuels—will help protect it from energy price volatility and supply disruptions. The plan sets a goal of deriving 50 percent of total Navy energy consumption afloat—including its jet fuel consumption—from alternative sources by 2020, which, according to Navy estimates, would require using about 336 million gallons of alternative fuels annually (both marine and jet fuels) by 2020. The Navy consumes over 600 million gallons of petroleum-based aviation fuel each year, which, according to a Navy official, constitutes about 40 percent of its total petroleum consumption. In addition to setting quantitative goals, the plan established a goal of demonstrating (which the Navy completed in July 2012) and deploying the Great Green Fleet—a group of ships and aircraft fueled by alternative jet fuels and other alternative energy sources—by 2016. The Air Force’s 2013 U.S. Air Force Energy Strategic Plan includes a goal of increasing the use of cost-competitive drop-in alternative jet-fuel blends for non-contingency operations to 50 percent of total consumption by 2025. According to the plan, the Air Force consumes about 2.5 billion gallons of jet fuel each year, accounting for about 80 percent of its total energy consumption. The plan states that using alternative jet fuels could help to diversify the types and secure the quantities of energy needed to perform the Air Force’s missions, which are currently “heavily dependent” upon petroleum and petroleum-derived fuels, a dependence that poses significant strategic and security vulnerabilities.
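Taken together with the consumption figures quoted above, these goals imply volumes on roughly the following order. The sketch below is illustrative arithmetic only, using the figures as stated in the text.

```python
# Usage-goal figures stated in the text (gallons per year).
navy_jet_fuel = 600e6        # Navy petroleum-based aviation fuel ("over 600 million")
navy_alt_goal = 336e6        # Navy estimate of alternative fuels needed by 2020 (marine + jet)

air_force_jet_fuel = 2.5e9   # Air Force annual jet fuel consumption
air_force_goal_share = 0.50  # 50% of non-contingency consumption as alternative blends by 2025

# A 50% goal against 2.5 billion gallons implies on the order of 1.25 billion
# gallons of blended fuel annually; the alternative component of those blends
# would be smaller, since approved blends are capped at 50 percent.
implied_air_force_volume = air_force_jet_fuel * air_force_goal_share
print(f"Navy goal: {navy_alt_goal/1e6:.0f} million gallons of alternative fuels by 2020")
print(f"Air Force goal implies ~{implied_air_force_volume/1e9:.2f} billion gallons of blends by 2025")
```
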
Two types of federal initiatives specifically support the development and use of alternative jet fuels—sponsoring R&D and direct financial support for future commercial production. FAA, DOD, and DOE support R&D activities that target alternative jet fuels specifically, such as testing to approve new alternative jet fuels or research to determine the environmental or economic impact of using them. USDA, DOE, and DOD have also taken some steps to provide direct financial support for future commercial-scale production of alternative jet fuels. DOT—primarily FAA—supports activities to determine the technical feasibility and impact of using alternative jet fuels, including ways to reduce the cost of production, through FAA’s Continuous Lower Energy, Emissions and Noise (CLEEN) program and Centers of Excellence (COEs), and through other DOT activities. CLEEN Program: Launched by FAA’s Office of Environment and Energy in 2010, CLEEN is a cost-sharing program that, among other things, supports fuel-testing activities to generate data that can be used to support the approval of new alternative jet fuels. According to FAA officials, under the program, FAA is providing about $125 million in matching funds (of which about $93 million was provided through fiscal year 2013) to five projects with engine and airframe manufacturers. According to FAA officials, four of these projects address issues related to alternative jet fuels development or use. For example, through fiscal year 2013, FAA has awarded about $5.5 million through the CLEEN program to conduct laboratory and engine-component tests of advanced alternative jet fuels that could be approved for commercial use by ASTM International. FAA has announced its plans to implement a follow-on program in 2015, called CLEEN II—when FAA plans to end the initial CLEEN program. Like the initial program, CLEEN II’s goals will include developing and demonstrating drop-in sustainable alternative jet fuels.
However, FAA officials told us that under CLEEN II they intend to place an emphasis on advancing coordinated test methods and capabilities to reduce testing cost and time and the possibility of redundant testing by multiple engine manufacturers. Centers of Excellence: FAA also sponsors research studies related to the environmental impact of using alternative jet fuels through its COEs. Beginning in 2003, FAA sponsored the Center of Excellence for Aircraft Noise and Aviation Emissions Mitigation, named Partnership for AiR Transportation Noise and Emissions Reduction (PARTNER), a collaborative effort that researched solutions for existing and anticipated aviation-related noise and emissions problems. According to FAA officials, five PARTNER projects focused on alternative jet fuels specifically. For example, one project studied the economic feasibility, production potential, and environmental impact of alternative jet-fuel use. Upon the expiration of PARTNER’s 10-year cooperative agreement in September 2013, and as required in the FAA Modernization and Reform Act of 2012, FAA selected a team of universities to form a new COE for Alternative Jet Fuel and Environment, named the Aviation Sustainability Center (ASCENT), with research goals that include better understanding ways to reduce the costs of production processes and ways to meet FAA’s goal of using 1 billion gallons of alternative jet fuels by 2018. ASCENT is being led by Washington State University and the Massachusetts Institute of Technology, and is expected to receive at least $4 million annually for 10 years to explore ways to meet FAA’s environmental and energy goals, including sustainable alternative jet fuels. DOT activities: DOT also helps fund research through broad agency announcements for FAA-sponsored projects that address specific alternative jet fuels’ testing needs.
Specifically, in 2010 DOT invited research in four priority areas: development of novel “drop-in” alternative jet fuels, alternative jet fuels’ quality control, sustainability guidance for alternative jet fuels’ users, and performance and durability testing of new fuels. According to FAA officials, DOT’s John A. Volpe National Transportation Systems Center has administered six FAA-sponsored broad agency announcement research projects related to alternative jet fuels at a total cost of about $7 million. For example, DOT provided funds to three alternative jet-fuel producers to develop and optimize conversion processes for alternative jet fuels. These funds better positioned the fuel producers to produce fuel for testing purposes and develop knowledge to help overcome key technical hurdles to commercial-scale production. DOD also supports activities to test and approve alternative jet fuels. All three DOD military departments coordinate their alternative jet-fuel testing and approval efforts through the Tri-Service Alternative Fuels Working Group. According to Air Force officials, this working group helps share data, reports, testing, and certification practices across DOD and is working toward developing a department-wide certification strategy. Specifically, the Army, Navy, and Air Force test alternative jet fuels to ensure that they are safe to use on military ships, aircraft, and fuel distribution systems. Their testing programs capture technical data through laboratory, component, engine, fuel system, and weapon system tests that evaluate the effects of changes in fuel chemistry and properties on the performance and reliability of military equipment. According to DOD officials, the department purchased about 1.5 million gallons of alternative jet fuels to conduct the department’s testing and approval activities from fiscal years 2007 to 2013 at a total cost of almost $40 million.
Officials from the Navy and Air Force told us that these activities will help enable them to achieve their stated goals for alternative jet-fuels use. DOE also, on behalf of DOD, recently solicited applications for R&D projects that help enable conventional coal-to-liquid production plants to produce commercially viable quantities of jet fuel that have equal or lower greenhouse gas emissions and make significant progress toward being cost-competitive with conventional jet fuel. DOE expects to select and award applications by the end of August 2014, with about $20 million available under the solicitation. In addition, federal agency officials representing eight federal agencies recently formed an interagency working group that is currently drafting a national R&D strategy for alternative jet fuels. According to members of the working group, a national R&D strategy is needed to create a national vision for alternative jet fuels specifically and a unified federal government approach to help facilitate interaction with external stakeholders, such as industry and academia. As part of the working group’s efforts, in January 2014, the working group sponsored a workshop attended by government and industry stakeholders. The workshop identified a variety of challenges to making alternative jet fuels, including challenges associated with feedstock logistics, fuel production and scale-up, and fuel certification and qualification, as well as other cross-cutting issues. According to members of the working group, the national strategy for alternative jet fuels will focus on R&D challenges and will not address policy issues.
Direct Financial Support for Future Commercial Production

In June 2011, USDA, DOE, and one of DOD’s military departments (the Navy) signed a memorandum of understanding (MOU) that initiated cooperation among these agencies in assisting the development and support of a sustainable commercial biofuels industry, which could produce alternative jet fuels among other types of biofuels. The MOU explained that given the current economic environment, significant start-up risks, and competitive barriers of an established conventional fuels market, it is necessary for the federal government to cooperate with private industry to create a strong demand signal and to make targeted investments to achieve the necessary alternative-fuels-production capacity. The stated objective is to construct or retrofit multiple domestic commercial- or pre-commercial-scale advanced drop-in biofuel production facilities. Specific characteristics required for the facilities include that the biofuels that they produce must be capable of meeting military fuel standards at a price that is competitive with conventional jet fuel and have no significant impact on the supply of agricultural commodities for the production of food. Under the MOU, USDA, DOE, and the Navy stated their intent to contribute $170 million each over 3 years, for an aggregate total of $510 million. Under the authority of the Defense Production Act, Title III, DOE and the Navy planned to fund their share of $340 million for capital investment and production. USDA planned to provide its contribution under the authority of the Commodity Credit Corporation Charter Act. Under this MOU, in June 2012, USDA, DOE, and DOD announced the initiation of and a solicitation for the Advanced Drop-In Biofuels Production Project, which would provide awards for biofuels production facilities over two phases.
In May and June 2013, four private companies were selected to receive awards totaling $20.5 million, with private industry paying at least 50 percent of the cost. According to Defense Production Act Title III program officials, the Advanced Drop-In Biofuels Production Project should provide production capacity for about 35 million gallons per year of renewable jet fuels that meet military standards and are available at a price that is competitive with conventional fuels by 2016. But the amount of production capacity is dependent, in part, on the timing and the number of awards for the Advanced Drop-In Biofuels Production Project’s second phase. According to DOD officials, the department plans to make its determination for the second phase of awards in July 2014. More recently, in December 2013, the Secretaries of USDA and the Navy announced another initiative that complements the Advanced Drop-In Biofuels Production Project called Farm to Fleet, which is intended to help the Navy meet its alternative-fuels usage goals. Under the initiative, DOD plans—through its regular domestic bulk-fuel purchases—to issue solicitations in 2014 for the purchase of about 80 million gallons of any combination of jet and marine diesel fuels in 2015 that are blended with at least 10 percent alternative fuels. USDA plans to contribute up to about $161 million (under the authority of the Commodity Credit Corporation Charter Act) toward these fuel purchases to help defray any domestic feedstock costs that would have caused the final alternative fuel to not be price-competitive with conventional fuels. In addition, DOE provides direct financial support for future alternative jet fuels production through its integrated biorefineries program, which was initiated in 2005. Under the program, DOE’s Bioenergy Technologies Office (BETO) works in partnership with industry to develop, build, operate, and validate integrated biorefineries at various scales (pilot, demonstration, and commercial).
The purpose of these projects is to provide federal support to private industry to help bridge the gap between promising R&D scientific advancements and commercial-scale production by validating fuel conversion technologies at progressively larger scales. According to BETO, federal financial support is essential to help offset the technical and financial risks associated with producing alternative fuels at commercial scale. According to BETO officials, DOE has obligated almost $198 million for 14 integrated biorefinery projects related to the development or use of alternative jet fuels. For example, it obligated about $50 million to a fuel producer to demonstrate the technical and economic feasibility of refining algal oil into gasoline, diesel, and jet fuel. Because of private industry’s indispensable role throughout the alternative jet fuel supply chain—such as producing feedstock and fuel—it is critical that the federal government’s activities are coordinated with external stakeholders. As a result, USDA, DOE, FAA, and DOD participate in a variety of coordination efforts, such as partnerships with industry and other stakeholders, to identify opportunities to work toward common goals and needs. For example, FAA and other federal agencies participate in the Commercial Aviation Alternative Fuels Initiative (CAAFI), a public-private partnership formed in 2006 to facilitate the development and deployment of drop-in alternative jet fuels that are intended to reduce all aviation emissions, improve price stability, and support supply security. Key CAAFI efforts have included developing and sharing user guides and tools, as well as organizing workshops for alternative fuel producers and other stakeholders. For example, in December 2013, CAAFI published a user’s guide to help alternative jet-fuel producers understand and comply with ASTM International’s process to test and approve new alternative jet fuels.
The partnership also developed a “Path to Alternative Jet Fuel Readiness” tool that describes the testing and environmental evaluations required to show a new alternative jet fuel’s suitability for aviation use and how to best facilitate ASTM International approval. In addition, in January 2013 and January 2014, CAAFI conducted workshops on current regulatory, voluntary, and research efforts related to alternative jet-fuel sustainability issues. Among other things, 2013 workshop participants identified the need to understand and reconcile differences among various approaches to calculating life-cycle greenhouse gas emissions that result from producing alternative jet fuels, and 2014 participants began assessing the differences. USDA coordinates with private industry and other governmental stakeholders through the FARM to FLY initiative, which was initially established in July 2010 to accelerate the availability of a commercially viable and sustainable domestic alternative jet-fuels industry, increase domestic energy security, establish regional supply chains, and support rural development. USDA expanded the initiative by signing a 5-year FARM to FLY 2.0 resolution with CAAFI, FAA, Airlines for America, and others. Under the expanded resolution, participants agreed to designate personnel for a working group tasked with assessing and proposing ways to support FAA’s goal of using 1 billion gallons of alternative jet fuels by 2018. The working group plans to issue a final report by the end of 2018. In addition to the two efforts described above, federal agencies participate in a variety of other coordination efforts, including the following. USDA, DOE, FAA, and DOD work with industry and other stakeholders through regional initiatives that are aimed at advancing alternative fuels within specific regions of the United States.
DOE’s BETO co-sponsored a September 2013 workshop to obtain input from industry, academia, and other experts on optimizing and integrating the use of natural gas and biomass to produce liquid transportation fuels, including alternative jet fuels. FAA has signed onto or agreed to engage in activities under international cooperative agreements with four countries: Australia (2011), Brazil (2011), Germany (2012), and Spain (2013). Under each of these agreements, FAA agreed to share information about R&D efforts, fuel testing or approval requirements, and environmental or sustainability studies, among other things. According to FAA, these international partnerships contribute to FAA’s ongoing efforts to support approval of additional sustainable alternative jet fuels by ASTM International. For example, representatives from all four countries participated and shared information about their respective initiatives at a recent meeting sponsored by CAAFI. Also, FAA and German Ministry of Transport officials recently participated in a technical and coordination exchange to share details on fuel testing and approval activities to identify complementary activities, among other things. The dates FAA and DOD have established for meeting their alternative jet-fuel usage goals are several years or more away, and, to date, all alternative jet fuels purchased in the United States have been for fuel testing, approval, or demonstration activities, not for day-to-day operations. For example, in November 2011, two domestic airlines purchased alternative jet fuels for a limited number of commercial flights. According to DOD officials, DOD purchased about 150,000 gallons in fiscal year 2012 (about 1.5 million total gallons since fiscal year 2007) of alternative jet fuels, all for fuel testing and approval activities, including about 100,000 gallons for the Navy’s Great Green Fleet demonstration in July 2012.
Commercial and military use is constrained because alternative jet fuels are not yet produced at commercial scale at a price that is competitive with conventional jet fuel. While FAA officials acknowledged that FAA’s usage goal is “aspirational,” they noted that alternative jet fuel use could increase substantially once the industry is capable of producing alternative jet fuels at a commercial scale and at a price that is competitive with conventional jet fuels. Currently, the price for alternative jet fuels exceeds that of conventional jet fuel. Jet fuel end users—both commercial airlines and DOD—are extremely price sensitive when making purchasing decisions. Fuel purchasers are either unwilling to pay a premium for alternative jet fuels as compared with conventional jet fuel or, in the case of DOD, are precluded by law and department policy from doing so. The actual price differential depends on the feedstock and the production process used to produce the alternative jet fuel, as well as on fuel distribution and the quantities produced. For fuel produced using the two alternative jet-fuel production processes approved for use in commercial and military aircraft (Fischer-Tropsch and HEFA), DOD paid from about $3 to $150 per gallon, according to a DOD official. These prices, however, reflect purchases of small quantities of fuel for testing and approval activities, which, according to government officials and a fuel producer we interviewed and literature we reviewed, are higher than the price would be if the quantities were produced at a commercial scale. A study conducted by one of FAA’s COEs in March 2013 estimated that alternative jet fuels produced on a commercial scale using the HEFA process would require a subsidy of $0.35 to $2.86 per gallon to be price-competitive with conventional jet fuels in 2020. Recent developments indicate that alternative jet fuel use may increase in the future, which could contribute to achieving FAA’s and DOD’s usage goals.
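A subsidy estimate of this kind rests on simple price-parity arithmetic: the gap between an alternative fuel's per-gallon cost and the conventional jet-fuel price. The sketch below illustrates that arithmetic only; all dollar figures are hypothetical assumptions, not values from the COE study.

```python
# Price-parity sketch: the subsidy needed per gallon is the amount by which
# the alternative fuel's cost exceeds the conventional price (zero if the
# alternative fuel is already cheaper). All figures below are illustrative.

def required_subsidy(alt_cost_per_gal: float, conv_price_per_gal: float) -> float:
    """Per-gallon subsidy for an alternative jet fuel to match the
    conventional jet-fuel price."""
    return max(0.0, alt_cost_per_gal - conv_price_per_gal)

# Assumed numbers: HEFA fuel produced at $5.50/gal vs conventional at $3.00/gal
print(required_subsidy(5.50, 3.00))  # 2.5 (dollars per gallon)
print(required_subsidy(2.80, 3.00))  # 0.0 -- already price-competitive
```

The same calculation run across a range of assumed production costs would reproduce a subsidy band like the $0.35 to $2.86 range the study reports.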
For example, as of January 2014, seven new potential alternative jet-fuel production processes are undergoing review by ASTM International for approval—at least one of which FAA officials told us may be approved by June 2014. According to a couple of government officials and an industry representative whom we spoke with, a range of approved production processes could diversify and expand the future supply of alternative jet fuels. Another potential alternative jet-fuel production process that will be submitted to ASTM for approval involves using a type of renewable fuel that is currently used in ground transportation. Because production capacity already exists for this fuel, it could be made available more quickly to meet demand from the aviation industry. In addition, two airlines—United Airlines and Alaska Airlines—have entered into agreements, known as “off-take agreements,” to purchase alternative jet fuels from fuel producers’ future production. We interviewed 23 academic, federal government, and private-industry stakeholders with expertise in various segments of the supply chain to help identify challenges to developing and using alternative jet fuels (see app. I for more information on the criteria used to select the stakeholders interviewed). Through interviews with these stakeholders using open-ended questions, we identified two major factors—high development costs for alternative jet fuels and uncertainty with respect to federal regulations and policies—as the primary contributors to the overarching challenge that alternative jet fuels are not commercially available at a price that is competitive with conventional jet fuel. Almost all of the stakeholders whom we interviewed (22 of 23) cited at least one factor related to high development costs.
While one of these stakeholders discussed the challenges associated with high development costs broadly, the remaining 21 of them highlighted development cost challenges associated with specific supply-chain segments. Feedstock production: Stakeholders we interviewed most commonly cited the high cost of feedstock in connection with the first segment of the supply chain (15 of 23 stakeholders). Five of these stakeholders noted that for fuel produced using the HEFA production process, the cost of some types of feedstock—even before it is transported or converted—currently exceeds that of conventional fuel. For example, when comparing conventional jet-fuel prices reported by the Energy Information Administration and soybean oil prices reported by the World Bank between 1990 and 2012, the price per gallon of soybean oil exceeded the price per gallon of conventional jet fuel in almost every year. In addition, an increase in demand for alternative jet fuels could increase the derived demand for feedstocks (as alternative jet-fuel producers increase their production output), and the price of feedstocks could rise. Six stakeholders noted that the expansion of low-cost natural gas production in the United States could help lower production costs for alternative fuels derived from nonrenewable sources, such as natural gas. However, jet fuel produced from nonrenewable sources, such as natural gas, does not meet the statutory definition of “renewable biomass” and therefore could not generate RINs. Expanding the production of different types of renewable feedstocks could also help lower feedstock production costs, but three stakeholders noted that the agriculture community does not have much experience in growing crops that could be used to produce alternative fuel, and farmers are hesitant to grow those crops without a guarantee that they can be sold or certainty that an energy crop would be more profitable than what they could otherwise grow.
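Comparing a commodity oil price with a jet-fuel price requires converting the quoted dollars per metric ton into dollars per gallon using the oil's density. A rough sketch of that conversion follows; the density is approximate and both prices are assumed for illustration, not taken from the EIA or World Bank series cited above.

```python
# Unit-conversion sketch: commodity vegetable oils are priced per metric ton,
# jet fuel per gallon. Density and prices here are illustrative assumptions.

LITERS_PER_GALLON = 3.785
SOY_OIL_DENSITY_KG_PER_L = 0.92  # approximate density of soybean oil

def price_per_gallon(price_per_metric_ton: float) -> float:
    """Convert a $/metric-ton oil price to $/gallon."""
    kg_per_gallon = SOY_OIL_DENSITY_KG_PER_L * LITERS_PER_GALLON
    return price_per_metric_ton / 1000.0 * kg_per_gallon

# Assumed: soybean oil at $1,100/metric ton vs jet fuel at roughly $3.00/gal
print(round(price_per_gallon(1100), 2))  # ~3.83 -- the raw feedstock alone
                                         # costs more per gallon than jet fuel
```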
Feedstock logistics: While the logistics differ depending on the type of feedstock, one private-industry stakeholder and studies we reviewed explained that feedstock used to produce alternative fuels is generally costly to collect, store, handle, and transport. For example, oil-producing feedstocks—such as camelina, soy, or jatropha—require special handling, including proper moisture and temperature conditions for storage and cleaning, drying, and de-hulling before the oil is extracted from the plant. Some other feedstock crops, such as wood residues or switchgrass—which are fibrous, have a low energy density, and have variable moisture content—are costly to collect, store, and transport because of these characteristics. Moreover, to be more cost-effective, these feedstocks may need to be grown near fuel production facilities; otherwise, they may need to be shipped by bulk freight transportation (such as by rail or pipeline), which increases the transportation cost. That is why a demonstration-scale ethanol biorefinery we visited, which anticipates producing alternative jet fuels, acquires its feedstock (woody biomass) from a poplar tree plantation less than 10 miles away (see fig. 2). Operators of the biorefinery told us that the close proximity of the feedstock source to the biorefinery helps reduce its feedstock logistics costs. Fuel production: More than half of the stakeholders we interviewed (14 of 23), as well as literature we reviewed, indicated that the high costs associated with transitioning to commercial-scale production—such as the capital costs required to construct a commercial-scale production plant—are a key contributing factor affecting the cost of producing alternative jet fuels.
For example, one study conducted by the National Research Council estimated that the costs to construct a single biorefinery converting biomass into a liquid transportation fuel using different conversion technologies range from $200 million to $606 million. Stakeholders (5 of 23) also noted that the capital investment costs for constructing alternative fuel-production plants would be even higher when producing fuel from nonrenewable feedstock, such as natural gas. Ten stakeholders we interviewed highlighted fuel producers’ difficulty in obtaining the private investment needed to help construct commercial-scale alternative fuel production plants. According to stakeholders and literature we reviewed, private financiers are hesitant to invest, in part, because of risks associated with the uncertainty about access to a steady supply of feedstock, high feedstock and capital costs, and an unwillingness on the part of fuel end users to pay a premium price for alternative jet fuels. This is generally true of many capital-intensive start-ups, including other renewable energy industries; four stakeholders noted to us, for example, that the ethanol industry would not be as commercially viable today without considerable federal support. One private-industry stakeholder whom we spoke to noted that fuel producers learn and adapt their processes as they gain experience building and operating commercial-scale production plants. In other words, once fuel producers construct a commercial-scale plant and begin operating it, they can work to create efficiencies in the production process to reduce costs in other ways. Another private-industry stakeholder underscored that the amount of time and funding it takes to move from a good idea in the lab to commercial-scale production is substantial.
Another stakeholder highlighted the cyclical nature of the challenge—that is, fuel producers typically require outside financing to construct a commercial-scale plant that can create efficiencies sufficient to decrease costs, while a private financier is hesitant to invest funds unless the producer can lower development costs and guarantee that the fuel price will be competitive with the price of conventional fuels. Fuel testing and approval: Ten stakeholders, as well as literature we reviewed, explained that the time and testing requirements associated with the testing and approval process for alternative jet fuels add to the cost—in part because alternative jet fuels, in contrast to alternative fuels used for other purposes such as surface transportation, require a more rigorous testing and approval process. According to an industry report, ASTM International’s testing and approval process can last as long as 3 years and cost upwards of $30 million. One stakeholder explained that the fuel testing and approval process requires producers to demonstrate that their production processes are “robust and repeatable” and can reliably control product quality. With regard to the time required to reach fuel approval, the commercial and military approval process for HEFA generally took about 3 years. One stakeholder highlighted that given the amount of time required to get approval for alternative jet fuels, producers may opt to produce other products, such as diesel, that they can get to market more quickly. In addition, as an alternative fuel progresses through the testing and approval process, the sequence of tests—ranging from laboratory tests on the fuels to potentially full-scale aircraft tests—requires increasing quantities of fuel to conduct. Some stakeholders (5 of 23) elaborated that since most fuel producers are generally companies with limited funds and small-scale operations, it is extremely costly for them to produce fuel in large quantities.
For both the Fischer-Tropsch and HEFA approval processes, the federal government has in some cases provided funding—including purchasing fuels for testing and providing the equipment needed to conduct the tests—to help relieve some of the costs for producers. Air Force officials estimated that DOD generated about 80 percent of the testing data for past approvals, while private parties are leading most of the current fuel certifications. Four government and private-industry stakeholders whom we interviewed, as well as other FAA and DOD officials, expressed concerns that recent cuts to the Air Force Alternative Fuels Certification Division overseeing testing and approval will add to the time and cost of getting additional alternative jet fuels approved. Specifically, the Air Force’s Alternative Fuels Certification Division was eliminated in fiscal year 2013, and the funding for the other Air Force division involved in fuel testing and certification—the Air Force Research Lab—is also being cut. A senior Air Force Energy official noted that the Air Force plans to request that funding for its fuel testing and approval activities be restored once the budget situation has improved. The official did not know what the consequences of the recent cuts would be but indicated that because none of these alternative jet fuels would be immediately commercially available, the short-term impact would be minimal. As discussed above, federal policies and regulations can support the development and use of alternative jet fuels, but uncertainty regarding the future of this federal support may limit the benefit that these policies provide to the alternative jet-fuels industry.
More than half of the stakeholders we interviewed (13 of 23) indicated that continued uncertainty in federal regulations and policy contributes to the overarching challenge of making the price of alternative jet fuels competitive with the price of conventional jet fuels, a situation that undermines the viability of the alternative jet-fuels industry relative to conventional jet fuel. Specifically, these stakeholders cited uncertainty about the RFS and federal tax expenditures as a challenge to developing and using alternative jet fuels. RFS: Government, academic, and private-industry stakeholders (8 of 23) highlighted legal and political challenges to the RFS, which create uncertainty about requirements in the future. These challenges include multiple lawsuits filed regarding the validity of the program’s volumetric requirements and political opposition to the program from some lawmakers. The uncertainty about the future of the RFS contributes to private financiers’ hesitancy to invest in the biofuels industry, including alternative jet fuels. Even though renewable jet fuels produced from specified feedstocks and conversion processes can qualify as a renewable fuel to meet the advanced biofuels requirement of the RFS, EPA officials reported that no RINs for the production of renewable jet fuel had been generated as of January 2014. However, four stakeholders noted that alternative jet-fuel producers could use RIN credits to help offset fuel development costs and to make the fuel’s price more competitive with the price of conventional jet fuel. Four stakeholders, as well as a private financier whom we spoke with, noted that private financiers generally discount, wholly or partly, any potential RIN credit value when evaluating a fuel producer’s financial prospects and deciding whether to invest because of uncertainty about the future of the program.
Discounting the value of a RIN credit in financial models makes the investment in an alternative fuel producer look less profitable and overall less attractive than the same investment would look with a stable and fully valued RIN credit. Thus, while the uncertainty does not increase the actual price of alternative jet fuel, it may hinder investments that could make alternative jet fuels more price-competitive with conventional jet fuels. Two other stakeholders pointed to EPA’s recent proposal to reduce the total advanced biofuels standard in 2014 under the RFS as indicative of this uncertainty. Specifically, one stakeholder highlighted that when EPA uses its statutory authority to reduce the statutory standards, for example, the volume standards under the RFS, it creates uncertainty, which ultimately makes supporting the research, development, and production of alternative fuels, including alternative jet fuels, less attractive to private financiers. Tax expenditures: Two private-industry stakeholders pointed out that tax credits to incentivize alternative jet fuels and general biofuels investment and production are authorized for short periods of time, such as 1 year, and on occasion were not renewed. For example, since its enactment in 2004, the biodiesel tax credit expired in 2010, 2011, and again, most recently, on December 31, 2013. This introduces uncertainty and, similar to the RIN credits, generally causes private financiers to minimize or discount the tax expenditures’ value when assessing fuel producers’ future expenses and profitability. Literature we reviewed highlighted the potential for stable federal tax policy to contribute to the growth of the renewable energy sector. For example, one study noted that investments in new commercial-scale wind and solar power production facilities were fostered by production and investment tax credits, respectively.
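The discounting effect described above can be illustrated with a toy net-present-value model in which expected RIN revenue is scaled by the financier's confidence that the credit will persist. All cash flows, rates, and the RIN revenue figure below are hypothetical assumptions, not data from the report.

```python
# Toy NPV sketch of RIN discounting: the financier scales expected RIN
# revenue by a confidence factor before valuing the producer. All inputs
# are illustrative (amounts in $ millions per year).

def npv(cash_flows, rate):
    """Net present value of a list of cash flows, one per year (t = 0, 1, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def producer_npv(base_margin, rin_value, rin_confidence, years, rate, capex):
    # Expected annual revenue: base fuel margin plus RIN revenue weighted
    # by the financier's confidence that the program survives.
    annual = base_margin + rin_value * rin_confidence
    return npv([-capex] + [annual] * years, rate)

# Assumed: $10M/yr base margin, $4M/yr potential RIN revenue,
# 10-year horizon, 8% discount rate, $60M plant capital cost.
full = producer_npv(10, 4, 1.0, 10, 0.08, 60)        # RIN fully valued
discounted = producer_npv(10, 4, 0.25, 10, 0.08, 60)  # RIN heavily discounted
print(round(full, 1), round(discounted, 1))
```

With these assumed inputs, the project looks markedly less attractive when the RIN is discounted, even though nothing about the fuel's actual cost has changed, which is the mechanism the stakeholders describe.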
We also found that the long-term ethanol tax credit was important in creating a profitable ethanol industry when the industry had to fund investment in new facilities. Stakeholders and studies we reviewed identified a variety of actions that could assist in the development of alternative jet fuels, ranging from continuing the current federal efforts to providing greater regulatory and policy certainty to providing greater financial support. Stakeholders identified a variety of federal actions as being the most critical for the federal government to take. Research and development efforts: A majority of the government, academic, and private-industry stakeholders that we interviewed (19 of 23) generally agreed that the federal government should continue research and development efforts to advance the development and use of alternative jet fuels. Seven of the government, academic, and private-industry stakeholders suggested, however, that the federal government could be more targeted in its feedstock research and development efforts, such as finding scientific breakthroughs in converting algae and cellulosic feedstocks into fuels; developing more feedstock options; and identifying which feedstocks may be particularly well suited to producing alternative jet fuels. Five private-industry stakeholders noted their support of the federal government’s current efforts related to fuel testing and approval activities. For example, FAA and DOD have helped or plan to help fund specific fuel tests for seven new potential alternative jet-fuel processes that, as of January 2014, are undergoing review by ASTM International for approval. According to federal agency officials, they anticipate that at least one of these new processes may be approved by June 2014.
Four government and private-industry stakeholders proposed that the federal government use and increase access to its jet engine assets for testing purposes or help streamline the fuel-testing and approval process for commercial use. For example, one of these government stakeholders suggested that the federal government use an outside contractor to manage the fuel-testing and approval process, with subcontracts with facilities for fuel testing, so fuel samples would be sent to a centralized location. All entities with a role in the fuel-testing and approval process (laboratories, engine and aircraft manufacturers, and administrative functions) could then coordinate and agree upon a final approval determination within several months. Ultimately, this approach, the stakeholder noted, could help shorten ASTM International’s approval timeline, lower fuel testing costs, and add clarity and regularity to the process. Three stakeholders believed that expanding the range of approved production processes would diversify and expand the future supply of alternative jet fuels. Regulatory and policy certainty: More than half of the government, academic, and private-industry stakeholders we interviewed (15 of 23) generally agreed that the federal government should provide greater regulatory and policy certainty to support the development and use of alternative jet fuels. The most commonly cited actions (12 of 23 stakeholders) were providing greater certainty that the renewable fuel program will not be repealed and minimizing year-to-year changes, specifically reductions, in mandated volumes. Multiple stakeholders (8 of 23) told us that this may encourage private financiers to value RIN credits when making investment decisions. Ultimately, these stakeholders believe that this certainty would attract more private investment in the industry and help it advance.
Other actions mentioned by stakeholders included expanding the RFS to cover additional types of feedstocks that would generate a RIN or mandating a minimum volume standard for alternative jet fuels. EPA officials told us, however, that because the definition of transportation fuel in Section 211(o) of the Clean Air Act, which pertains to the RFS, does not include jet fuels, having fuel refiners or importers include the jet fuel they produce or import in determining their annual renewable fuel obligation would require Congress to revise the program. Two stakeholders noted that some tax expenditures that are authorized for short periods of time, such as 1 year, should be authorized for a longer period, such as a minimum of 10 years. They said that this would be helpful in spurring additional private investment in producing alternative fuels, including alternative jet fuels. Researchers currently studying the impacts of government subsidies on this industry told us that government mandates, such as the RFS, and tax subsidies, as well as their duration, are key factors in making the production of alternative jet fuels profitable. Direct financial support: A majority of stakeholders (15 of 23) and literature we reviewed also highlighted that greater financial support for alternative jet fuels derived from both renewable and nonrenewable sources would advance the industry. The two most common potential federal government actions cited by stakeholders were entering into long-term contracts for fuel purchases (6 government, academic, and private-industry stakeholders) and stabilizing existing federal direct support programs (6 government and private-industry stakeholders). Both of these actions would provide greater certainty and reduce some of the risks of private investment.
Currently, DOD has statutory authority to enter into certain contracts procuring services or property, including contracts for the purchase of alternative fuels, for up to 5 years with an option to extend the contracts up to 10 years. However, stakeholders we interviewed, as well as industry experts, said that the length of the initial contract period is too short to stimulate the private capital market or to encourage potential alternative fuels suppliers to construct or expand production facilities. Alternative fuels producers have told DOD that initial contracts for fuel purchases of at least 10 years in duration would help advance the industry beyond the small production volumes currently planned. Because this would require a statutory change to DOD’s authority to contract for jet fuel purchases, DOD has drafted legislative proposals over the past several fiscal years that would allow it to enter into longer-term contracts. One proposal was advanced for congressional consideration but was not adopted. A senior DOD official told us that under current law, DOD would be required to obligate sufficient funds in the first year of a long-term contract—for example, one 10 years in length—to pay for the total guaranteed minimum purchases over the duration of the contract, unless it received a specific statutory exemption to do otherwise. Without such an exemption, according to this official, long-term contracts would have a major effect on DOD’s budgets and obligational authority. Also, according to a senior DOD official, obtaining and exercising the authority to enter into longer-term contracts could commit DOD to a particular type of alternative jet-fuel production process at a time when technological advancements in this industry are changing quickly and could yield a newer and potentially less expensive production process that DOD would not be able to take advantage of. This DOD official also noted that conventional fuel providers prefer 1-year fuel purchase contracts.
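The budget effect the official describes follows from simple arithmetic: absent a statutory exemption, the full guaranteed minimum over the contract's life must be obligated in year one. A back-of-envelope sketch follows; the gallon quantity and price are assumed for illustration and do not come from any DOD contract.

```python
# Upfront-obligation sketch: for a long-term contract with a guaranteed
# minimum purchase, current law would require obligating funds for the
# entire guaranteed amount in the first year. All inputs are illustrative.

def first_year_obligation(min_gallons_per_year: float,
                          price_per_gallon: float,
                          contract_years: int) -> float:
    """Funds obligated in year one to cover the guaranteed minimum
    purchases over the whole contract term."""
    return min_gallons_per_year * price_per_gallon * contract_years

# Assumed: 5 million gallons/year guaranteed at $4/gal over a 10-year contract
print(first_year_obligation(5_000_000, 4.00, 10))  # 200000000.0 -- $200M up front
```

An annual contract with the same terms would obligate only one-tenth of that amount each year, which is why the upfront requirement weighs so heavily on DOD's budget and obligational authority.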
Thus, if DOD had and exercised authority to issue longer-term solicitations for bulk fuel purchases, the department could find itself subject to a bifurcated procurement strategy in which the majority of its fuel contracts would be 1-year contracts with conventional fuel providers and the remainder would be longer-term contracts with alternative fuel providers. Some stakeholders (6 of 23) suggested ensuring stability in the funding stream for the existing Advanced Drop-In Biofuels Production Project, discussed previously, which was to be jointly funded by USDA, DOE, and DOD. To date, the funding put toward the project is less than the intended amount of $170 million from each federal agency, for an aggregate total of $510 million. According to DOD officials, $100 million in fiscal year 2012 funds was applied to this project. For fiscal year 2013, the explanatory statement for the Consolidated and Further Continuing Appropriations Act, 2013, listed an additional $60 million for this purpose, which was conditioned by a provision in the authorization act providing that the funds appropriated would not be obligated or expended until matching funds were received from DOE and USDA. Only in fiscal year 2014 did DOE receive specific authorization to contribute $45 million, which will be applied to Phase 2 of the project. According to a senior USDA official, while USDA was apportioned about $23 million in fiscal year 2013 for these activities, it has not expended any funds to date toward this initiative. In May and June 2013, four private fuel producers were selected to receive awards totaling $20.5 million for Phase 1. The 23 stakeholders whom we interviewed had varying views about the future of the alternative jet-fuels industry if current federal activities continue at the same level and no additional federal government action is taken.
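As a rough check on the funding picture described above, the amounts cited for the Advanced Drop-In Biofuels Production Project can be tallied against the $510 million aggregate goal. This sketch is illustrative only: whether the conditioned or unexpended amounts will ultimately be applied to the project is uncertain.

```python
# Illustrative tally of funding identified to date for the Advanced
# Drop-In Biofuels Production Project, using the amounts cited in the
# report. Conditioned or unexpended amounts may never be applied.
TARGET_PER_AGENCY = 170_000_000        # intended contribution per agency
TARGET_TOTAL = 3 * TARGET_PER_AGENCY   # $510 million aggregate goal

identified = {
    "DOD, FY2012": 100_000_000,
    "FY2013 appropriation (conditioned on matching funds)": 60_000_000,
    "DOE, FY2014 (Phase 2)": 45_000_000,
    "USDA, FY2013 (apportioned, not yet expended)": 23_000_000,
}

total_identified = sum(identified.values())
shortfall = TARGET_TOTAL - total_identified
print(f"Identified to date: ${total_identified:,}")  # $228,000,000
print(f"Shortfall vs. goal: ${shortfall:,}")         # $282,000,000
```

Even counting every amount identified, less than half of the intended aggregate total has been put toward the project.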
Specifically, 5 stakeholders believed that the industry would remain commercially unviable—that is, continue to produce small quantities of fuel at prices that are not competitive with those of conventional jet fuels. Another 6 stakeholders believed that the alternative jet-fuels industry would continue to progress, but at a slow pace. The remaining 12 stakeholders did not articulate a specific prediction for the future of the alternative jet-fuels industry. Some highlighted their concerns that existing policies and programs—such as the RFS or the Advanced Drop-In Biofuels Production Project—would be repealed, while others expressed pessimism about the future of the industry if the federal government does not address larger industry-related economic or policy challenges. More than half of the stakeholders (15 of 23) highlighted that market factors, such as the favorable economics for developing competing products (e.g., diesel), will ultimately be a key factor in determining the long-term success of the alternative jet-fuels industry. These stakeholders highlighted three key market factors—favorable economics for competing end-use products or co-products, dependence on commodity markets, and the cost of conventional jet fuel—that they believe will affect the future prospects for the alternative jet-fuels industry. The remaining 8 stakeholders did not offer comments on market factors. Favorable Economics for Competing End Products: Nine stakeholders, as well as literature we reviewed, highlighted that end-use products or co-products (such as diesel fuel, naphtha, cosmetics, and plastics) from the same production processes used to produce alternative jet fuels are often cheaper and easier to produce and therefore more profitable than alternative jet fuels.
For example, a study of the HEFA process, funded in part through one of FAA’s COEs, found that if the production goal was to maximize the total amount of all types of liquid fuel, rather than specifically jet fuel, then less than 13 percent of the product mix output would be jet fuel, while almost 70 percent would be diesel; the remaining product mix would consist of propane, naphtha, and liquefied petroleum gas. Maximizing the amount of jet fuel produced would reduce a fuel producer’s profitability due to higher operating costs and lower revenues. Two stakeholders we spoke with who have expertise in fuel production told us that they choose to produce more of the other end products over alternative jet fuels because the other products are more profitable. For example, one stakeholder stated that he has sold renewable diesel in one state at a price premium and that market demand is higher for renewable diesel. Dependence on Commodity Markets: Because some alternative jet fuels are made from tradable commodities, the cost of jet fuel production depends on prices in commodity markets. As noted earlier, the price of soybean oil—an input to alternative fuels—has historically exceeded the price of conventional jet fuel. Consequently, it has been impossible for a producer of alternative jet fuels that uses the HEFA production process and soybean oil as a feedstock to compete on price alone with conventional jet fuels, even if the producer’s other production and transportation costs were negligible. Furthermore, in many instances, the input commodities (feedstock) have alternative uses. For example, oil-producing and cellulosic feedstocks can be used to generate heat, power, and other ground transportation fuels. Therefore, an increase in demand for these feedstocks in alternative uses could raise their price and the costs of producing alternative jet fuels.
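The HEFA product-mix figures discussed above imply the rough output split sketched below. The exact shares are illustrative rounded values (the study reports "less than 13 percent" jet fuel and "almost 70 percent" diesel), and the 100-million-gallon annual output is a hypothetical figure chosen only to show the scale of jet-fuel yield.

```python
# Illustrative output split for a HEFA run configured to maximize total
# liquid fuel, based on the rounded shares cited from the COE-funded study.
shares_pct = {
    "jet fuel": 13,  # study: "less than 13 percent"
    "diesel": 70,    # study: "almost 70 percent"
}
# The remainder covers propane, naphtha, and liquefied petroleum gas.
shares_pct["propane, naphtha, and LPG"] = 100 - sum(shares_pct.values())

# Hypothetical annual output, to illustrate the jet-fuel yield at scale.
TOTAL_GALLONS = 100_000_000
jet_gallons = TOTAL_GALLONS * shares_pct["jet fuel"] // 100

print(shares_pct["propane, naphtha, and LPG"])  # 17
print(jet_gallons)                              # 13000000
```

Under these shares, a producer maximizing total liquid fuel would obtain only about 13 million gallons of jet fuel per 100 million gallons of output, which helps explain why producers favor the other end products.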
Cost of Conventional Jet Fuels: Increases in the supply of conventional jet fuels would make it harder for alternative fuels to compete based on price alone. And although international petroleum markets heavily influence the prices of conventional fuels, which can be volatile and difficult to predict, domestic policies can affect jet fuel supply and fuel prices. For example, five stakeholders highlighted that there has been significant previous federal investment in establishing the conventional petroleum industry, such as through long-standing federal tax expenditures that encourage exploration and drilling for conventional petroleum oil. In addition, two stakeholders highlighted that the price of conventional jet fuel does not reflect its full life-cycle cost. Costs not reflected in the price of conventional jet fuel could include environmental externalities, such as the impact of greenhouse gas emissions from aviation on climate change, or direct negative effects on human health from the combustion of jet fuel. If the full life-cycle costs of conventional and alternative jet fuels were compared, alternative jet fuels could be more cost competitive. However, there is no globally recognized approach for determining the greenhouse gas effects of renewable fuels and the magnitude of any greenhouse gas reductions attributable to their production and use. We provided DOT, USDA, DOE, DOD, and EPA with a draft of this report for their review and comment. DOT provided technical comments that we incorporated as appropriate. In addition, in comments emailed to us, DOT highlighted that, alongside its federal agency partners, it is fully committed to the development and use of sustainable alternative jet fuels to address the nation’s energy security, economic development, and environmental needs.
DOT also stated that it believes it has taken a comprehensive approach to overcoming barriers to the development and deployment of sustainable alternative jet fuels that are drop-in replacements for fuels derived from petroleum. DOT added that these fuels hold great promise and are an essential component of ensuring that the flying services the nation relies upon today remain affordable and available into the future. USDA, DOE, DOD, and EPA also provided technical comments that we incorporated as appropriate. In addition, EPA provided written comments, reprinted in appendix II, stating that the report’s findings related to approved jet fuel pathways under the RFS are accurate. We are sending copies of this report to interested congressional committees and the Secretaries of Transportation, Agriculture, Energy, and Defense, and the Administrator of the EPA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Gerald Dillingham at (202) 512-2834 or dillinghamg@gao.gov or Zina Merritt at (202) 512-5257 or merrittz@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report examines (1) the role of the federal government in the development and use of alternative jet fuels and (2) key challenges to developing and using alternative jet fuels and actions that the federal government plans to or could take to help address those challenges. To examine the role of the federal government, we first identified the federal agencies most involved in the development or use of alternative jet fuels through review of relevant documents and preliminary interviews with stakeholders.
We selected five federal agencies: the Department of Transportation’s (DOT) Federal Aviation Administration (FAA), the Department of Agriculture (USDA), the Department of Energy (DOE), the Department of Defense (DOD) and its military departments (Army, Navy, and Air Force), and the Environmental Protection Agency (EPA). For each of these five selected federal agencies, we interviewed officials and reviewed strategic plans, performance reports, and other relevant documents to obtain information about key federal programs, initiatives, or goals that targeted the development or use of alternative jet fuels. We also reviewed literature related to alternative jet fuels and interviewed other stakeholders, including representatives from private industry, such as fuel producers and airlines, as well as representatives from public-private partnerships. To identify activities that federal agencies are involved in to coordinate their alternative jet fuel-related activities with other federal agencies, private industry, and stakeholders, we reviewed relevant memoranda of understanding; international cooperative agreements; and reports from public-private partnerships—such as the Commercial Aviation Alternative Fuels Initiative—and regional initiatives; and we interviewed officials from the five selected federal agencies and relevant interagency working groups. In addition, to identify broad federal strategies and initiatives related to alternative fuels generally, we reviewed key White House and other relevant government-wide documents, including the National Plan for Aeronautics Research and Development and Related Infrastructure (December 2007); Growing America’s Fuels strategy (February 2010); the Blueprint for a Secure Energy Future (March 2011); the National Bioeconomy Blueprint (April 2012); and the President’s Climate Action Plan (June 2013).
Finally, we also reviewed applicable federal laws and regulations related to alternative transportation fuels, including the Defense Production Act of 1950; Commodity Credit Corporation Charter Act; Energy Policy Act of 2005; Energy Independence and Security Act of 2007; the Food, Conservation, and Energy Act of 2008; the FAA Modernization and Reform Act of 2012; and EPA’s regulations for the renewable fuel program. We focused our study on federal efforts, except to the extent that international or private-sector efforts are coordinated with federal efforts. We did not provide an exhaustive list of federal initiatives; rather, we discussed key programs or initiatives that federal agency officials told us and our review of agency documents identified as playing a key role in supporting the development or use of alternative jet fuels or in achieving related goals. To examine key challenges to developing and using alternative jet fuels and actions that the federal government plans to or could take to help address those challenges, we (1) identified the extent to which alternative jet fuel has been purchased for commercial and military use in the United States; (2) selected and interviewed 23 stakeholders representing government, academia, and the private sector to obtain their views on key challenges and planned or possible federal actions; (3) reviewed relevant literature on challenges to developing and using alternative jet fuels to help corroborate the views obtained from the 23 stakeholders; and (4) interviewed officials from the five selected federal agencies, as well as representatives from other non-federal entities involved in the alternative jet-fuels industry. To identify the extent to which alternative jet fuel has been purchased for commercial and military use, we obtained alternative jet-fuel purchase information for fiscal year 2012 from Airlines For America—a domestic airline industry group—and for fiscal years 2007 to 2013 from DOD. 
While we obtained actual quantities from Airlines For America and from DOD, we did not assess the data’s reliability because the data are not material to our findings; we report the fuel purchase quantities only as orders of magnitude. As part of our methodology for selecting the 23 stakeholders, we first identified a list of potential stakeholders by reviewing background information, including federal agency documents; articles published in scholarly journals; and documents produced from conferences and by regional initiatives related to alternative jet fuels. In addition, we considered names of stakeholders recommended during initial interviews we conducted with federal government officials, as well as representatives from professional associations and private industry. We selected the 23 stakeholders representing government, academia, and private industry based on criteria that included: type and depth of experience and knowledge in the area of alternative jet fuels; recognition in the professional community; relevance of published work to the scope of our review; representation of a range of expertise across the alternative jet fuel supply chain, such as feedstock development or fuel production; representation of a range of stakeholders with knowledge about alternative fuels derived from renewable, nonrenewable, or both sources; and representation of a range of stakeholders with knowledge about alternative fuel use in commercial, military, or both settings. We asked each of the 23 stakeholders a series of semi-structured, open-ended questions about economic, policy, technological, and other key challenges related to developing and using alternative jet fuels and actions that the federal government plans to or could take to help address them. We synthesized the stakeholders’ views to identify categories of key challenges and planned or potential federal actions to help address those challenges.
The views of these stakeholders are not generalizable to those of all stakeholders with expertise in the area of alternative jet fuels; however, we believe that they represent a balanced and informed perspective on the topics discussed. In addition, we reviewed relevant literature obtained through background research and from federal agency officials that discussed challenges to developing and using alternative fuels and planned or potential federal actions to help address those challenges. We also conducted a literature search and reviewed five documents that we identified as relevant to challenges related to developing and using alternative jet fuels. Our literature search targeted bibliographic databases containing content on commercial and defense aviation, energy, or both, including Transportation Research International Documentation (TRID); SciSearch; and the Defense Technical Information Center (DTIC). Within these resources, the search focused on scholarly journal articles, conference papers, government reports, and industry trade press published in 2011 and forward. Through the literature search and review of abstracts, we initially identified 36 documents that potentially discussed economic, policy, technological, or other challenges to developing and using alternative jet fuels. We could not obtain 2 of the 36 documents because they were not readily available. Ultimately, after a review of the remaining 34 documents, we identified 5 relevant to our review and used them to help corroborate the key challenges identified by the 23 stakeholders we had interviewed. For those studies we cited in the report, we reviewed their methods, assumptions, and limitations to ensure that they were sufficiently methodologically sound and determined that they were sufficiently reliable for the purposes of our report.
Lastly, we interviewed officials from the five selected federal agencies and representatives from other non-federal entities involved in the alternative jet-fuels industry, including fuel producers, airlines, airframe and engine manufacturers, environmental groups, and a private financier to also obtain their views on key challenges to developing and using alternative jet fuels and actions that the federal government plans to or could take to help address those challenges. We did not rank the planned or potential federal actions to help address the key challenges that were identified. We conducted this performance audit from February 2013 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individuals named above, Paul Aussendorf, Assistant Director; Marilyn K. Wasleski, Assistant Director; William Colwell; Leia Dickerson; Bert Japikse; Delwen Jones; Shvetal Khanna; Sara Ann Moessbauer; Chris Murray; Josh Ormond; Madhav Panwar; Richard Scott; Marylynn Sergent; Gretchen Snoey; Benjamin Soltoff; Ardith Spence; Maria Stattel; and Elizabeth Wood made key contributions to this report.

The federal government has encouraged the development and use of alternative fuels to reduce greenhouse gas emissions associated with aviation and to enhance economic development and energy security for the United States. To help achieve these goals of reducing greenhouse gas emissions, the aviation industry is actively supporting alternative jet fuels. GAO was asked to provide information on the progress and challenges to developing and using alternative jet fuels in the United States.
This report examines (1) the role of the federal government in the development and use of alternative jet fuels and (2) key challenges to developing and using alternative jet fuels and actions that the federal government plans to or could take to help address those challenges. GAO interviewed officials from five federal agencies—FAA, USDA, DOE, DOD, and EPA. GAO selected these agencies for review because GAO identified them as the federal agencies most involved in the development and use of alternative jet fuels. GAO also reviewed relevant literature and federal and industry documents and discussed challenges and potential federal actions with 23 stakeholders from government, academia, and the private sector, selected to represent a range of perspectives and expertise in areas related to each step in the development and use of alternative jet fuels. GAO is not making recommendations in this report. DOT, USDA, DOE, DOD, and EPA reviewed a draft of this report and provided technical comments that were incorporated as appropriate. The federal government supports the development and use of alternative jet fuels through both broad and targeted initiatives. Broad national strategies promote the development of a variety of alternative fuels—including alternative jet fuel—to help achieve national goals, such as securing energy independence, fostering economic development, and reducing greenhouse gas emissions. In addition, the renewable fuel program—established by law in 2005 to encourage greater use of renewable fuels and administered by the Environmental Protection Agency (EPA)—requires that U.S. transportation fuels contain certain amounts of renewable fuels annually, increasing from 9 billion gallons in 2008 to 36 billion gallons in 2022.
The other four federal agencies that GAO reviewed—Department of Transportation's (DOT) Federal Aviation Administration (FAA), Department of Agriculture (USDA), Department of Energy (DOE), and Department of Defense (DOD)—directly support alternative jet fuels through targeted goals, initiatives, and interagency and industry coordination efforts. For example, FAA set a goal for the U.S. aviation industry to use 1 billion gallons of alternative jet fuels annually by 2018. The four agencies also sponsor research that specifically targets alternative jet-fuel development or provide direct support for its future commercial production, or both. For example, FAA and DOD support research to determine the technical feasibility of using new alternative jet fuels on aircraft and in existing infrastructure. Also, USDA, DOE, and DOD have coordinated their activities to support the future construction or retrofit of multiple domestic commercial- or pre-commercial-scale production facilities to produce alternative fuels, including alternative jet fuels. Specifically, in May and June 2013, four private fuel producers received awards totaling $20.5 million in federal funds, with private industry paying at least 50 percent of the cost. Achieving price competitiveness for alternative jet fuels is the overarching challenge to developing a viable market. No alternative jet fuels are currently commercially available at prices competitive with conventional jet fuels. The 23 stakeholders that GAO interviewed most frequently cited high development costs and the uncertainty of federal regulations and policies as primary reasons why alternative jet fuels are not priced competitively and believe that federal activities are needed to help advance the alternative jet-fuels industry.
For example, according to 10 stakeholders, fuel producers face difficulties in obtaining private investment to help construct commercial-scale fuel production facilities, in part because of concerns about the supply and high cost of feedstock (the source used to produce the fuel, such as crops) and high capital costs. Also, 13 stakeholders stated that continued uncertainty about the future of current federal policies—particularly the renewable fuel program—generally causes potential investors to discount the value of federal subsidies, discounting that, in turn, limits the support these policies may provide the industry. Stakeholders identified a variety of federal actions to advance alternative jet-fuels development, including continuing current federal research efforts, providing greater regulatory and policy certainty, and giving more direct financial support. However, even if the cost to produce alternative jet fuels is reduced, market factors may still determine the long-term success of the industry. The main market factors identified by stakeholders were (1) comparative value of competing end products, (2) feedstock prices, and (3) the costs of conventional jet fuels. |
About 90 percent of the U.S. blood supply is collected by two suppliers—the American Red Cross and independent centers affiliated with ABC. Generally, suppliers collect, test, and process blood and sell it to health care providers. FDA is responsible for ensuring the safety of the U.S. blood supply, which it does by inspecting blood collection procedures and enforcing federal regulations. Although past monitoring efforts by industry and nonprofit groups have examined supply and demand trends for blood, current efforts are focused on providing daily monitoring of hospitals’ blood inventories. In the United States, about 8 million volunteers donate approximately 14 million units of whole blood each year. Sixty percent of the population is eligible to donate blood, but in any given year only about 5 percent of those who are eligible actually do so. Eighty percent of donors are repeat donors. A typical donor gives blood approximately 1.6 times a year, but donors may give 6 times a year, or every 8 weeks, which is the period the body needs to replenish red blood cells. The Red Cross and ABC each collect about 45 percent of the nation’s blood supply, and roughly 10 percent is supplied by other independent blood centers, DOD, and hospitals that have their own blood banks. Most hospital transfusion services purchase blood and blood components under a contract with a local supplier that describes the price and quantity of blood to be delivered. Blood suppliers use resource-sharing programs to help suppliers in high-demand areas buy blood that is not needed by the supplier that collected it. Taken together, the Red Cross, ABC, and AABB’s National Blood Exchange moved about 1.4 million units of blood—over 10 percent of the nation’s supply—among suppliers in 2000. In addition, the Red Cross has a nationwide inventory control system to facilitate the movement of its surplus blood.
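The donor figures above are roughly mutually consistent, as the sketch below shows. The U.S. population value is an assumption (approximately the figure for this period, not stated in the report); the eligibility share, participation rate, and typical donation frequency come from the text.

```python
# Rough consistency check on the donor statistics in the text. The
# population figure is an assumption (circa 2001); the rates are from
# the report.
US_POPULATION = 280_000_000   # assumed, circa 2001
ELIGIBLE_SHARE = 0.60         # share of the population eligible to donate
PARTICIPATION = 0.05          # share of eligible people who donate each year
DONATIONS_PER_DONOR = 1.6     # typical donations per donor per year

donors = US_POPULATION * ELIGIBLE_SHARE * PARTICIPATION
units = donors * DONATIONS_PER_DONOR
print(f"Estimated donors: {donors / 1e6:.1f} million")  # ~8.4 million
print(f"Estimated units:  {units / 1e6:.1f} million")   # ~13.4 million
```

The estimate of roughly 8.4 million donors and 13.4 million whole-blood units a year is close to the report's figures of about 8 million donors and approximately 14 million units.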
Donated blood is tested for blood type (A, B, AB, and O) and Rh type (positive or negative). Donors with type O Rh negative blood are known as “universal donors,” since their blood can be given to patients of any blood type in an emergency. Donated blood is also screened for a number of diseases and other elements that could prevent its use. For example, blood is tested for red blood cell antibodies that may cause an adverse reaction in recipients and screened for hepatitis viruses B and C, human immunodeficiency viruses (HIV) 1 and 2, other viruses, and syphilis. Most U.S. blood products are now filtered to remove a class of cells known as leukocytes (white blood cells), which have been implicated in adverse transfusion reactions. Each unit of whole blood is separated into specialized components, or “products,” consisting of various types of blood cells, plasma, and special preparations of plasma. Health care facilities transfuse the resulting 26.5 million components into about 4.5 million patients per year. Red blood cells may be stored as a liquid for up to 42 days. Blood banks maintain a supply cushion to meet the uncertain demand for blood. This means that some blood is discarded; for example, from January through August 2001, about 2 percent of the blood supply expired without being transfused. Red blood cells can also be frozen and stored for later use. The military makes extensive use of frozen blood inventories to meet wartime contingencies, maintaining stocks of frozen type O units that can be transfused into most patients regardless of their blood types. However, because freezing and thawing blood is expensive and labor intensive, civilian blood centers maintain relatively small inventories of frozen blood, primarily of rare blood types. A new device approved by FDA in May 2001 may make frozen blood more useful in the future—it can extend the shelf life of thawed, previously frozen blood from 24 hours to 14 days.
There are several ways for hospitals to reduce the amount of blood they use. For example, one large hospital we contacted was able to save $1 million and 10,000 units of blood over 8 years by promoting awareness of blood use among physicians and by improving how blood is ordered and used during surgeries. A recent study of blood use during neurosurgery at a large teaching hospital found that, because the hospital’s system for ordering blood had not kept pace with advancements in surgical techniques, physicians ordered 5.5 times more blood than was transfused during surgery. One multifaceted approach to blood conservation is known as bloodless surgery. This practice involves the use of pharmaceuticals that stimulate the production of red blood cells, surgical equipment that cleans and returns lost blood to the patient, and intravenous solutions that maintain blood volume. During a pilot study of bloodless surgery techniques, one hospital successfully used these techniques instead of blood transfusions for several hundred surgical patients.

Federal Regulation of Blood

The Public Health Service Act (PHSA) and the Federal Food, Drug, and Cosmetic Act form the basis of the Public Health Service’s authority, as enforced by FDA, to ensure the safety of blood that is collected and transfused in the United States. PHSA requires that all blood and blood components distributed in interstate commerce be licensed by FDA in order to ensure that the products are safe and effective. Under PHSA, FDA can recall blood and blood components that present an imminent or substantial hazard to public health. The licensing and regulatory standards set by FDA attempt to maintain a blood supply that is both adequate and safe. Blood suppliers routinely take safety precautions beyond those required by FDA. For example, although FDA has not required nucleic acid testing (NAT), a sophisticated test to detect HIV and hepatitis C virus (HCV), virtually all blood centers perform it.
Similarly, FDA has not mandated universal leukoreduction, but most blood centers have adopted the practice. When suppliers violate regulations, FDA takes legal action to prevent further violations. These legal actions can result in the parties entering into consent decrees of permanent injunction to comply with all applicable blood safety rules. Several blood and plasma suppliers as well as manufacturers of blood testing supplies are currently under consent decrees for various violations. One of the most significant of these agreements now in force is with the Red Cross, which entered into a consent decree in 1993, after FDA discovered that the Red Cross had failed to follow its own standard operating procedures, had deficiencies in its quality control processes, and had committed other violations. FDA has no authority to determine the amount of blood that should be collected or to compel suppliers to make products available. However, FDA recognizes that an insufficient blood supply is a public health risk, and it can make certain recommendations within its authority under PHSA and the Federal Food, Drug and Cosmetic Act, as amended, related to the availability of blood during public health emergencies. In an emergency, FDA and other HHS agencies can give advice to blood banks on prioritizing the use of blood and facilitating the shipment of existing inventory to the areas affected. For example, after the September 11 attacks, FDA issued emergency guidelines to speed the delivery of blood to areas affected by the attacks. The guidelines allowed donated blood to be shipped to crisis areas before NAT was completed and to allow clinical staff who were not trained in all procedures to collect blood, in order to supplement the fully trained staff. FDA’s emergency guidelines were rescinded on September 14, 2001, upon recognition that blood supplies were more than adequate to address current needs. 
HHS also can purchase blood and blood components and make other arrangements to respond to threats to the safety and sufficiency of the blood supply. While periodic surveys of the blood supply have been conducted for years, no data on daily, weekly, or monthly national and regional blood collections or usages were readily available to federal officials or blood suppliers until 2000. NBDRC has conducted a biennial retrospective survey of blood suppliers since 1997, and others conducted similar periodic surveys before that. NBDRC’s latest comprehensive biennial survey of blood supply and usage measured all units collected and transfused in 1999. In periods between these biennial surveys, NBDRC conducts interim retrospective studies that measure the pace and number of collections. In addition, both the Red Cross and ABC have reported their annual collections from 1996 through 2001. Both the Red Cross and ABC have taken steps recently to improve the measurement of blood collections and inventories in their own centers. For example, the Red Cross recently introduced a large-scale, centralized inventory tracking system. This system monitors blood inventories and distribution daily across all Red Cross blood centers, enabling projections of demand and potential shortages using both daily data and historic blood usage patterns. Since March 2002, the independent blood centers affiliated with ABC have participated in a less comprehensive daily inventory reporting system. In November 1999, HHS made a commitment to improve the monitoring of the blood supply as part of its Blood Action Plan announced in 1998. As a first step, the HHS Office of the Assistant Secretary for Health and NHLBI contracted with NBDRC to provide monthly data on supply and demand trends using a statistically representative sample of 26 blood suppliers that account for about one-third of U.S. blood collections. 
Data from this survey in 2000 indicated that the blood inventory was stable and that blood banks were absorbing the impact of the first vCJD donor deferral better than initially expected. NHLBI terminated the NBDRC contract, and OPHS assumed support for the NBDRC data collection effort through the end of 2001. NBDRC has continued this data collection effort without public funding. Partly to compensate for the loss of the NBDRC data, OPHS introduced its own early warning, or sentinel, system in August 2001. The system is designed to detect blood shortages that may adversely affect patient care and analyze demand trends at transfusion centers and hospitals nationwide. OPHS collects daily blood inventory and use data from 26 hospitals and three transfusion centers that account for about 10 percent of the national blood inventory. Although the hospital sample is not statistically representative, it includes both small and large hospitals in different geographic regions of the United States that are meant to serve as indicators of impending blood shortages. To obtain supply data, OPHS has also begun negotiations with ABC and the Red Cross to make available daily supply data from their collection centers, although neither ABC nor the Red Cross has yet agreed to do so. First reported in 1996, vCJD is a progressive and invariably fatal neurodegenerative disease, part of a broader class of diseases known as transmissible spongiform encephalopathies (TSE). As of June 2002, there were 130 individuals with confirmed or probable cases of vCJD: 122 in the United Kingdom, 6 in France, 1 in the Republic of Ireland, and 1 in Italy. It is suspected that these individuals contracted the disease from eating meat from cattle infected with BSE (mad cow disease) in the United Kingdom before 1990. Cattle herds in the United Kingdom suffered an epidemic of BSE that peaked in 1992 and subsequently declined as a result of government actions to change the composition of cattle feed.
The incubation period for vCJD is long, but its precise length is not known. This makes it difficult to project how many people will ultimately become ill. The United States has one likely case of vCJD, a 22-year-old citizen of the United Kingdom living in Florida who is thought to have acquired vCJD in the United Kingdom. There have been no confirmed cases of BSE in U.S. cattle. In response to the possibility that vCJD could be transmitted through blood transfusions, in November 1999 FDA recommended that, by April 2000, blood banks defer collections from individuals who had resided or traveled in the United Kingdom for a total of 6 months or more from 1980 through 1996. In recognition of the evolving BSE epidemic, FDA issued a more restrictive policy in January 2002. Available data indicate that both blood collections and transfusions increased substantially from 1997 through 2001. While local and temporary blood shortages have occurred periodically, the nation’s blood supply generally is adequate. Although blood collections increased nearly 40 percent in the weeks immediately following September 11, they have since returned to pre-September 11 levels, following the pattern of collections after other emergencies. The inventory of blood in America’s hospitals was at historically high levels before the surge in collections after September 11 and has remained adequate through the first 5 months of 2002. Although no one data source has comprehensively tracked the nation’s blood supply in the past, all of the sources we identified indicated that the national supply has grown in recent years and was at historically high levels before the surge in donations that occurred after September 11. Annual blood collections have increased substantially—21 percent—since 1997, according to NBDRC measurements and estimates of annual blood collections by all blood centers. (See fig. 1.)
The number of units of blood donated annually increased from 12.4 million in 1997 to an estimated 15 million in 2001. (NBDRC estimated that 2001 collections would have reached 14.5 million units, 17 percent higher than in 1997, without the post- September 11 surge.) The increase in supply has kept pace with the increase in the amount of blood transfused; for example, NBDRC data indicated that the number of red cell units transfused rose 17 percent from 1997 to 2001, from 11.5 million to 13.5 million units, and the annual number of units that were not transfused remained at about 1 million units, not counting the post-September 11 surge. Available data indicate that 2001 collections had risen even before the increase in donations following September 11. For example, the Red Cross reported a 2.2 percent growth in total collections for the first 7 months of 2001 over the same period in 2000. In addition, reflecting the success of a Red Cross campaign to increase donations, the number of units collected at Red Cross blood centers was 8 percent higher in July and August 2001 than the number collected during the same period in 2000. Similarly, NBDRC reported that the 26 blood suppliers included in its statistically representative national sample increased blood deliveries to transfusion centers by 5 percent in May, June, and July 2001, compared with that period in 2000. The increased collections placed the inventories in America’s blood banks at historically high levels just prior to the September 11 attacks. The Red Cross reported that its total red blood cell inventory was 33 percent higher in August 2001 than it was in August 2000 and that its type O inventory was 83 percent higher than it was in August 2000. The New York Blood Center (NYBC) reported that it had a 4- to 5-day supply of blood on hand in early September. 
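As a quick consistency check, the growth percentages above follow directly from the unit counts; this hypothetical sketch (our arithmetic, not the report's) recomputes them:

```python
# Hypothetical recomputation of the growth figures cited above;
# the unit counts are the NBDRC estimates quoted in the text.
collections_1997 = 12.4e6   # units collected in 1997
collections_2001 = 15.0e6   # estimated units collected in 2001 (with surge)
transfused_1997 = 11.5e6    # red cell units transfused in 1997
transfused_2001 = 13.5e6    # red cell units transfused in 2001

collections_growth = collections_2001 / collections_1997 - 1
transfusions_growth = transfused_2001 / transfused_1997 - 1

print(f"Collections growth, 1997-2001: {collections_growth:.0%}")   # 21%
print(f"Red cell transfusion growth:   {transfusions_growth:.0%}")  # 17%
```

Both figures round to the percentages reported by NBDRC, confirming the internal consistency of the cited data.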
On September 10, 2001, the median inventory for the hospitals in HHS’s Blood Sentinel Surveillance System for all blood types stood at approximately 7 days, and for type O Rh negative blood, at 6 days. In response to the perception that blood was needed to treat victims of the terrorist attacks, Americans greatly increased their blood donations in the weeks immediately after September 11. NBDRC estimated that total blood collections in the United States were 38 percent higher in September 2001 than average monthly collections earlier in 2001. The Red Cross reported that its national blood collections during the week of September 11 more than doubled compared with the preceding weeks. However, as with previous disasters, the sharp increase in blood collections in response to September 11 did not last. While higher than usual blood collections continued for several weeks after September 11, the number of units collected had returned to the baseline level or slightly below it by the beginning of November. The post-September 11 pattern of collections mirrors the collections after the April 19, 1995, bombing of the Alfred P. Murrah Federal Building in Oklahoma City. (See figs. 2 and 3.) Like the September 11 attacks, the bombing of the Murrah building was a discrete event—there were no further attacks—and it became clear soon after the attack that a large supply of blood would not be needed for the survivors. The Oklahoma Blood Institute (OBI), the primary blood supplier for the area, recorded a nearly 45 percent increase in donations for April 1995 compared with the previous month. The spike included an increase in repeat donors and an 85 percent increase in first-time donors. But collections rapidly returned to their baseline level in May. In contrast with the Oklahoma City bombing, the Persian Gulf War was accompanied by a perceived need for blood that spanned a longer period.
OBI’s data recorded a sustained increase in donations for 3 months beginning in November 1990, peaking in January 1991 at more than 25 percent higher than usual, and continuing through the end of the conflict in February 1991. But by March 1991, donations had returned to baseline levels. The limited information available to us indicates that blood collections early in 2002 were roughly comparable to the levels immediately prior to September 11. For instance, the number of units collected in April 2002 by the 26 blood centers in NBDRC’s sample was approximately equal to the number collected in August 2001. Similarly, the hospital inventories measured by HHS’s Blood Sentinel Surveillance System in early May 2002 were similar to those measured just prior to September 11, 2001. The high volume of blood donations made immediately after September 11, and the very small amount of blood needed to treat survivors, resulted in a national surplus—supply was substantially greater than needed for transfusions. Consequently, the proportion of units that expired and were discarded in October and November 2001 was six times higher than the proportion that expired in an average 2-month period in early 2001. Blood suppliers and the federal government are reevaluating how blood is collected during and after disasters to avert the large amounts of blood that went unused and the logistical strains of collecting unneeded blood. A task force of federal and blood supply officials has been created to coordinate blood suppliers’ response to future disasters. Incorporating the lessons learned from past disasters, the task force has recommended that blood banks focus on maintaining a consistently adequate nationwide inventory in preparation for disasters and that they not collect more blood after a disaster than is medically necessary. America’s blood banks collected an unprecedented amount of blood in a short period after the September 11 attacks.
HHS, ABC, and the Red Cross all issued requests for blood donations, although HHS and ABC quickly stopped issuing requests when it became clear that there were few survivors of the attacks and there was a limited additional need for transfusions. Many blood suppliers were reluctant to turn away potential donors, and some hospitals that did not have their own blood banks responded to the surge in volunteers by collecting blood anyway. This surge of donors stressed the collection system. Shortages in blood collecting supplies, phlebotomists (technicians trained to collect blood), and storage capacity occurred as more potential donors arrived. Long waiting lines developed because there was insufficient staff to draw blood. Far more blood was collected immediately after September 11 than was needed by survivors or than ultimately could be absorbed by the nation’s blood banks. Estimates of the number of additional units collected nationwide range from 475,000 to 572,000, and fewer than 260 units were used to treat victims of the attacks. A portion of this additional supply went unused, expired, and was discarded. The Red Cross reported that its collections peaked from September 11 through October 14, and that 5.4 percent of the blood it collected during that time went unused and expired. ABC officials told us that its affiliated blood banks discarded approximately 4 percent of the blood they collected after September 11, although the officials cautioned that the figures reported to them by their independent centers might have underestimated the number of units that expired. NBDRC’s monthly survey of a nationally representative sample of 26 blood suppliers found that a higher percentage of units were outdated. NBDRC reported that about 10 percent of the units collected in September and October by the suppliers it surveyed were outdated and discarded. 
This was nearly a five-fold increase over the proportion of units these suppliers outdated and discarded in the first 8 months of 2001—about 2 percent of their collections, on average. On the basis of NBDRC’s figures, we estimate that approximately 250,000 units of blood were outdated and discarded in October and November 2001; this is nearly six times the estimated 42,000 units discarded in an average 2-month period earlier in 2001. All of these figures may underestimate the total number of expired units, since they represent expirations at blood suppliers only and do not capture units that may have expired in hospital inventories. Increased errors in the collection process at some blood banks accompanied the surge in donations. As much as 20 percent of some blood banks’ donations were collected improperly and had to be discarded, primarily because individuals had not completed the donor questionnaire correctly. Some blood banks also suffered serious financial losses, as they incurred the costs of collecting and processing units of blood they could not sell. For example, NYBC claimed it lost from $4 million to $5 million and suffered a nearly three-fold increase in the number of units it had to discard when blood donated in response to the attack expired. Since September 11, federal public health agencies and blood suppliers have found fault with their responses to prior disasters and begun to plan for a more effective response to future emergencies. Through an interorganizational task force organized by AABB in late 2001, the focus has begun to shift away from increasing blood collections in an emergency to maintaining an adequate inventory of blood at all times. This shift was prompted by the realization that a surge in blood collections following a disaster does not help victims because disaster victims rarely require many units of blood and because newly collected blood cannot be used immediately.
For example, as with September 11, only a small percentage of the additional blood collected after the Oklahoma City bombing was transfused into victims (131 units of more than 9,000 units collected). Moreover, the units used to treat victims in the hours after a disaster are those already on hand at the treating hospital or local blood bank. It takes 2 days to completely process and test a unit of newly donated blood, so existing stores of blood must be used to treat disaster casualties. Finally, military experts and blood industry officials told us that it is unlikely a discrete disaster scenario would require more blood than is normally stored in the nation’s blood inventory. They noted that large amounts of blood have not been needed in building collapses (like the September 11 attacks and the Oklahoma City bombing), nor would blood transfusions be a likely treatment for illnesses caused by a bioterrorism attack. The AABB task force report made recommendations for the emergency preparedness of the blood supply that were adopted by the HHS Advisory Committee on Blood Safety and Availability. The recommendations are aimed at having federal and other organizations that are involved in the collection or use of blood coordinate their actions in an emergency. For example, the task force recommended that all blood banks be designated suppliers of blood in an emergency and that the Assistant Secretary for Health serve as the spokesperson for all organizations involved in managing and transporting blood in an emergency. The task force also recommended that it act as the coordinating group during emergencies to assess the medical needs of victims for blood. Both the Red Cross and ABC are independently pursuing their own plans to meet emergency and long-term needs. The Red Cross expects to increase annual collections by 9 percent during each of the next 5 years.
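A 9 percent annual increase compounds substantially over 5 years; this hypothetical sketch illustrates the arithmetic (the cumulative figure is our illustration, not a Red Cross projection):

```python
# Hypothetical sketch: compounding a 9 percent annual increase in
# collections over 5 years. The cumulative figure is illustrative
# arithmetic, not a Red Cross projection.
annual_growth = 0.09
years = 5

cumulative = (1 + annual_growth) ** years - 1
print(f"Cumulative increase after {years} years: {cumulative:.0%}")  # 54%
```

If achieved, such a plan would raise annual collections by roughly half over the period, well above the deferral-related donor losses discussed below.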
The Red Cross also plans to implement a “strategic blood reserve” within the next 5 years using preregistered donors and a limited stock of frozen blood cells. ABC has established a “national strategic donor reserve” through which it can call on the donors it has registered, if needed. In response to the increased incidence of BSE in the cattle herds of many European countries, FDA, the Red Cross, and DOD are prohibiting blood donations from a greater proportion of individuals who have resided in countries where there is a risk of acquiring vCJD by eating contaminated meat. FDA estimates that its new deferral policy will further reduce the risk of possible exposure to vCJD by 23 percent but that it will disqualify about 5 percent of current blood donors in the United States. Nonetheless, given the overall growth in blood collections in recent years, it is likely that suppliers and others involved in blood collections, on the whole, can compensate for donor losses from the new policy. In August 1999, FDA issued guidance that recommended prohibiting donations from individuals who had resided or traveled in the United Kingdom for a total of 6 months or more from 1980 through 1996, a period during which that country experienced an epidemic of BSE in cattle. In response to the detection of BSE in European cattle herds, FDA issued guidance in January 2002 to expand this recommended exclusion to prohibit donations from individuals who had spent a cumulative 3 months in the United Kingdom from 1980 through 1996, or 5 years or more in a European country since 1980. The portion of FDA’s new guidance pertaining to residents of the United Kingdom and France took effect on May 31, 2002, and the deferral of donors who have resided in other European countries will take effect on October 31, 2002.
FDA’s guidance exempts donors of source plasma who had resided in Europe for 5 years from 1980 through 1996, but it prohibits source plasma donations from those who had resided in the United Kingdom for at least 3 months from 1980 through 1996. The guidance also recommends indefinite deferral of source plasma donors who have spent 5 or more years cumulatively in France from 1980 to the present. The Red Cross and DOD have independently adopted donor deferral policies for their blood centers that are more stringent than FDA’s guidance. The Red Cross excludes donors who have spent a cumulative 3 months or more in the United Kingdom or 6 months in a European country since 1980. The Red Cross policy does not exempt plasma donors because most of its plasma is recovered plasma from donors of whole blood. DOD’s policy made minor modifications to FDA’s new deferral criteria. The new deferral policies are described in greater detail in appendix I. Because so little is known about the etiology of vCJD, estimates of the public health benefits from blood donor exclusions related to vCJD are uncertain. It has not been established that vCJD is transmissible through blood, and no tests to diagnose vCJD or detect vCJD in blood have been developed. Nonetheless, laboratory experiments point to a theoretical risk of transmission of vCJD through blood. (See app. II for a description of scientific research on vCJD.) FDA estimates that the additional risk reduction from the new vCJD donor deferral policies is substantially lower than the risk reduction derived from its initial deferral guidance. FDA estimates that its initial donor deferral, which took effect in April 2000, reduced the theoretical risk of vCJD transmission through blood transfusion in the United States by 68 percent and that the expanded deferral guidance is expected to reduce the total risk of donor exposure to the agent that causes vCJD by an additional 23 percent, for a total risk reduction of 91 percent.
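Because FDA expresses each percentage relative to the original baseline risk, the two reductions add rather than compound; a minimal sketch of that arithmetic:

```python
# Sketch of FDA's cumulative risk-reduction arithmetic. Both
# percentages are fractions of the original baseline risk, so
# they add rather than compound.
initial_deferral = 0.68    # reduction from the April 2000 deferral
expanded_deferral = 0.23   # additional reduction from the 2002 guidance

total_reduction = initial_deferral + expanded_deferral
print(f"Total risk reduction: {total_reduction:.0%}")  # 91%
```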
Using the same methodology, FDA estimates that the Red Cross’s new donor deferral policy will decrease the total theoretical risk of exposure to the vCJD agent by 92 percent (1 percent more than FDA’s donor deferral recommendations). Estimates of the percentage of current donors who would be disqualified under the new deferral policies are substantially larger than the estimated donor losses from the first vCJD donor deferrals. On the basis of data from a 1999 survey of blood donors, FDA estimates that its new deferral policy will disqualify about 5 percent of current blood donors and that the Red Cross deferral policy will disqualify about 9 percent. On the basis of the results of a June 2001 survey of its own blood donors, the Red Cross estimated that its deferral policy would be less disruptive than FDA expects, resulting in a loss of about 4 percent of active donors. The overall growth in the U.S. blood supply in recent years and the demonstrated ability of particular blood suppliers to increase collections indicate that the blood industry as a whole can compensate for donor losses from the new vCJD donor deferrals. First, as we noted earlier, the long-term trends in blood collections are positive, and collections have increased substantially over the last 5 years. For example, prior to September 11, NBDRC had estimated that the nation’s blood collections for 2001 would exceed the number of units transfused in 2000 by more than 7 percent. Second, the Red Cross was able to increase its blood collections in early 2001—collections were 2 percent higher in the first 7 months of 2001 compared with 2000—despite the April 2000 implementation of FDA’s initial deferral guidance and the Red Cross’s adoption of a new technique to measure red blood cell levels that disqualified 6 percent of potential donors at its centers. The Red Cross reported collections in July 2001 that were 8 percent higher than for the same period in 2000.
Finally, before September 11, NYBC was able to increase its collections at a 12 percent annual rate over the last few years. We believe that this large and sustained increase in collections for an individual blood bank that was previously known for a chronic shortfall in collections indicates that blood centers will be able to increase collections in response to the new vCJD donor policy. Despite the adequacy of the nation’s blood supply, individual blood collection centers with a relatively large proportion of donors who have traveled to Europe will be more severely affected than others by the new exclusion policies. If these centers cannot find ways to increase local blood collections, they, or the hospitals they serve, will need to purchase blood from suppliers with an adequate inventory. The Red Cross donor survey found that its most affected regions would lose 5 percent of their donors, compared with 2 percent for the regions least affected. Blood centers in coastal urban areas that have a greater number of donors who have traveled overseas could experience deferral rates greater than 5 percent. Some other centers serving areas with many people who have lived overseas, such as DOD-affiliated personnel, will also be disproportionately affected. NYBC will probably be affected the most under FDA’s new deferral policy. NYBC currently imports about 25 percent of its supply from three European blood centers that collect blood under NYBC’s FDA license. NYBC will be unable to import blood from these centers when the second phase of FDA’s new deferral policy takes effect on October 31, 2002. Prior to September 11, NYBC was confident that it could compensate for the loss of supply from its European centers because it had substantially increased domestic collections during the last few years. 
However, NYBC now claims that its local donor base has decreased by about 25 percent since September 11 because many of the companies that participated in its blood drives were directly affected by the terrorist attacks and have reduced employment levels in the city. To compensate for the loss of blood from its European centers, NYBC has contracted to purchase blood from many other domestic blood suppliers, including the Red Cross and blood banks affiliated with ABC. Although blood is collected primarily from unpaid volunteers, blood banks incur costs from collecting, processing, and testing donated blood. To recover these costs, blood banks sell the processed blood to hospitals. The prices paid by different hospitals and prices for different types of blood vary substantially. Furthermore, the average price of blood has risen sharply since 1998. One of several contributing factors to these price increases has been the introduction of new blood safety measures. For example, leukoreduction adds about $30 to the price of a unit of blood. Although not widespread in 1998, leukoreduction is performed on most blood sold in the United States today.

Price of Blood Varies Widely

To recover the costs of collecting, processing, and testing, blood banks sell their processed blood. Because hospitals and suppliers negotiate the price and quantity of blood to be delivered, prices vary considerably depending on the size and location of the hospital and the type of blood purchased. Larger hospitals, and those in areas with more than one blood center, may sometimes pay less than other hospitals. For example, one of the hospitals we contacted told us that its average price for a unit of blood was $135, while another hospital told us that its average price was $200.
Similarly, ABC told us that the list prices charged by its centers for a unit of leukoreduced red blood cells in September 2001 averaged $143, but one-quarter of the centers charged $124 or less and one-quarter charged at least $160. In addition, prices for units of the most useful blood types can be much higher than those for blood types that are in less demand. For example, in 2001, one independent blood center charged its non-sole-source customers more than $260 for a unit of type O-negative blood but less than $60 for a unit of AB-positive blood. The average price of a unit of blood sold to U.S. hospitals has increased substantially since 1998. Both the Red Cross and ABC-affiliated blood banks increased average prices by more than 50 percent from 1998 through 2001 (see table 1). The Red Cross made additional price increases of 10 to 35 percent for different types of blood at the beginning of its fiscal year 2002 (which began July 1, 2001) that are not reflected in the table. Blood suppliers gave us several reasons for the recent price increases. They claimed that blood prices previously had been too low to support their blood collection and processing infrastructure. For example, according to a Red Cross official, the Red Cross revenue from blood services could not cover its costs associated with transporting blood, training and retaining staff, and obtaining and using new technologies. In addition, the Red Cross told us that it increased prices in order to hire additional staff needed to comply with the terms of its consent decree with FDA. New processing and testing steps that improve blood safety also have contributed to the price increases. The most substantial change is leukoreduction, the removal of white blood cells from blood. For example, the average nationwide price of a unit of blood from the Red Cross in fiscal year 2001 was $104 for nonleukoreduced blood and $136 for leukoreduced blood.
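The Red Cross's fiscal year 2001 averages imply a per-unit leukoreduction premium close to the roughly $30 figure cited earlier; a minimal sketch of that comparison:

```python
# Hypothetical comparison of the Red Cross's FY2001 average prices
# for nonleukoreduced and leukoreduced blood, as cited in the text.
price_nonleuko = 104   # average price per unit, nonleukoreduced ($)
price_leuko = 136      # average price per unit, leukoreduced ($)

premium = price_leuko - price_nonleuko
print(f"Implied leukoreduction premium: ${premium} per unit")  # $32 per unit
```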
The percentage of units that have been leukoreduced has risen sharply in recent years. The Red Cross reported that the percentage of its blood that was leukoreduced went from zero in 1998 to almost 80 percent in 2000 and to 95 percent at the beginning of 2002. ABC estimates that by December 2002 about 57 percent of the blood supplied by its affiliated blood centers will be leukoreduced. Similarly, a study commissioned by AABB has estimated that NAT added about $8 to the price of a unit of blood in 2000. Most blood supplied in the United States now undergoes NAT. The nation’s blood supply remains generally adequate, and collectively America’s blood banks probably will be able to compensate for donors lost as a result of the new vCJD donor deferral policies. Lessons learned from blood collection and usage after the September 11 terrorist attacks have prompted efforts to improve how blood suppliers respond to public health emergencies. However, questions about the adequacy of the blood supply will continue because the demand for blood is increasing and because new testing procedures and donor deferral policies that arise in response to emerging disease threats may continue to reduce the pool of potential donors. For these reasons, there is a clear need for comprehensive, long- term monitoring of the blood supply. We asked for comments on a draft of this report from HHS and DOD. HHS responded that it had no general comments. DOD concurred with our findings (see app. III). Both HHS and DOD made additional technical comments that we have incorporated where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Secretary of Health and Human Services, the Secretary of Defense, and other interested parties. We also will make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-7119 if you have any questions about this report. Another GAO contact and staff acknowledgments are listed in appendix IV. [Table from appendix I, partially recovered: FDA’s new deferral criteria cover cumulative time of 3 months or more in the United Kingdom from 1980 through 1996 and cumulative travel to Europe from 1980 to the present; FDA estimated a donor loss of 5 percent and a total risk reduction of 91 percent, 23 percent of it from the new deferral criteria.] FDA recommends deferral of source plasma donors with 5 years cumulative travel in France from 1980 to the present. However, FDA’s new deferral policy for 5 years exposure elsewhere in Europe does not apply to source plasma. In part, this reflects FDA’s belief that, on the basis of the results of experiments conducted by plasma product manufacturers, the manufacturing process for plasma-derivative products minimizes the risk of transmission of vCJD through plasma. In addition, FDA is concerned that disqualifying plasma donors by extending the deferral policy to them may threaten the sufficiency of the plasma supply. The Plasma Protein Therapeutics Association (PPTA) conducted a donor travel survey in 30 plasma collection centers and found that donor losses could range from 0 to 13 percent, with the greatest losses occurring at centers located near military bases. The overall donor loss was estimated to be about 3.5 percent. A survey conducted by one of PPTA’s member companies suggested that overall donor loss would be closer to 5 percent. PPTA also expected that a ban on the use of plasma in the United States from European donors, as would occur if the vCJD deferral policy was applied to plasma, would adversely affect an already tight supply of plasma-derived therapeutics, causing some countries to reject European plasma and thus putting extreme pressure on other sources of plasma, such as the United States, to meet global demand.
Transmission of vCJD by human blood or plasma has not been demonstrated, and no laboratory or epidemiological studies have shown that blood from donors infected with vCJD carries the disease. For example, at least 20 people in the United Kingdom have received blood or blood components from donors who later developed vCJD. Although relatively little time has passed, none of the recipients of the blood have developed vCJD. Studies of patients with vCJD and a prior history of receiving blood transfusions have not revealed any cases of vCJD among the donors involved. Nonetheless, laboratory experiments point to a theoretical risk of transmission of vCJD through blood. For example, examinations of tissue samples from vCJD patients have found the agents that cause vCJD, protein molecules known as prions, in human lymph tissue, such as the tonsils and the spleen. Since white blood cells known as B lymphocytes also circulate through these tissues and are potentially involved in the pathology of vCJD, researchers suggest that these circulating lymphocytes may carry infectivity in blood. Experiments with animals have shown that blood infected with vCJD-like agents contains low levels of infectivity. In addition, one group of researchers has recently demonstrated that BSE can be experimentally transmitted between sheep by blood transfusion. However, results from this experiment may not be representative of the human manifestation of vCJD.

Epidemiological Predictions

Researchers are limited in the conclusions they can make concerning vCJD and blood safety, and in predicting the future number of vCJD cases. Important variables in determining the probability of BSE transmission to humans, such as route of exposure, genetic susceptibility, and dose, remain unproven. Further, the incubation period for vCJD is unknown but is probably many years.
Citing the current modest number of additional deaths in the United Kingdom caused by vCJD (there were 28 confirmed or probable vCJD deaths in the United Kingdom in 2000 and 20 in 2001), some researchers suggest that the epidemic will not reach the hundreds of thousands once thought possible. As a result, the projected number of total cases has been revised downward to just a few hundred or few thousand cases, with fewer than 100 new cases occurring per year. Such revised estimates are based on varying assumptions regarding the average incubation period and when individuals were infected. The ambiguity of the scientific evidence regarding vCJD transmission through blood is reflected in the divided vote of FDA's advisory committee (the Transmissible Spongiform Encephalopathies Advisory Committee, or TSEAC) in favor of the expanded donor deferral. The committee voted 10 to 7 in June 2001 to move forward with the proposed changes, but several members expressed concern about the expanded deferral's impact on blood availability, the effectiveness of current efforts to control human exposure to BSE in the United Kingdom, and the reliability of European surveillance data. The scientific uncertainties surrounding vCJD would be greatly reduced if a diagnostic test existed to confirm the presence or absence of vCJD in human blood. While tests are being developed, it could be some time before an accurate test is available to screen blood for the vCJD agent. Tests do exist to detect vCJD prions in some human tissues, such as brain tissue, tonsils, and appendixes, but no suitable tests are available to detect vCJD infections in blood. Prions are different from viral and bacterial pathogens, which contain nucleic acids. Some pathogens and viruses trigger the human body to release specific antibodies, which may be detected in the blood. For example, both HIV and hepatitis elicit antibodies in the blood that can be detected in a blood test.
At this point, most scientists believe that prions, such as those involved in vCJD, do not contain nucleic acids and do not elicit the production of antibodies. This poses a challenge in designing a blood test, which must be 100,000 times as sensitive as assays that already exist for detecting prions in tissues. If a test were approved, it would also need to be extremely specific to minimize the possibility of false positives, which would unnecessarily defer many individuals whose blood did not actually contain the vCJD agent from donating. The following staff made important contributions to this work: Carolina Morgan, Sharif Idris, Mark Patterson, and Elizabeth Morrison.
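The false-positive concern can be made concrete with a short calculation. The figures below are purely illustrative, not actual vCJD prevalence or test characteristics; they show why a donor-screening test applied to millions of donors must be extraordinarily specific when the true prevalence of infection is very low.

```python
# Illustrative only: hypothetical prevalence and test characteristics,
# not actual vCJD figures. Even a 99.9-percent-specific test produces
# far more false positives than true positives at very low prevalence.

def screening_outcomes(donors, prevalence, sensitivity, specificity):
    infected = donors * prevalence
    healthy = donors - infected
    true_pos = infected * sensitivity          # infected donors correctly flagged
    false_pos = healthy * (1 - specificity)    # healthy donors wrongly flagged
    # Positive predictive value: chance a positive result is a true infection
    ppv = true_pos / (true_pos + false_pos)
    return true_pos, false_pos, ppv

# 8 million donors, 1-in-a-million prevalence, 99% sensitive, 99.9% specific
tp, fp, ppv = screening_outcomes(8_000_000, 1e-6, 0.99, 0.999)
print(f"true positives:  {tp:.0f}")
print(f"false positives: {fp:.0f}")
print(f"PPV: {ppv:.4f}")
```

Under these invented numbers, roughly 8,000 healthy donors would be deferred for every handful of true infections detected, which is why specificity dominates the design problem.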
According to the American Association of Blood Banks, every year about 8 million individuals donate 14 million pints of blood, and 4.5 million patients receive life saving blood transfusions. The available data indicate that the blood supply has increased in the last 5 years and that growth has kept pace with the rise in demand. Blood suppliers received a high volume of blood donations immediately after the September 11 attacks. However, the small amount of blood needed to treat survivors of the attacks resulted in a nationwide surplus. The nation's blood supply can compensate for donors lost because of new donor restrictions designed to further reduce the risk of variant Creutzfeldt-Jakob Disease transmission. The average price of blood has risen over 50 percent since 1998. Although blood is primarily collected from volunteers, blood suppliers incur costs by collecting, processing, and testing donated blood.
The United States Army is responsible for land-based military operations. It is the largest and oldest established branch of the U.S. military. The modern Army has its roots in the Continental Army, which was formed on June 14, 1775, before the establishment of the United States, to meet the demands of the American Revolutionary War. The Army’s mission is to fight and win our nation’s wars by providing prompt, sustained land dominance across the full range of military operations and spectrum of conflict in support of combatant commands. The Army does this by organizing, equipping, and training forces; accomplishing missions assigned by the President, the Secretary of Defense, and combatant commanders; and transforming for the future. For fiscal year 2010, Congress appropriated more than $52 billion to the “Military Personnel, Army” appropriation, which is a 1-year appropriation available for the pay, benefits, incentives, allowances, housing, subsistence, travel, and training primarily for active duty service members. The Defense Finance and Accounting Service in Indianapolis, Indiana (DFAS-IN) is responsible for accounting, disbursement, and reporting for the Army’s military personnel costs using the Defense Joint Military Pay System-Active Component (DJMS-AC). According to DFAS-IN, of the $52 billion in fiscal year 2010 military personnel appropriations, the Army’s nearly 680,000 service members received $46.1 billion in pay and allowances. To provide payroll support to the vast number of active Army service members, DFAS-IN has over 40 Defense Military Pay Offices within the United States that provide finance services to military personnel in designated geographical areas. The Statement of Budgetary Resources is the only financial statement predominantly derived from an entity’s budgetary accounts in accordance with budgetary accounting rules, which are incorporated into generally accepted accounting principles (GAAP) for the federal government. 
The Statement of Budgetary Resources is designed to provide information on authorized budgeted spending authority as reported in the Budget of the United States Government (President’s Budget), including budgetary resources, availability of budgetary resources, and how budgetary resources have been used. The Under Secretary of Defense for Personnel and Readiness (USD (P&R)) advises the Secretary of Defense on a number of personnel areas such as recruitment, pay and benefits, and oversight of military readiness, and serves as DOD’s Chief Human Capital Officer. The Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs (ASA (M&RA)) is responsible for setting the strategic direction and providing overall supervision for manpower, personnel, and Reserve component affairs of the Department of the Army, and serves as the Army’s lead for manpower policy and human resources, among other things. In order to fulfill these responsibilities, ASA (M&RA) relies on the Deputy Chief of Staff, G-1, for advice and assistance. In addition to being the principal military advisor to ASA (M&RA), G-1’s other responsibilities include developing policy that provides guidance for responsive and flexible human resources support of the Army and overseeing the officer accession and enlisted recruiting policy. The Human Resources Command supports the Deputy Chief of Staff, G-1, in the management of all military personnel by serving as the functional proponent for military personnel management and personnel systems. Army Human Resources Command, unit commanders, and training certification officials, among others, are responsible for providing DFAS-IN with accurate and timely information regarding changes in individual military member status necessary to maintain accurate and timely payroll accounts. 
Offsetting collections are collections from intragovernmental transfers, business-like transactions with the public, and collections from the public that are governmental in nature but required by law to be classified as offsetting. These collections are all authorized by law to be credited to appropriation or fund expenditure accounts. As illustrated in figure 1, military pay accounts are established as part of the enlistment process for new recruits and are based on personnel records. The recruiting office establishes the basics of the recruit’s personnel file in the Army Recruiting Information Support System (ARISS). This file contains the recruit’s full name, contact information, country of origin, social security number (SSN), and recruiting status. After applying, the recruit reports to a Military Entrance Processing Station (MEPS), which works with the recruiting office to qualify applicants for military service and serves as the quality control unit between the recruiter and the military service. The applicant concludes the MEPS visit by signing either an enlistment contract or a delayed entry contract. The contract notes the terms of enlistment, such as pay grade, which relate to basic pay information. This completes the entrance documents that the MEPS collects. MEPS personnel then administer the Oath of Enlistment. All of the documents created are electronically transmitted to the respective service recruiting system, that is, ARISS for the Army, and a paper copy, referred to as a “Packet,” is created that accompanies the recruit to the training installation where the Packet is delivered to the personnel office at the Reception Battalion. Recruits generally report to their assigned installation Reception Battalion for training within 2 weeks to 14 months after signing their enlistment contract. 
The service liaison counselor keeps the documentation Packet until the enlistee reports to the Reception Battalion, at which time the Packet is sent to the respective training installation. The Reception Battalion uses information contained in the Packet to create a personnel file in the Reception Battalion Automated Support System (RECBASS). The enlistee provides additional information on dependents, such as a marriage certificate, birth certificates for dependent children, and W-4 dependent information. Reception Battalion personnel staff assist the enlistee in filling out any additional forms if they were not included in the Packet, such as direct deposit, pay allotments for base housing, savings account deposits, child support, and emergency contact information. Personnel staff enter this information into the personnel system and send the information to the installation’s Defense Military Pay Office where the enlistee’s payroll account, referred to as a master military pay account, is created in DJMS-AC. Military pay starts once the payroll account is established in DJMS-AC. Army active duty military personnel receive pay and allowances based on their grade and time in service; location; and whether they are married, have dependents, or are performing special duties. The Army’s active duty service members may elect to be paid once a month at the end of the month or twice a month at mid-month and at the end of month. The service member’s pay information is consolidated into one monthly Leave and Earnings Statement. In addition to basic pay, military members may also be eligible for cash recruitment or retention incentives (i.e., bonuses). Any necessary pay change after the pay account is set up is initiated by the appropriate officials throughout the Army. These changes generally relate to promotions, special duty pay, incentive pay, Permanent Change of Station assignments, Temporary Change of Station assignments, and changes in dependents. 
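The pay-and-allowance structure described above can be illustrated with a small computation. All grades, rates, and multipliers below are invented placeholders, not actual DOD pay tables; the sketch shows only how grade, time in service, dependency status, and special duties combine into a monthly amount.

```python
# Sketch of how a monthly pay amount is assembled from grade, time in
# service, and status flags. All rates are invented placeholders.

# (grade, minimum years of service) -> hypothetical monthly basic pay
BASIC_PAY = {
    ("E-1", 0): 1468.0,
    ("E-4", 2): 2000.0,
    ("E-4", 4): 2170.0,
}

def monthly_pay(grade, years_of_service, has_dependents,
                bah_rate, special_pays=()):
    # Pick the highest basic-pay rate the member's longevity qualifies for.
    rate = max(p for (g, y), p in BASIC_PAY.items()
               if g == grade and y <= years_of_service)
    # Housing allowance varies with dependency status (invented multiplier).
    housing = bah_rate * (1.25 if has_dependents else 1.0)
    return rate + housing + sum(special_pays)

pay = monthly_pay("E-4", 3, True, bah_rate=1200.0, special_pays=(225.0,))
print(pay)
```

Any pay change described later in the report (promotion, new dependent, special duty) corresponds to changing one of these inputs and recomputing.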
The Defense Manpower Data Center (DMDC) is the central DOD source to identify, authenticate, authorize, and provide information on DOD-affiliated personnel. As such, it is the one central access point for information and assistance on DOD entitlements, benefits, and medical readiness for uniformed service members, veterans, and their families. Major DMDC databases include information on pay; accessions and examinations (USMEPCOM); authorizations and requirements; military units and addresses; special purposes (for example, contingency operations); Joint Command duty assignments; the retirement point repository; and mobilization/activations. DMDC has five major operating locations, including the DOD Center in Monterey Bay, California; the Washington, D.C. area; and overseas locations in South Korea, Europe, and Southwest Asia. In addition, DMDC has 2,145 issuing stations (badge offices, etc.) at 1,400 worldwide locations. Computer support for the DOD Center in Monterey Bay is provided by the DOD Center in Monterey Bay and the Naval Postgraduate School in Monterey, California. Other computer support offices are located in Arlington, Virginia, and Auburn Hills, Michigan. Payments of basic pay and allowances to service members are made via electronic funds transfer (EFT) through DJMS-AC. At the local level, DMPOs are required to review any substantial changes (defined as +/- 150 percent) in payroll data daily. The intent of this review is to identify data input errors. In addition, a day before payroll is processed, DFAS-IN conducts a pre-payroll review. This is a manual process in which the DFAS-IN Military Pay Operations (Mil Pay Ops) staff obtain a sample of Leave and Earnings Statements from DJMS-AC and trace the information in the statements to the relevant table outside of DJMS-AC. The purpose of this review is to identify potential system problems with the pay information used to calculate the pay amount.
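The daily review of substantial changes (defined as +/- 150 percent) described above can be sketched as a simple screen over account-level pay amounts. Account identifiers, amounts, and the handling of new accounts here are hypothetical, not DMPO procedure.

```python
# Sketch of the daily "substantial change" review described above:
# flag any account whose pay moved by more than 150 percent against
# the prior period. Field names and data are hypothetical.

def flag_substantial_changes(prior, current, threshold=1.50):
    """prior/current: dicts mapping account id -> pay amount."""
    flagged = []
    for acct, amount in current.items():
        base = prior.get(acct)
        if base is None or base == 0:
            flagged.append(acct)   # new or zero-base account: review manually
            continue
        change = (amount - base) / base
        if abs(change) > threshold:
            flagged.append(acct)
    return flagged

prior = {"A1": 2500.0, "A2": 3100.0, "A3": 1800.0}
current = {"A1": 2500.0, "A2": 8000.0, "A3": 1850.0, "A4": 2200.0}
print(flag_substantial_changes(prior, current))
```

A screen like this catches gross data-entry errors (a misplaced decimal, a duplicated payment) but not small, plausible-looking errors, which is why the manual pre-payroll sample review also exists.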
After completing this review, DFAS-IN then sends the DJMS-AC totals to the certifying official for certification. The certification process checks the DJMS-AC totals against the disbursing file totals. The certifying official sends an authorization and voucher to the Disbursing Office requesting release of payment. The DFAS-IN Disbursing Office uses the Army's disbursing system to send electronic payments to the Federal Reserve Banks, which in turn distribute payments to each service member's bank account. DFAS-IN's Accounting Division performs a number of steps to transfer payroll transactions from DJMS-AC to the Army's general ledger accounting system. The Accounting Division receives files from Mil Pay Ops that contain the computed total pay costs and disbursements for the payroll transactions, which are recorded as summary records by budget activity (e.g., officer, enlisted, and cadet pay). An individual soldier's payroll information is not recorded in the accounting system. Military pay accounting data is uploaded from the Army's general ledger accounting system into the Defense Departmental Reporting System-Budgetary (DDRS-B) for budgetary reporting and then to DDRS-Audited Financial Statement (DDRS-AFS) for financial statement reporting. Figure 2 provides a high-level illustration of the Army's complex environment for establishing military personnel and payroll records and processing military pay. The Standard Financial System (STANFINS) is the Army standard general ledger system currently used for recording military payroll. iPERMS does not feed DJMS-AC. Army Regulation No. 600-8-104, Military Personnel Information Management/Records, establishes requirements for the Army's Official Military Personnel File. The Army deployed iPERMS in 2007, and certain MilPer (Military Personnel) Messages and a Department of the Army memorandum indicate that iPERMS is intended to serve as the system of record for the Official Military Personnel File.
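The certification step described above, checking DJMS-AC computed totals against the disbursing file before payment is released, is at bottom a totals reconciliation. The record layouts, field names, and tolerance in this sketch are hypothetical, not actual DFAS-IN file formats.

```python
# Sketch of the certification check described above: payroll is released
# only if the pay-system total agrees with the disbursing-file total.
# Record layouts and tolerance are hypothetical.

def certify(djms_records, disbursing_records, tolerance=0.005):
    pay_total = sum(r["net_pay"] for r in djms_records)
    disb_total = sum(r["amount"] for r in disbursing_records)
    ok = abs(pay_total - disb_total) <= tolerance
    return ok, pay_total, disb_total

djms = [{"net_pay": 2431.10}, {"net_pay": 3188.45}]
disb = [{"amount": 2431.10}, {"amount": 3188.45}]
ok, pay_total, disb_total = certify(djms, disb)
print(ok, pay_total, disb_total)
```

Note that this check only proves the two systems agree in aggregate; it cannot detect an error that appears identically in both files, which is one reason transaction-level support matters for audit.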
In addition, the Army is in the process of developing the Integrated Personnel and Pay System-Army (IPPS-A), which is targeted for completion in 2017. The Army could not readily identify a complete population of Army payroll accounts for fiscal year 2010, given existing procedures and systems. The Army and the Defense Finance and Accounting Service in Indianapolis (DFAS-IN) did not have an effective, repeatable process for identifying the population of active duty payroll accounts. In addition, the Defense Manpower Data Center (DMDC) did not have an effective process for comparing military pay account files to military personnel files to identify a valid population of military payroll transactions. For example, it took 3 months and repeated attempts before DFAS-IN could provide a population of service members who received active duty Army military pay in fiscal year 2010. Similarly, it took DMDC over 2 months to compare the total number of fiscal year 2010 active duty payroll accounts to its database of personnel files. Standards for Internal Control in the Federal Government requires all transactions and other significant events to be clearly documented and the documentation readily available for examination. In addition, these ineffective processes are not in accord with DOD's own guidance or financial audit guidance. DOD's Financial Improvement and Audit Readiness (FIAR) Guidance sets out key tasks essential to achieving audit readiness, including defining and identifying the population of transactions for audit purposes. The GAO/PCIE Financial Audit Manual (FAM) provides guidance concerning typical control activities, such as independent checks on the validity, accuracy, and completeness of computer-processed data. One example of a control in this area includes comparing data from different sources for accuracy and completeness.
Without effective processes for identifying a complete population of Army military pay records and comparing military pay accounts to personnel records, the Army will have difficulty meeting DOD's 2014 audit readiness goal and its 2017 goal for a complete set of auditable financial statements. DFAS-IN made three attempts from November 2010 through early January 2011 to provide us a Defense Joint Military Pay System-Active Component (DJMS-AC) file extract of Army service members who received active duty pay in fiscal year 2010. The first attempt included 11,940 duplicate pay accounts, and the total number of pay accounts included in the second attempt increased by 28,035 records over the first attempt, necessitating a third attempt to establish the population of fiscal year 2010 active duty pay records. We requested that DMDC compare the results of DFAS-IN's third attempt to identify the population of Army fiscal year 2010 payroll accounts against DMDC's compilation of monthly active duty payroll data that it received from DFAS-IN. Of the 677,024 Army active duty pay accounts, per DJMS-AC, we were able to reconcile all but 1,025 pay accounts (less than 1 percent of the total active duty pay accounts, which is not considered material) to pay account data that DFAS-IN had previously provided to DMDC. According to DMDC, these differences related primarily to personnel who had either left or were scheduled to leave the service, were reserve component soldiers released from active duty, or were soldiers who had died during fiscal year 2010. For these reasons, the service members were not included in the personnel file on September 30, 2010, that DMDC used for our initial comparison. We confirmed six duplicate SSNs in personnel records with the Social Security Administration and referred these records to DMDC and the Army for further research and appropriate action. DMDC attempted to complete our requested comparison of active duty Army pay accounts to military personnel records in January 2011, but was unable to complete the reconciliation until early March 2011. DMDC officials told us that the reasons for the delays included mainframe computer issues, staff illness and turnover, and management data quality reviews of the file comparison results, including additional file comparisons to resolve differences. Standards for Internal Control in the Federal Government requires all transactions and other significant events to be clearly documented and the documentation readily available for examination. As discussed later in this report, we were unable to verify the validity of the records. Further, we did not attempt to reconcile military payroll amounts to the related disbursements because an Office of the Secretary of Defense (Comptroller) and Chief Financial Officer (OUSD(C)) contractor was in the process of performing a pilot reconciliation of payroll to disbursement data. DOD's Financial Improvement and Audit Readiness (FIAR) Guidance states that being able to provide transaction-level detail for an account balance is a key task essential to achieving audit readiness. At the time we initiated our audit, Army officials told us that they had not yet focused on this area in their audit readiness efforts because the target date for military pay was not until the first quarter of fiscal year 2015. (Subsequent to this discussion, the Secretary of Defense issued a memo accelerating the Statement of Budgetary Resources audit readiness goal from 2017 to 2014.) The Army's inability to readily provide a population of military pay accounts impeded our efforts to accomplish our audit objectives and, if not effectively addressed, will impede the Army's ability to meet DOD's new Statement of Budgetary Resources audit readiness goal of September 30, 2014.
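The account-to-personnel comparison described above is, at bottom, a set reconciliation: find pay accounts with no matching personnel record, personnel records with no pay account, and duplicate identifiers within a file. The SSN values and data structures in this sketch are hypothetical, not actual DMDC or DFAS-IN layouts.

```python
# Sketch of reconciling a pay-account population against personnel
# records, as described above. Identifiers are hypothetical.

from collections import Counter

def reconcile(pay_ssns, personnel_ssns):
    pay, personnel = set(pay_ssns), set(personnel_ssns)
    return {
        # pay accounts with no matching personnel record
        "pay_without_personnel": sorted(pay - personnel),
        # personnel records with no pay account
        "personnel_without_pay": sorted(personnel - pay),
        # duplicate SSNs within the personnel file itself
        "duplicate_personnel_ssns": sorted(
            s for s, n in Counter(personnel_ssns).items() if n > 1),
    }

pay_ssns = ["111", "222", "333", "444"]
personnel_ssns = ["111", "222", "333", "555", "555"]
result = reconcile(pay_ssns, personnel_ssns)
print(result)
```

A repeatable process of this shape, run routinely rather than assembled on request, is essentially what the report finds missing: the comparison itself is mechanically simple once both populations can be extracted.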
Without an effective process for confirming that the Army’s active duty payroll population reconciles to military personnel records, the Army’s efforts to meet DOD’s Statement of Budgetary Resources auditability goal of September 30, 2014, will be impeded. DMDC and other DOD agencies use the Navy Postgraduate School mainframe computer to support their activities and share data processing priorities. The Army does not have an efficient or effective process or system for providing documentation that supports payments for Army military payroll. For example, DFAS-IN had difficulty retrieving and providing usable Leave and Earnings Statements files for our sample items, and the Army and DFAS were unable to provide personnel and finance documents to support our statistical tests of all 250 service members’ pay accounts for fiscal year 2010. Standards for Internal Control in the Federal Government and DOD’s FIAR Guidance require audited entities to document transactions and events and assure that supporting documentation can be identified, located, and provided for examination. In addition, DOD Regulation 7000.14-R, Financial Management Regulation (FMR), requires the military components to maintain documentation supporting all data generated and input into finance and accounting systems or submitted to DFAS. Further, DOD’s Financial Improvement and Audit Readiness (FIAR) Guidance states that identifying and evaluating supporting documentation for individual transactions and balances, as well as identifying the location and sources of supporting documentation and confirming that appropriate supporting documentation exists, is a key audit readiness step. However, because the Army was unable to provide documents to support reported payroll amounts, we were unable to determine whether the Army’s payroll accounts were valid, and we were unable to verify the accuracy of payments and reported active duty military payroll. 
Further, because military payroll is significant to the financial statements, the Army will not be able to pass an audit of its Statement of Budgetary Resources without resolving these control weaknesses. The following discussion summarizes the problems with the Army’s processes related to military pay audit readiness. DFAS-IN staff experienced difficulty and delays in providing usable Leave and Earnings Statement files to support our testing of Army military payroll. We selected a sample of 250 service members and requested the relevant Leave and Earnings Statement files for fiscal year 2010. After multiple discussions and requests, we ultimately obtained usable Leave and Earnings Statement files for our sample items—5 weeks after our initial request. DFAS-IN took over 2 weeks to obtain the initial set of Leave and Earnings Statement files because it retrieves the files from two areas of the Defense Joint Military Pay System-Active Component (DJMS-AC). The active DJMS-AC database holds the current month plus the previous 12 months’ data; older data are archived. When we requested Leave and Earnings Statement files for fiscal year 2010 in April 2011, a portion of these files had been archived and had to be retrieved from the archived database. In addition, the first set of Leave and Earnings Statement files that DFAS-IN provided included statements outside the requested fiscal year 2010 timeframe of our audit, thus we had to request a new set of files. It took over 1 week, including our data quality review, to obtain the second set of Leave and Earnings Statement files consisting of 445 separate files containing monthly statements for the 250 service member pay accounts in our sample. We determined that the Leave and Earnings Statements for an individual service member generally were in two or more of the files provided. 
Consequently, we had to combine these files into a format with each service member’s Leave and Earnings Statement files grouped together to include all of the pay and allowance information for the service members in our sample. This combining and formatting required 2 additional weeks. Although the Army deployed the Interactive Personnel Management System (iPERMS) as the Army’s Official Military Personnel File in 2007 and the requirements for assuring that adequate supporting documentation is available for audit and examination are clearly defined, the Army did not have procedures in place to assure that its military pay transactions were adequately supported and that the supporting documentation could be readily retrieved and provided for financial audit purposes. Standards for Internal Control in the Federal Government requires internal control and all transactions and other significant events to be clearly documented and the documentation readily available for examination. DOD Regulation 7000.14-R, Financial Management Regulation (FMR), requires the military components to maintain documentation supporting all data generated and input into finance and accounting systems or submitted to DFAS. This regulation also requires the components to ensure that audit trails are maintained in sufficient detail to permit tracing of transactions from their sources to their transmission to DFAS. Audit trails are necessary to demonstrate the accuracy, completeness, and timeliness of transactions as well as to provide documentary support for all data generated by the component and submitted to DFAS for recording in the accounting systems and use in financial reports. Further, DOD’s FIAR Guidance states that identifying and evaluating supporting documentation for individual transactions and balances, as well as the location and sources of supporting documentation and confirming that appropriate supporting documentation exists, is a key audit readiness step. 
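The regrouping work described above, collecting each service member's monthly statements from files in which they arrived scattered, amounts to a group-by pass over the records. The record layout (member id, month, statement) in this sketch is hypothetical, not the actual DJMS-AC extract format.

```python
# Sketch of regrouping Leave and Earnings Statement records that arrive
# scattered across many files, so each member's monthly statements sit
# together in chronological order. Record layout is hypothetical.

from collections import defaultdict

def group_statements(files):
    """files: iterable of record lists; each record is a
    (member_id, month, statement) tuple."""
    by_member = defaultdict(list)
    for records in files:
        for member_id, month, statement in records:
            by_member[member_id].append((month, statement))
    # Sort each member's statements chronologically.
    for statements in by_member.values():
        statements.sort()
    return dict(by_member)

file_a = [("M1", "2009-10", "les-oct"), ("M2", "2009-10", "les-oct")]
file_b = [("M1", "2009-11", "les-nov")]
grouped = group_statements([file_a, file_b])
print(grouped["M1"])
```

The point of the sketch is that the two weeks of manual combining GAO describes is work a pay system able to export per-member histories directly would make unnecessary.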
Without the capability to readily locate and provide supporting documentation for military payroll transactions within a short timeframe, the Army's ability to pass a financial statement audit will be impeded. A sample item constitutes a soldier's pay account for fiscal year 2010, including reported Leave and Earnings Statements for all 12 months of the fiscal year. The Army provided complete support for 2 of our 250 sample items, partial support for 3 sample items, and no support for the remaining 245 sample items. Our review of the partial documentation provided for 3 sample items showed that the Army was unable to provide supporting documentation for common elements of its military pay, including basic allowance for housing, cost of living allowance, hardship duty pay-location, and hostile fire/imminent danger pay. Specifically, the Army was unable to provide supporting documentation for basic allowance for housing for one service member. In general, a service member on active duty is authorized a housing allowance based on the member's grade, dependency status, and location. Basic allowance for housing is based on the median housing costs and is paid independently of the service member's actual housing costs. At the conclusion of our fieldwork, a DFAS-IN official told us they had requested this documentation from the National Archives and Records Administration (NARA); however, they had not yet received it. For two sample items, the Army did not provide adequate documentation that the two service members were appropriately paid for clothing allowance. A clothing allowance is paid to enlisted service members annually in the anniversary month each year after they received their first clothing allowance. At the end of our fieldwork, DFAS-IN had not provided an explanation as to why these service members received their clothing allowance in a month other than their anniversary date.
The Army was unable to provide supporting documentation for one service member residing within the continental United States receiving a cost-of-living allowance. A cost-of-living allowance for soldiers stationed in the United States is a supplemental allowance designed to help offset higher prices in high-cost locations. At the end of our field work, a DFAS-IN official told us they had requested the documentation from NARA; however, they had not yet received it. Finally, for two service members, the Army was unable to provide supporting documentation for hardship duty pay and hostile fire/imminent danger pay. Service members are entitled to hardship duty pay for location assignments when there is a permanent change of station duty or temporary/deployed/attached duty of over 30 days in a specific location. Additionally, a service member is entitled to hostile fire/imminent danger pay when, as certified by the appropriate commander, the member is (1) subject to hostile fire or explosion of a hostile mine; (2) on duty in an area close to hostile fire incidents and the member is in danger of being exposed to the same dangers experienced by other service members; or (3) killed, injured, or wounded by hostile fire and the service member is on official duty in a designated imminent danger pay area. At the end of our field work, the Army was unable to provide adequate support for the dates that each of these service members reported for duty at the specified location which triggered the start of these two types of pay, and it was unable to provide documentation that one service member had been ordered to report to duty in the designated location. As shown in figure 3, the Army provided complete documentation for soldier pay accounts associated with sample items #2 and #4, but it was unable to provide complete documentation for sample items #1, #3, and #5. 
Further, after 6 months, the Army was still unable to provide any documentation for the remaining 245 pay account sample items. One of the reasons the Army was unable to provide supporting documentation is that it does not have a centralized repository for pay- affecting documents. Army personnel and finance documentation supporting basic pay and allowances resides in numerous systems, and original hard copy documents are scattered across the country—at hundreds of Army units and NARA federal records centers. According to Army and DFAS-IN officials, there are at least 45 separate systems that the Army uses to perform personnel and pay functions with no single, overarching personnel system. Although these systems contain personnel data on military members and their dependents and feed these data to DJMS-AC, the systems do not contain source documents. Army Regulation No. 600-8-104, Military Personnel Information Management/Records, establishes requirements for the Army’s Official Military Personnel File. Army policies indicate that iPERMS is intended to serve as the system of record for the Official Military Personnel File. The Army deployed iPERMS in 2007 and designated iPERMS as the Army’s Official Military Personnel File. However, when we attempted to find supporting documents in iPERMS, we found that this system had not been consistently populated with the required service member documents, resulting in incomplete personnel records. For example, when attempting to test our sample, we discovered that documents, such as orders to support a special duty assignment, permanent change of station orders, and release or discharge from active duty, that should have been in iPERMS were not. 
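The gap described above, required documents missing from individual iPERMS accounts with no periodic monitoring to catch them, could be surfaced by a routine completeness scan over each member's file. The document-type names and data structures in this sketch are hypothetical examples drawn from the report's text, not the actual iPERMS schema.

```python
# Sketch of a personnel-file completeness check of the kind the report
# says was missing: verify each member's file holds every required
# document type. Names and layouts are hypothetical.

REQUIRED = {"enlistment contract", "assignment orders"}

def missing_documents(files, required=REQUIRED):
    """files: dict mapping member id -> set of document types on file.
    Returns only the members with gaps, and what they are missing."""
    return {member: sorted(required - docs)
            for member, docs in files.items()
            if required - docs}

files = {
    "M1": {"enlistment contract", "assignment orders"},
    "M2": {"enlistment contract"},   # missing assignment orders
}
print(missing_documents(files))
```

Run periodically, a report like this would give the Human Resources Command the monitoring and accountability mechanism over local installation offices that GAO found absent.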
The Army has designated the Human Resources Command as the owner of iPERMS; however, local installation personnel offices across the country are responsible for entering most documents into individual service member iPERMS accounts, and the Army has not established a mechanism for periodic monitoring, review, and accountability of iPERMS to ensure that personnel files are complete. For example, we found that documents needed to support pay transactions are not in iPERMS because (1) Army Regulation 600-8-104 does not require the personnel form to be included and (2) the documents are finance documents and not personnel documents. Efforts to achieve auditability are further compounded by payroll system limitations. DJMS-AC, used to process Army active duty military pay, is an aging, Common Business Oriented Language (COBOL) mainframe-based system that has had minimal system maintenance because DOD planned to transition to the Forward Capability Pay System and then to the Defense Integrated Military Human Resources System (DIMHRS). DJMS-AC lacks key payroll computation abilities to pay active duty Army service members. To address these functionality limitations, DFAS has developed approximately 70 workaround procedures that are currently being used to compensate for the lack of functionality in DJMS-AC. An audit of Army military pay would necessitate an evaluation of these procedures and related controls. Another factor in the Army’s inability to provide support for military payroll is that the Army has not adequately documented its personnel processes and controls related to military pay. During our audit, we spent considerable time attempting to identify the range of personnel and finance documents that would be needed to support basic military pay and allowances reported on service members’ Leave and Earnings Statements and the appropriate office responsible for providing the documentation.
According to GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1; Washington, D.C.: November 1999), written documentation should exist covering the agency’s internal control structure and all significant transactions and events. Documentation for internal control includes identification of the agency’s activity-level functions and related objectives and control activities and should appear in management directives, administrative policies, accounting manuals, and other such guidance. The success of the Army’s efforts will be key to meeting DOD’s 2014 Statement of Budgetary Resources audit readiness goal. To meet this goal, the Army has several military pay audit readiness efforts planned or under way—most of which were begun after we initiated our audit. However, many of these efforts are in the early planning stages and will need to be carefully documented and managed to ensure effective and timely implementation. In November 2010, as part of the Office of the Under Secretary of Defense (Comptroller) (OUSD(C)) effort to provide management consulting assistance where needed on financial audit readiness, DOD provided contractor support to the Army for documenting and testing DJMS-AC application system controls. In November 2011, OUSD(C) and contractor officials provided us a status briefing that indicated DOD’s contractor expects to complete documentation and testing of DJMS-AC controls in March 2012. The November 2011 OUSD(C) status briefing also noted that the Army’s Financial Improvement Plan (FIP) team began executing discovery, documentation, and controls testing of front-end military pay business processes, including accessions; field service activities; and Military Personnel, Army appropriation budget activities.
This effort includes processes executed by the Army financial management and personnel communities, including the Army Budget Office, Office of the Deputy Chief of Staff for Personnel, Army and DFAS installation finance and military pay offices, and Army installation military personnel offices. The Army FIP effort encompasses the active Army, Army National Guard, and U.S. Army Reserve. The Army plans to complete these efforts by December 31, 2012, and implement any required corrective actions by December 31, 2013. Further, as part of the Army FIP discovery effort, Army officials told us they plan to develop a repository of military pay entitlement information by entitlement type, which includes governing laws and regulations; the necessary key supporting documentation, responsible parties, and location for retrieval; as well as the automated information systems involved and their owners. The Army plans to complete these efforts by December 31, 2013. However, it is not yet clear who will be responsible for entering pay-supporting documents in the repository and what process will be used for ensuring completeness of the files. As previously discussed, Army Regulation No. 600-8-104, Military Personnel Information Management/Records, established requirements for the Official Military Personnel File, but the regulation did not include requirements for ensuring that personnel documents are centrally located, retained in the service members’ Official Military Personnel File, or otherwise readily accessible. The regulation also did not require that these files be monitored to ensure their completeness. Army officials told us that, in conjunction with DFAS, the Army has identified other systems for SSAE No. 16 and DFAS self-reviews. Further, the Army plans to identify by March 31, 2012, all systems that have a material impact on the military pay processes and require Federal Information System Controls Audit Manual (FISCAM) assessments.
The Army intends that all required reviews will be completed by December 31, 2013. Additionally, the Army is working with DFAS-IN to document processes and perform control testing of payroll accounting, referred to by the Army as back-end processes. The Army expects to implement all corrective actions on these controls by December 31, 2012. As a result of our work, Army and DFAS-IN officials told us they plan to develop a matrix of personnel documents that support military pay and allowances and identify officials responsible for providing this documentation. Army Deputy Chief of Staff, G-1, officials plan to work with Integrated Personnel and Pay System-Army (IPPS-A) team members to determine how IPPS-A will incorporate or link to this information. Effective development of such a matrix will be critical to ensuring that payroll transactions are supported and, therefore, audit ready. The need for such a matrix became apparent nearly 1 year ago, but the Army has not yet completed such a matrix or identified personnel responsible for providing needed documents. Further, it has not established a central repository for these documents, or designated iPERMS as the official repository, and it has not established a mechanism for periodic monitoring, review, and accountability to ensure that the central repository will be effectively maintained. In addition, the Army is in the process of developing IPPS-A. However, the current targeted IPPS-A implementation date of 2017 will require the Army to rely on its current systems for purposes of meeting DOD’s Statement of Budgetary Resources audit readiness date of September 30, 2014. IPPS-A is planned to be developed and implemented in two increments, with multiple releases. The Army plans to employ 14- to 18-month development cycles for each release, with the goal of fielding capabilities every 12 months.
The Army intends for Increment I to consist of a trusted data source of soldier personnel and human resource data and to provide the foundation for Increment II, which is expected to provide integrated personnel and pay services, to be developed and implemented across multiple releases. In response to our findings in this report, Army IPPS-A officials told us that they have recently begun efforts to determine how IPPS-A will link to personnel records that will be needed to support Army military payroll amounts. The Army’s strategy is for each release of IPPS-A to incrementally build upon the prior release’s design and capability, to ultimately contribute toward the Army’s goal of reaching financial auditability by fiscal year 2017. Because implementation of IPPS-A is not targeted for completion until 2017, a slippage in the implementation date could impede the Army’s efforts to support DOD’s financial statement audit readiness goal of September 30, 2017. Without timely and effective efforts to establish an electronic repository of pay-supporting documents and ensure that the documentation is complete, IPPS-A will not be able to fully support the Army’s audit readiness efforts. Active Army military payroll, reported at $46.1 billion for fiscal year 2010, is material to all of the Army’s financial statements and, as such, will be significant to the Army’s audit readiness goals for the Statement of Budgetary Resources. The Army has several military pay audit readiness efforts that are planned or under way. Timely and effective implementation of these efforts could help reduce the risk related to DOD’s 2014 Statement of Budgetary Resources audit readiness goal. However, most of these actions are in the early planning stages.
Moreover, these initiatives, while important, do not address (1) establishing effective processes and systems for identifying a valid population of military payroll records; (2) ensuring that Leave and Earnings Statement files and supporting personnel documents are readily available for verifying the accuracy of payroll records; (3) ensuring that key personnel and other pay-related documents that support military payroll transactions are centrally located, retained in service member Official Military Personnel Files, or otherwise readily accessible; and (4) requiring the Army’s Human Resources Command to periodically review and confirm that service member Official Military Personnel File records in iPERMS or other master personnel record systems are consistent and complete to support annual financial audit requirements. These same issues, if not effectively resolved, could also jeopardize the 2017 goal for audit readiness on the complete set of DOD financial statements. In addition, the Army’s military pay auditability weaknesses have departmentwide implications because the other military components, such as the Air Force and the Navy, share some of the same military pay process and system risks as the Army. Going forward, focused and committed leadership and knowledgeable staff in key functional areas, including personnel, systems, military payroll, and accounting, will be essential to effective implementation of military pay audit readiness efforts. To help the Army develop the processes and controls necessary to achieve financial statement audit readiness for military pay, we are making the following four recommendations. We recommend that the Secretary of the Army direct the Assistant Secretary of the Army for Financial Management and Comptroller to work with Army Personnel (G-1), DFAS-IN, and audit readiness officials to document and implement a process for identifying and validating the population of payroll transactions for fiscal year periods, at a minimum.
We also recommend that these officials identify key finance (i.e., pay-affecting) documents that support military payroll transactions and develop and implement procedures for maintaining them, including responsibility for coordination with Army Personnel (G-1) and audit readiness officials. In addition, we recommend that the Secretary of the Army direct the Assistant Secretary of the Army for Manpower and Reserve Affairs to revise AR No. 600-8-104, Military Personnel Information Management/Records, to require that key personnel and other pay-related documents that support military payroll transactions are centrally located, retained in the service members’ Official Military Personnel File, or otherwise readily accessible, considering first using the Interactive Personnel Management System (iPERMS) for this purpose. Finally, we recommend that the Secretary of the Army direct that the Army’s Human Resources Command periodically review and confirm that service member Official Military Personnel File records in iPERMS or other master personnel record systems are consistent and complete to support annual financial audit requirements. We received written comments from the Secretary of the Army on March 12, 2012, stating that the Army agreed with our four recommendations. The Army’s letter also states that our work has been extremely helpful in identifying the need to have consistent agreed-upon rules for documenting files required to support audits of military pay and cites several efforts under way to improve the auditability of its military pay. The Army’s comments are reprinted in appendix II. The Army’s letter states that it believes we found no significant issues in our review of the military pay accounts, but our report is very clear in highlighting the significance of the issues with the Army’s military payroll.
For instance, our report states that without effective processes for identifying a complete population of Army military pay records and comparing military pay accounts to personnel records, the Army will have difficulty meeting DOD’s 2014 Statement of Budgetary Resources audit readiness goal and its 2017 goal for a complete set of auditable financial statements. In addition, because the Army was unable to provide documents to support reported payroll amounts, we were unable to determine whether the Army’s payroll accounts were valid, and we were unable to verify the accuracy of the payments and reported active duty military payroll. Further, in responding to our first recommendation that the Army document and implement a process for identifying and validating the population of payroll transactions for fiscal year periods, the Army stated that it validates personnel and payroll records monthly in real time and will evaluate the value of retaining personnel files for prior years. If the monthly pay-to-personnel comparison is a control procedure that the Army performs regularly and intends for the auditors to rely on, the process and results must be documented and retained for the auditor to assess and test beyond the end of the fiscal year, as we recommended. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense (Comptroller/Chief Financial Officer); the Deputy Chief Financial Officer; the Director, Financial Improvement and Audit Readiness; the Secretary of the Army; the Assistant Secretary of the Army for Financial Management and Comptroller; the Assistant Secretary of the Army for Manpower and Reserve Affairs; the Director of Army Finance Command; the Directors of DFAS and the DFAS-Indianapolis Center; the Director of the Defense Manpower Data Center; the Director of the Office of Management and Budget; and appropriate congressional committees.
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9869 or khana@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This audit was initiated under our mandate to audit the consolidated financial statements of the United States government. Our objectives were to perform the basic audit procedures necessary to reach conclusions about the validity and accuracy of the Army’s active duty military payroll. Those basic audit procedures included (1) identifying a valid population of military payroll transactions and (2) testing a sample of payroll transactions for validity and accuracy. To identify the population of Army active duty payroll transactions, we obtained Army active duty military payroll records from the Defense Finance and Accounting Service, Indianapolis (DFAS-IN). DFAS-IN processes military payroll for the Army. At our request, DFAS-IN made three attempts from November 2010 through January 2011 to provide us a complete file of service members who were paid in fiscal year 2010. The first attempt included 11,940 duplicate pay accounts, and the total number of pay accounts included in the second attempt increased by 28,035 records over the first attempt, necessitating a third request for the population of fiscal year 2010 active duty pay records. To obtain assurance that the overall population of Army fiscal year 2010 payroll accounts matched the sum of monthly payroll accounts, we requested that the Defense Manpower Data Center (DMDC) compare the results of our third request for the population of Army fiscal year 2010 payroll accounts against its compilation of monthly active duty payroll data that it received from DFAS-IN.
We were able to reconcile all but 1,025 pay accounts (less than 1 percent of the total, a difference that is not considered material). We did not reconcile military payroll amounts to the related disbursements because an Office of the Under Secretary of Defense (Comptroller) and Chief Financial Officer (OUSD(C)) contractor was in the process of performing a pilot reconciliation. In addition, because the Army does not have an integrated military personnel and payroll system, we worked with DMDC to match payroll accounts to personnel records to determine whether the population of Army military payroll accounts was in agreement with the population in the DMDC database. We relied on work performed by DMDC because we reviewed its quality control procedures and found them to be adequate for our purposes. We compared the total number of records in DFAS-IN’s population and DMDC’s database for the service members who received active duty Army military pay in fiscal year 2010. We did not separately validate Army personnel file data. DMDC’s file comparison of Army active duty pay accounts to military personnel records identified 67,243 pay accounts that were not matched to a file of military personnel records as of September 30, 2010. We asked DMDC to perform more detailed comparisons of these differences. These differences related primarily to personnel who were not active Army service members because they had either left or were scheduled to leave the service, were reserve component soldiers released from active duty, or were soldiers who had died during fiscal year 2010. Finally, we confirmed with the Social Security Administration six duplicate SSNs in personnel records with different names and referred these records to DMDC and the Army for further research and appropriate action.
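The population comparisons described above amount to checking two record sets against each other for duplicates and unmatched entries. The following sketch is purely illustrative and is not GAO's or DMDC's actual procedure; the record layouts and identifiers are hypothetical. It shows only the general kind of reconciliation involved:

```python
# Illustrative sketch (hypothetical identifiers): comparing a payroll
# extract against a personnel roster to flag duplicate accounts and
# records that appear in only one of the two files.
from collections import Counter

def compare_populations(payroll_ids, personnel_ids):
    """Return duplicate payroll IDs and IDs present in only one file."""
    counts = Counter(payroll_ids)
    duplicates = sorted(i for i, n in counts.items() if n > 1)
    payroll_set, personnel_set = set(payroll_ids), set(personnel_ids)
    return {
        "duplicates": duplicates,
        "pay_without_personnel": sorted(payroll_set - personnel_set),
        "personnel_without_pay": sorted(personnel_set - payroll_set),
    }

result = compare_populations(
    payroll_ids=["A1", "A2", "A2", "B7", "C3"],
    personnel_ids=["A1", "A2", "C3", "D9"],
)
# "A2" is duplicated; "B7" was paid but has no personnel record, and
# "D9" has a personnel record but received no pay.
```

A payroll account with no matching personnel record, like "B7" in this hypothetical, is the kind of difference that required the detailed follow-up comparisons described above.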
To address our second objective, we documented key controls, laws, and pay regulations and used the population of matched personnel and payroll records to select a statistical sample of 250 soldiers for testing the accuracy and validity of Army military payroll. We obtained 12 months of fiscal year 2010 Leave and Earnings Statement files for each soldier in our sample, the most recent data available at the time, to compare with supporting Army personnel documents containing such information as military orders, special duty and expertise entitlements, marital status, and dependent information. We gained an understanding of Army processes with a focus on key steps involved in establishing military personnel records related to specific pay and allowance amounts. We also performed process walkthroughs at DFAS-IN and assessed key controls over the accuracy of payroll payments made to service members. To document the process for capturing pay-related information and setting up military personnel records, we interviewed Army Personnel (G-1), Human Resources Command, and Finance Command officials and visited a Military Entrance Processing Station in Indianapolis and a military reception battalion at Fort Jackson, South Carolina. We also interviewed finance officials at Defense Military Pay Offices at Fort Jackson, South Carolina, and Fort Carson, Colorado, to gain an understanding of how pay adjustments are initiated, input, and reviewed. We requested and obtained fiscal year 2010 monthly Leave and Earnings Statement files for the service members in our sample from DFAS-IN and requested Army personnel documents to support basic pay and allowance amounts reported on the Leave and Earnings Statements, including such information as military orders and the certifications of special duty expertise, marital status, and dependent information.
We did not plan to, nor did we, test deductions and allotments for items such as Service Member’s Group Life Insurance, Thrift Savings Plan, and TRICARE. In addition to basic pay, we planned to test base housing allowance; hazardous duty pay; hostile fire/imminent danger pay; cost-of-living allowance; military overseas housing allowance; family separation housing; temporary lodging allowance; clothing allowance; and special duty pay, such as foreign language proficiency pay and parachute (jump) pay. We reviewed documentation provided by the Army for 5 sample items and documentation contained in the Army’s Interactive Personnel Management System (iPERMS), which serves as the Army’s Official Military Personnel File. We were unable to complete our tests of active duty military payroll accuracy because of a scope limitation related to the Army’s inability to provide support for its active component military payroll transactions. In support of our objectives, we reviewed Army military personnel and payroll policies and procedures and identified sources of pay-related documentation. Throughout our work, we interviewed key Army officials in Manpower and Reserve Affairs, Human Resources Command, and Finance Command. We also interviewed DFAS-IN officials responsible for payroll functions and Office of the Under Secretary of Defense (Comptroller/Chief Financial Officer) audit readiness contractor officials. Additionally, we interviewed agency officials regarding the status of the Army’s efforts to develop the Integrated Personnel and Pay System-Army (IPPS-A). We also performed walkthroughs of DFAS-IN Military Pay Operations, accounting, disbursing, financial reporting, and related processes. We conducted this performance audit from June 2010 through March 2012 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Other matters identified in our work that merit management’s attention and correction will be reported in a separate letter to Army management. In addition to the contact named above, Gayle L. Fischer, Assistant Director; Carl S. Barden; Tulsi Bhojwani; Frederick T. Evans; Lauren S. Fassler; Wilfred B. Holloway; Sabur O. Ibrahim; John J. Lopez; Julia C. Matta, Assistant General Counsel; Sheila D. M. Miller, Auditor in Charge; Margaret A. Mills; Heather L. Rasmussen; Ramon J. Rodriguez; James J. Ungvarsky; and Matthew P. Zaun made key contributions to this report.

The Defense Finance and Accounting Service-Indianapolis (DFAS-IN) reported that fiscal year 2010 active Army military payroll totaled $46.1 billion. However, for several years, GAO and others have reported continuing deficiencies with Army military payroll processes and controls, raising questions about the validity and accuracy of reported Army military pay and whether it is auditable. The Department of Defense (DOD) has recently accelerated its Statement of Budgetary Resources audit readiness goal by 3 years, to 2014, and is required to achieve audit readiness for a full set of DOD financial statements by 2017. GAO performed basic audit procedures for the Army’s active duty military payroll to assess the Army’s ability to (1) identify a valid population of payroll transactions and (2) test a sample of payroll transactions for validity and accuracy.
GAO reviewed applicable laws and regulations, analyzed DOD and Army policies and procedures, drew a statistical sample of payroll transactions to test their accuracy and validity, and met with DOD, DFAS-IN, Army, and Defense Manpower Data Center officials. The Army could not readily identify the population of Army military payroll accounts given its existing procedures and systems. The Army and DFAS-IN did not have an effective, repeatable process for identifying the population of active duty payroll records. For example, it took 3 months and repeated attempts before DFAS-IN could provide a population of service members who received active duty Army military pay in fiscal year 2010. Further, because the Army does not have an integrated military personnel and payroll system, it was necessary to compare the payroll file to active Army personnel records. However, the Defense Manpower Data Center (DMDC), DOD’s central repository for information on DOD-affiliated personnel, did not have an effective process for comparing military pay account files with military personnel files to identify a valid population of military payroll transactions. It took DMDC over 2 months and labor-intensive research to compare and reconcile the total number of fiscal year 2010 active duty payroll accounts to its database of personnel files. DOD’s Financial Improvement and Audit Readiness (FIAR) Guidance states that identifying the population of transactions is a key task essential to achieving audit readiness. Without effective processes for identifying the population of Army military pay records and comparing military pay accounts to personnel records, the Army will have difficulty meeting DOD’s 2014 audit readiness goal for the Statement of Budgetary Resources. In addition, the Army does not have an efficient or effective process or system for providing supporting documents for Army military payroll.
For example, DFAS-IN had difficulty retrieving and providing usable Leave and Earnings Statement files, and the Army was unable to locate or provide supporting personnel documents for GAO’s statistical sample of fiscal year 2010 Army military pay accounts. GAO’s Standards for Internal Control in the Federal Government and DOD’s FIAR Guidance provide that audited entities document transactions and events and assure that supporting documentation can be identified, located, and provided for examination. Although the Army deployed the Interactive Personnel Management System (iPERMS) as the Army’s Official Military Personnel File in 2007, it had not consistently or completely populated iPERMS with personnel records. At the end of September 2011, 6 months after receiving GAO’s 250 statistical sample items, the Army and DFAS-IN were able to provide complete documentation for 2 of GAO’s sample items and provided partial documentation for 3 items, but provided no documentation for 245 of GAO’s 250 sample items. The Army has begun several military pay audit readiness efforts that, if successfully implemented, could help increase the likelihood of meeting DOD’s 2014 Statement of Budgetary Resources audit readiness goal and the 2017 mandate for audit readiness on a complete set of DOD financial statements. These efforts include documenting and testing payroll system application controls, documenting Army military pay business processes, identifying the range of supporting documents for military pay, and developing an integrated military personnel and payroll system. Most of these efforts are not yet documented and, therefore, there is no assurance that they will be implemented timely and effectively.
GAO is making four recommendations to help the Army develop the processes and controls necessary to achieve financial statement audit readiness, including identifying and validating the population of military payroll transactions and obtaining and retaining necessary pay-affecting documents. The Army concurred with GAO’s four recommendations and noted actions it is taking.
For most people and many pharmacies, filling a prescription is a matter of dispensing a commercially available drug product that has been manufactured in its final ready-to-use form. This has been particularly true in the United States since the rise of pharmaceutical manufacturing companies. In addition to meeting federal safety and efficacy requirements before a new drug is marketed, the drugs manufactured by these companies are routinely tested by FDA after marketing. According to FDA, the testing failure rate for more than 3,000 manufactured drug products sampled and analyzed by FDA since fiscal year 1996 was less than 2 percent. Drug manufacturers are also required to report adverse events associated with their drugs, such as illness and death, to FDA within specified time frames. Drug compounding, which has always been a part of the traditional practice of pharmacy, involves the mixing, combining, or altering of ingredients to create a customized medication for an individual patient. According to the American Pharmacists Association, some of the most commonly compounded products include lotions, ointments, creams, gels, suppositories, and intravenously administered fluids and medication. Some of these compounded drugs, such as intravenously administered chemotherapy drugs, are sterile products that require special safeguards to prevent injury or death to patients receiving them. For example, sterile compounding requires cleaner facilities than nonsterile compounding, as well as specific training for pharmacy personnel and testing of the compounded drug for sterility. The extent of drug compounding is unknown, but it appears to be increasing in the United States. While industry representatives, the media, and others have cited estimates for the proportion of prescription drugs that are compounded ranging from 1 percent to 10 percent of all prescriptions, we found no data supporting most estimates. 
FDA does not routinely collect data on the quantity of prescriptions filled by compounded drugs. Similarly, we found no publicly available data, either from FDA or from industry organizations, on the amount of bulk active ingredients and other chemicals that are used in drug compounding in the United States. However, many state officials, pharmacist association representatives, and other experts we interviewed reported that the number of compounded prescriptions, which had decreased when pharmaceutical manufacturing grew in the 1950s and 1960s, has been increasing over the past decade. Problems have come to light regarding compounded drugs, some of which resulted in death or serious injury, because the drugs were contaminated or had incorrect amounts of the active ingredient. Unlike drug manufacturers, which are required to report adverse events associated with the drugs they produce, pharmacies are not required by FDA to report adverse events associated with compounded drugs. Based on voluntary reporting, media reports, and other sources, FDA has become aware of over 200 adverse events involving 71 compounded products since about 1990. These incidents, including 3 deaths and 13 hospitalizations following injection of a compounded drug that was contaminated with bacteria in 2001, have heightened concern about compounded drugs’ safety and quality. In addition, a limited survey conducted by FDA’s Division of Prescription Drug Compliance and Surveillance in 2001 found that nearly one-third of the 29 sampled compounded drugs were subpotent—that is, they had less of the active ingredients than indicated. FDA and others have also expressed concern about the potential for harm to the public health when drugs are manufactured and distributed in commercial amounts without FDA’s prior approval.
While FDA has stated that traditional drug compounding on a small scale in response to individual prescriptions is beneficial, FDA officials have voiced concern that some establishments with retail pharmacy licenses might be manufacturing new drugs under the guise of drug compounding in order to avoid FDCA requirements. We found efforts at the state level and among national pharmacy organizations to potentially strengthen state oversight of drug compounding. Actions among the four states we reviewed included adopting new drug compounding regulations and random testing of compounded drugs. At the national level, industry organizations are working on standards for compounded drugs that could be adopted by states in their laws and regulations. According to experts we interviewed, uniform standards for compounded drugs could help ensure that pharmacists across states consistently produce safe, quality products. While these actions may help improve oversight, the ability of states to oversee and ensure the quality and safety of compounded drugs may be affected by their available resources and their ability to adopt new standards and enforce penalties. The four states we reviewed have taken a variety of approaches to strengthen state oversight. Missouri. The pharmacy board in Missouri has taken a different approach from other states: it is in the process of implementing random batch testing of compounded drugs. No other state has random testing, according to an NABP official. Random testing will include both sterile and nonsterile compounded drugs and the board plans on testing compounded drugs for safety, quality, and potency. A Missouri pharmacy board official said testing will include random samples of compounded drugs in stock in pharmacies in anticipation of regular prescriptions, random selection of prescriptions that were just prepared, and testing of compounded drugs obtained by undercover investigators posing as patients. 
The official added that random testing will help to ensure the safety and quality of compounded drugs and is also intended to serve as a deterrent for anyone who might consider purposely tampering with compounded prescriptions. North Carolina. North Carolina is the only state in the country that requires mandatory adverse event reporting involving prescription drugs, including compounded drugs, according to an NABP official. Regulations in North Carolina require pharmacy managers to report information to the pharmacy board that suggests a probability that prescription drugs caused or contributed to the death of a patient. This reporting system, which does not extend to incidents of illness or injury, allows the board to review all prescription-drug-related deaths and determine whether an investigation is warranted. Vermont. The pharmacy board in Vermont overhauled the state’s pharmacy rules in August 2003 to address changes in pharmacy practice, including the increase in Internet and mail-order pharmacies, according to the pharmacy board chairman. For example, the chairman reported that prior to the adoption of the new rules, Vermont had no definition of out-of-state pharmacies and no requirements for these pharmacies to have a Vermont license to do business in the state. The board chairman said that the new rule requiring licensing for out-of-state pharmacies would provide a mechanism to monitor pharmacies that ship prescription drugs, including compounded drugs, into the state. In addition, he added that the board revised the rules for compounding sterile drugs by including specifics on facilities, equipment, and quality assurance measures. Wyoming. Prior to March 2003, Wyoming did not have state laws or rules that established specific guidelines for drug compounding, aside from a definition of drug compounding, according to a pharmacy board official.
The new rules include requirements for facilities, equipment, labeling, and record keeping for compounded drugs, as well as a specific section on compounding sterile drugs. In addition, the official added that under the new rules, pharmacy technicians-in-training are no longer allowed to prepare compounded drugs, including sterile products, whose preparation is a more complex procedure requiring special equipment to ensure patient safety. At the national level, industry organizations are working on uniform practices and guidelines for compounded drugs, and a committee of national association representatives recently began work on developing a program that would include certification and accreditation for drug compounding that could be used for state oversight. Groups such as NABP concluded that state oversight of drug compounding would be strengthened if the states had uniform standards and other tools that could be adopted to address the quality and safety of compounded drugs. Several experts that we spoke with said national standards for compounding drugs that could be incorporated into state laws and regulations could help to ensure the quality and safety of compounded drugs. One expert noted that an advantage of incorporating compliance with national compounding standards into state laws is that it would be easier for states to keep up with updated standards without going through the process of legislative changes. NABP has developed and updated a Model State Pharmacy Act that provides standards for states regarding pharmacy practice. Most recently revised in 2003, the model act includes a definition of drug compounding and a section on good drug compounding practices. According to the executive director of NABP, many states have incorporated portions of the model act into their state pharmacy statutes or regulations by including similar definitions of drug compounding and components of NABP’s good drug compounding practices.
For example, officials in Missouri and Wyoming reported using the model act’s good drug compounding practices as a guideline for developing their drug compounding regulations. In addition, USP has established standards and guidelines for compounding nonsterile and sterile drug products, both of which are being updated by expert committees. An official told us that these revisions would be completed early in 2004. Recognizing that there is no coordinated national program to oversee compounding practices and that states’ oversight may vary, NABP also recently began working with other national organizations, including the American Pharmacists Association and USP, to create a steering committee to develop a national quality improvement program for compounding pharmacies and the practice of compounding. The committee, which held its second meeting in October 2003, is developing a program that is anticipated to include (1) the accreditation of compounding pharmacies, (2) certification of compounding pharmacists, and (3) requirements for compounded products to meet industry standards for quality medications. These accreditations, certifications, and product standards, once developed, could be adopted by the states and incorporated into their requirements for compounding pharmacists and pharmacies, thereby strengthening state oversight of drug compounding. Although there are several efforts by states and national organizations that may help strengthen state oversight, some states may lack the resources to provide the necessary oversight. State pharmacy board officials in three of the four states reported that resources were limited for inspections. For example, the Missouri pharmacy board director reported that pharmacy inspections typically occur every 12 to 18 months; however, an increase in complaints has resulted in less frequent routine pharmacy inspections, because investigating complaints takes priority over routine inspections.
North Carolina has six inspectors for about 2,000 pharmacies, which the state pharmacy board director said are inspected at least every 18 months. The director added that it is difficult to keep up with this schedule of routine inspections with the available resources while also investigating complaints, which take first priority. In Vermont, the pharmacy board chairman reported that, for a period of about 8 years until January 2003, pharmacy inspectors were only able to respond to complaints and not conduct routine inspections because of a shortage of inspectors. Vermont now has four full-time inspectors that cover the state’s 120 pharmacies; however, in addition to routine pharmacy inspections, the inspectors are also responsible for inspecting other facilities such as nursing homes and funeral homes. The chairman added that the board would like to have pharmacies inspected annually but it is difficult to keep up with the current schedule of inspections once every 2 years. Since drug compounding may occur in mail-order and Internet pharmacies, the compounding pharmacy may be located in a state different from the location of the patient or prescribing health professional. Three of the four states we reviewed had a large number of out-of-state pharmacies that were licensed to conduct business in those states, and inspection and enforcement activities may differ for these pharmacies. For example, Wyoming has 274 licensed out-of-state pharmacies, which is nearly twice as many as the number of in-state licensed pharmacies. The four states we reviewed said that they have authority to inspect out-of-state pharmacies licensed in their states but because of limited resources, they generally leave inspections to the state in which the pharmacy is located. Regarding enforcement authority, all four states reported having authority to take disciplinary action against out-of-state pharmacies licensed in their states. 
While the pharmacy boards in all four states we reviewed can suspend or revoke pharmacy licenses or issue letters of censure, enforcement mechanisms vary. For example, Missouri and North Carolina are not authorized to charge fines for violations; however, Wyoming can fine a pharmacist up to $2,000 and Vermont can fine a pharmacy or pharmacist $1,000 for each violation. Further, not all state pharmacy boards have the authority to take enforcement action independently. For example, in Missouri, when attempting to deny, revoke, or suspend a license through an expedited procedure, the pharmacy board must first file a complaint with an administrative hearing commission. Only after the commission determines that grounds for discipline exist may the board take disciplinary action. Pharmacy board officials reported relatively few complaints and disciplinary actions involving drug compounding. For example, of the 307 complaints against pharmacies and pharmacists received and reviewed by the Missouri board of pharmacy in fiscal year 2002, only 5 were related to drug compounding. FDA maintains that drug compounding activities are generally subject to FDA oversight, including the “new drug” requirements and other provisions of the FDCA. In practice, however, the agency generally relies on the states to regulate the traditional practice of pharmacy, including the limited compounding of drugs for the particular needs of individual patients. In recent years, the Congress has attempted to clarify the extent of federal authority and enforcement power regarding drug compounding. In 1997, the Congress passed a law that exempted drug compounders from key portions of the FDCA if they met certain criteria. These efforts, however, were nullified when the Supreme Court struck down a portion of the law’s drug compounding section as an unconstitutional restriction on commercial speech, a ruling that resulted in the entire compounding section being declared invalid.
In response, FDA issued a compliance policy guide to provide the compounding industry with an explanation of its enforcement policy, which included a list of factors the agency would consider before taking enforcement actions against drug compounders. FDA maintains that FDCA requirements, such as those regarding the safety and efficacy requirements for the approval of new drugs, are generally applicable to pharmacies, including those that compound drugs. The agency recognized in its brief submitted in the 2002 Supreme Court case that applying FDCA’s new drug approval requirements to drugs compounded on a small scale is unrealistic—that is, it would not be economically feasible to require drug compounding pharmacies to undergo the testing required for the new drug approval process for drugs compounded to meet the unique needs of individual patients. The agency has stated that its primary concern is where drug compounding is being conducted on a scale tantamount to manufacturing in an effort to circumvent FDCA’s new drug approval requirements. FDA officials reported that the agency has generally left regulation of traditional pharmacy practice to the states, while enforcing the act primarily when pharmacies engage in drug compounding activities that FDA determines to be more analogous to drug manufacturing. Federal regulatory authority over drug compounding attracted congressional interest in the 1990s, as some in the Congress believed that “clarification is necessary to address current concerns and uncertainty about the Food and Drug Administration’s regulatory authority over pharmacy compounding.” The Congress addressed this and other issues when it passed the FDA Modernization Act of 1997 (FDAMA), which included a section exempting drugs compounded on a customized basis for an individual patient from key portions of FDCA that were otherwise applicable to manufacturers. 
According to the congressional conferees, its purpose was to ensure continued availability of compounded drug products while limiting the scope of compounding so as “to prevent manufacturing under the guise of compounding.” In order to be entitled to the exemption, drug compounders had to meet several requirements, including one that prohibited them from advertising or promoting “the compounding of any particular drug, class of drug, or type of drug.” This prohibition was challenged in court by a number of compounding pharmacies and eventually resulted in a 2002 Supreme Court decision holding that it was unconstitutional. As a result, the entire drug compounding section was declared invalid. However, the Court did not address the extent of FDA’s authority to regulate drug compounding. FDA issued a compliance policy guide in May 2002, following the Supreme Court decision, to offer guidance about when it would consider exercising its enforcement authority regarding pharmacy compounding. In the guide, FDA stated that the traditional practice of drug compounding by pharmacies is not the subject of the guidance. The guide further stated that FDA will generally defer to state authorities in dealing with “less significant” violations of FDCA, and expects to work cooperatively with the states in coordinating investigations, referrals, and follow-up actions. However, when the scope and nature of a pharmacy’s activities raise the kinds of concerns normally associated with a drug manufacturer and result in significant violations of FDCA, the guide stated that FDA has determined that it should seriously consider enforcement action and listed factors, such as compounding drug products that are commercially available or using “commercial scale manufacturing or testing equipment,” that will be considered in deciding whether to take action. 
Some representatives of pharmacist associations and others have expressed concern that FDA’s compliance policy guide has created confusion regarding when FDA enforcement authority will be used. For example, some pharmacy associations assert that FDA’s guidance lacks a clear description of the circumstances under which the agency will take action against pharmacies. In particular, they pointed to terms in the guide, such as “very limited quantities” and “commercial scale manufacturing or testing equipment” that are not clearly defined, and noted that FDA reserved the right to consider other factors in addition to those in the guide without giving further clarification. FDA officials told us that the guide allows the agency to have the flexibility to respond to a wide variety of situations where the public health and safety are issues, and that they plan to revisit the guide after reviewing the comments the agency received, but did not have a time frame for issuing revised guidance. In several reported court cases involving FDA’s regulation of drug compounders, the courts have generally sided with FDA. Two cases we identified involved drug compounders engaged in practices that were determined to be more analogous to drug manufacturing. In a district court case decided this year, the court upheld FDA’s authority to inspect a pharmacy specializing in compounding, noting that it believed that FDA’s revised compliance policy guide was a reasonable interpretation of the statutory scheme established by FDCA. While drug compounding is important and useful for patient care, problems that have occurred raise legitimate concerns about the quality and safety of compounded drugs and the oversight of pharmacies that compound them. However, the extent of problems related to compounding is unknown. 
FDA maintains that drug compounding activities are generally subject to FDA oversight under its authority to oversee the safety and quality of new drugs, but the agency generally relies on states to provide the necessary oversight. At the state level, our review provides some indication that at least some states are taking steps to strengthen state oversight, and national pharmacy organizations are developing standards that might help strengthen oversight if the states adopted and enforced them. However, the effectiveness of these measures is unknown, and factors such as the availability of resources may also affect the extent of state oversight. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information, please contact Janet Heinrich at (202) 512-7119. Individuals making key contributions to this testimony included Matt Byer, Lisa A. Lusk, and Kim Yamane. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Drug compounding--the process of mixing, combining, or altering ingredients--is an important part of the practice of pharmacy because there is a need for medications tailored to individual patient needs. Several recent compounding cases that resulted in serious illness and deaths have raised concern about oversight to ensure the safety and quality of compounded drugs. These concerns have raised questions about what states--which regulate the practice of pharmacy--and the Food and Drug Administration (FDA) are doing to oversee drug compounding.
GAO was asked to examine (1) the actions taken or proposed by states and national pharmacy organizations that may affect state oversight of drug compounding, and (2) federal authority and enforcement power regarding compounded drugs. This testimony is based on discussions with the National Association of Boards of Pharmacy (NABP) and a GAO review of four states: Missouri, North Carolina, Vermont, and Wyoming. GAO also interviewed and reviewed documents from pharmacist organizations, FDA, and others involved in the practice of pharmacy or drug compounding. A number of efforts have been taken or are under way both at the state level and among pharmacy organizations at the national level that may strengthen state oversight of drug compounding. Actions among the four states reviewed included adopting new regulations about compounding and conducting more extensive testing of compounded drugs. For example, the pharmacy board in Missouri is starting a program of random testing of compounded drugs for safety, quality, and potency. At the national level, industry organizations are working on standards for compounded drugs that could be adopted by the states in their laws and regulations, thereby potentially helping to ensure that pharmacies consistently produce safe, high-quality compounded drugs. While these actions may help improve oversight, the ability of states to oversee and ensure the quality and safety of compounded drugs may be affected by state-specific factors such as the resources available for inspections and enforcement. FDA maintains that drug compounding activities are generally subject to FDA oversight, including its authority to oversee the safety and quality of new drugs. In practice, however, the agency generally relies on states to regulate the limited compounding of drugs as part of the traditional practice of pharmacy. 
In 1997, the Congress passed a law exempting drug compounders that met certain criteria from key provisions of the Federal Food, Drug, and Cosmetic Act (FDCA), including the requirements for the approval of new drugs. These exemptions, however, were nullified in 2002 when the United States Supreme Court ruled part of the 1997 law to be an unconstitutional restriction on commercial speech, which resulted in the entire compounding section being declared invalid. Following the court decision in 2002, FDA issued guidance to indicate when it would consider taking enforcement actions regarding drug compounding. For example, it said the agency would defer to states regarding "less significant" violations of the Act, but would consider taking action in situations more analogous to drug manufacturing.
Medicare’s prospective payment system (PPS) provides incentives for hospitals to operate efficiently by paying them a predetermined, fixed amount for each inpatient hospital stay, regardless of the actual costs incurred in providing the care. Although the fixed, or standardized, amount is based on national average costs, actual hospital payments vary widely across hospitals, primarily because of two payment adjustments in PPS: one adjustment accounts for cost differences across patients due to their care needs, and a labor cost adjustment accounts for the substantial variation in average hospital wages across the country. The fixed amount is adjusted for these two sources of cost differences because they are largely beyond any individual hospital’s ability to control. The Medicare labor cost adjustment for a geographic area is based on a wage index that is computed using data that hospitals submit to Medicare. The wage index for an area is the ratio of the area’s average hourly hospital wage to the national average hourly hospital wage. The wage indexes ranged from roughly 0.74 to 1.5 in 2001. Only the portion of the hospital payment that reflects labor-related expenses (71 percent) is multiplied by the wage index. The rest of the payment, which covers drugs, medical supplies, and certain other non-labor-related expenses, is uniform nationwide because prices for these items are not perceived as varying significantly from area to area. The geographic area for which a wage index is calculated is supposed to represent an area where hospitals pay relatively uniform wages. If it does not, the hospitals in the area may receive a labor cost adjustment that is higher or lower than the wages paid in their area would justify. The Medicare program uses the Office of Management and Budget’s (OMB) “metropolitan/non-metropolitan” classification system to define the geographic areas used for the labor cost adjustment.
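The labor cost adjustment described above can be sketched as simple arithmetic. The function name and the $1,000 base amount below are illustrative only, and the sketch deliberately omits the other pieces of an actual PPS payment (such as the patient case-mix adjustment).

```python
def wage_adjusted_payment(base_amount, wage_index, labor_share=0.71):
    """Apply the Medicare labor cost adjustment: only the labor-related
    share of the base payment (71 percent) is scaled by the area wage
    index; the non-labor share is paid uniformly nationwide."""
    labor_portion = base_amount * labor_share * wage_index
    nonlabor_portion = base_amount * (1 - labor_share)
    return labor_portion + nonlabor_portion

# A hospital in an area at the national average wage (index = 1.0)
# receives the unadjusted amount; the 2001 index extremes of roughly
# 0.74 and 1.5 move the payment well below or above it.
print(round(wage_adjusted_payment(1000, 1.0), 2))   # 1000.0
print(round(wage_adjusted_payment(1000, 0.74), 2))  # 815.4
print(round(wage_adjusted_payment(1000, 1.5), 2))   # 1355.0
```

Note that because only 71 percent of the payment is wage-adjusted, a wage index of 1.5 raises the total payment by about 35.5 percent, not 50 percent.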
Medicare calculates labor cost adjustments for 324 metropolitan areas and 49 “statewide” non-metropolitan areas. Medicare specifies an OMB metropolitan statistical area (MSA) as a distinct region within which wages are assumed to be relatively uniform. Medicare specifies the rest of a state—all the non-MSA counties—as a single, non-metropolitan area in which hospitals are assumed to face similar average wages. These non-metropolitan areas can be quite large and not contiguous (see fig. 1). The variation in hospital wages within some Medicare geographic areas—MSAs or the non-metropolitan areas in a state—is systematic across different parts of these areas. While wages paid by hospitals are expected to vary within a labor market, such systematic variation suggests that some Medicare geographic areas include multiple labor markets within which hospitals pay different average wages. For example, average hospital wages in outlying counties of MSAs tend to be lower than average hospital wages in central counties. Average wages in non-metropolitan large towns tend to be higher than in other non-metropolitan areas within a state. Because the labor cost adjustment does not take this kind of systematic variation into account, the adjustment sometimes does not appropriately reflect the average wages that hospitals pay. Because an MSA may extend over several thousand square miles, the hospitals within an MSA may not be competing with each other for the same pool of employees. Therefore, these hospitals may need to pay varying wages to attract workers. The Washington, D.C. MSA illustrates how hospital wages in a large MSA can vary across different counties (see fig. 2). It includes hospitals located in the central city of the District of Columbia and in 18 counties in Maryland, Virginia, and West Virginia.
Hospital wages averaged $23.70 per hour in fiscal year 1997 in the District of Columbia and in most adjacent suburban Maryland and Virginia counties, but averaged $20.14 per hour in the outlying counties. Yet the labor cost adjustment for hospitals within this MSA is based on an average wage of $23.41 per hour and is the same for hospitals in all of its counties. Hospitals in central counties of an MSA typically pay higher wages than hospitals in outlying counties. Central county hospital wages ranged from 7 percent higher than outlying county hospital wages in Houston to 38 percent higher in New York City in fiscal year 1997. In most of the MSAs with the highest population, the difference was from 11 to 18 percent in fiscal year 1997. Medicare uses the same labor cost adjustment for all hospitals in the non-metropolitan areas of a state. The adjustment would be adequate for all hospitals in these sometimes vast areas if the hospitals paid similar average wages. However, we found wage variation across non-metropolitan areas that appears to be systematically related to the type of community. In three-quarters of all states, the average wages paid by hospitals in large towns are higher than those paid by hospitals in small towns or rural areas. About 38 percent of hospitals in large towns paid wages that were at least 5 percent higher than the average wage in their area, and 16 percent paid wages that were at least 10 percent higher than the area average. As a result, the Medicare labor cost adjustment for non-metropolitan areas may be based on average wages that are lower than the wages paid by large town hospitals and higher than the wages paid by hospitals in small towns and rural areas. For example, the fiscal year 2001 labor cost adjustment for non-metropolitan Nebraska was based on an average hourly wage of $17.65.
Yet Nebraska hospitals in large towns had an average wage that year that was 11 percent higher; small town Nebraska hospitals had an average wage that was 5 percent lower; and hospitals in rural areas of the state had an average wage that was 16 percent lower. The administrative process for geographic reclassification allows hospitals meeting certain criteria to be paid for Medicare inpatient hospital services as if they were located in another geographic area with a higher labor cost adjustment. The first criterion concerns the hospital’s proximity to the higher-wage “target” area. The proximity requirement is satisfied if the hospital is within a specified number of miles of the target area (15 miles for a metropolitan hospital and 35 miles for a non-metropolitan hospital) or if at least half of the hospital’s employees reside in the target area. The second criterion pertains to the hospital’s wages relative to the average wages in its assigned area and in the target area. This criterion is satisfied if the hospital’s wages are a specified amount higher than the average in its assigned area and if its wages are comparable to the average wages in the target area. Rural referral centers (RRC) and sole community hospitals (SCH) can be reclassified by meeting less stringent criteria. These hospitals receive special treatment from Medicare because of their role in preserving access to care for beneficiaries in certain areas. RRCs are relatively large rural hospitals providing an array of services and treating patients from a wide geographic area. SCHs are small hospitals isolated from other hospitals by location, weather, or travel conditions. RRCs and SCHs do not have to meet the proximity requirement to reclassify. RRCs are also exempt from the requirement that their wages be higher than the average wages in their original area.
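The two-part reclassification test just described can be sketched in code. This is an illustrative simplification: the field names are hypothetical, the two wage-ratio thresholds (passed in as parameters below) stand in for the “specified amount” and “comparable” values set in regulation, which the text does not give, and the RRC/SCH exemptions are modeled only to the extent described.

```python
def meets_proximity(distance_miles, is_metropolitan, employee_share_in_target):
    """Proximity criterion: within 15 miles (metropolitan hospital) or
    35 miles (non-metropolitan hospital) of the target area, OR at least
    half of the hospital's employees reside in the target area."""
    limit = 15 if is_metropolitan else 35
    return distance_miles <= limit or employee_share_in_target >= 0.5

def meets_wage_criterion(hosp_wage, own_area_avg, target_area_avg,
                         min_ratio_over_own_area, min_ratio_of_target):
    """Wage criterion: the hospital's wages must exceed its own area's
    average by a specified amount AND be comparable to the target area's
    average.  Both thresholds are regulatory values, supplied here as
    parameters rather than hard-coded."""
    return (hosp_wage >= own_area_avg * min_ratio_over_own_area
            and hosp_wage >= target_area_avg * min_ratio_of_target)

def may_reclassify(h, min_ratio_over_own_area, min_ratio_of_target):
    """Combine the criteria, applying the exemptions described for rural
    referral centers (RRCs) and sole community hospitals (SCHs)."""
    proximity_ok = (h["is_rrc"] or h["is_sch"]
                    or meets_proximity(h["distance_miles"],
                                       h["is_metropolitan"],
                                       h["employee_share_in_target"]))
    if h["is_rrc"]:
        # RRCs are exempt from the own-area wage comparison.
        wage_ok = h["avg_wage"] >= h["target_area_avg_wage"] * min_ratio_of_target
    else:
        wage_ok = meets_wage_criterion(h["avg_wage"],
                                       h["own_area_avg_wage"],
                                       h["target_area_avg_wage"],
                                       min_ratio_over_own_area,
                                       min_ratio_of_target)
    return proximity_ok and wage_ok
```

For example, a metropolitan hospital 10 miles from a higher-wage MSA, paying well above its own area's average and near the target area's average, would satisfy both criteria, while the same hospital 30 miles away would fail on proximity alone.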
Of the 756 hospitals that paid wages high enough to qualify for reclassification, only 310, or 41 percent, were reclassified in fiscal year 2001. More than one-quarter of these higher-wage hospitals were in large towns, and 73 percent of them were reclassified. Higher-wage hospitals in large towns are likelier to be reclassified than other higher-wage hospitals because many are RRCs, which are exempt from the reclassification proximity criterion. In contrast to the nearly three-quarters of large town higher-wage hospitals that reclassified in fiscal year 2001, about half of higher-wage hospitals in small towns and rural areas were reclassified. Almost 39 percent of the reclassified higher-wage small town and rural hospitals were exempt from the proximity criterion because they were RRCs or SCHs. Some non-reclassified, higher-wage small town or rural hospitals that were SCHs may have opted out of PPS to receive cost-based payments from Medicare, making reclassification irrelevant. Moreover, even though metropolitan area higher-wage hospitals made up 42 percent of the higher-wage hospitals, only 12 percent of them were reclassified in fiscal year 2001—a percentage far lower than that for higher-wage hospitals in other areas. Reclassified metropolitan hospitals paid wages that were about 10 percent above the average wage in their former area and roughly equal to the average wage in the new areas to which they were reclassified in fiscal year 2001. The likely reason that so few metropolitan higher-wage hospitals were reclassified is that few are close enough to a higher-wage MSA to meet the proximity criterion.
More than two-thirds of the metropolitan hospital reclassifications in fiscal year 2001 were concentrated in two areas—California and a region that includes parts of New York, Connecticut, New Jersey, and Pennsylvania—where metropolitan areas are close enough to each other that more higher-wage hospitals in these areas may be able to meet the reclassification proximity requirement. While reclassification is designed to increase payments to hospitals paying wages significantly above the average for their area, certain provisions allow some hospitals that pay lower wages to reclassify. For example, an additional 116 hospitals were reclassified for a higher wage index in fiscal year 2001, even though they paid wages that were too low to meet the wage criterion. Prior to reclassification, these non-metropolitan hospitals had average wages that were close to the area average. With reclassification, these hospitals were assigned to areas with a labor cost adjustment based on wages that averaged 8 percent higher than their own. Of the 116 hospitals that reclassified for a higher wage index in fiscal year 2001, but failed to meet the wage criterion, 89 were RRCs (see table 1). About 42 percent of these had wage costs below their statewide non-metropolitan average. The other hospitals that reclassified, but did not pay wages that met the wage criterion, include those that were part of county-wide reclassifications and those reclassified through legislation. Medicare’s physician fee schedule, which specifies the amount that Medicare will pay for each physician service, includes an adjustment to help ensure that the fees paid in a geographic area appropriately reflect the cost of living in that area and the costs associated with the operation of a practice. This geographic adjustment is a critical component of the physician payment system.
An adjustment that is too low can impair beneficiary access to physician services, while one that is too high adds unnecessary financial burdens to Medicare. Although much attention in recent months has focused on the method used to annually update the physician fee schedule, concerns have also been voiced about the appropriateness of the geographic adjustments. H.R. 4954, the Medicare Modernization and Prescription Drug Act of 2002, would require us to evaluate the methodology and data that Medicare uses to geographically adjust physician payments. We are beginning an analysis of the methodology and the available data to determine whether Medicare’s geographic adjustment appropriately reflects underlying costs and whether beneficiary access to physician services has changed in certain areas. In adjusting 2002 fees for physician services, Medicare has delineated 92 separate geographic areas. In some instances, these areas consist of an entire state. For example, physician fees are uniform across Connecticut. In other cases, a large city or group of cities within a state is classified into one geographic area and the rest of the state is classified into another. Maryland illustrates this case: Baltimore and surrounding counties are classified into one geographic area, and the rest of Maryland is classified as another. Finally, some large metropolitan areas, such as New York City and its suburban counties, are split into multiple geographic areas. Medicare’s geographic adjustments for physician fees are based on indexes that are designed to reflect cost differences among the 92 areas. 
There are three separate indexes, known as geographic practice cost indexes (GPCI), that correspond to the three components that make up Medicare’s payment for a specific service: (1) the work component, reflecting the amount of physician time, skill, and intensity; (2) the practice expense component, reflecting expenses, such as office rents and employee wages; and (3) the malpractice insurance component, reflecting the cost of personal liability insurance premiums. The overall geographic adjustment for each service is a weighted average of the three GPCIs where the weights represent the relative importance of the components for that service. Across all physician services in 1999, the average weights were approximately 55 percent for the work component, 42 percent for the practice expense component, and 3 percent for the malpractice insurance component. The GPCIs are calculated from a variety of data sources. The work GPCI is based on a sample of median hourly earnings of workers in six professional categories. Physician earnings are not used because some physicians derive much of their income from Medicare payments, and an index based on physician earnings would be affected by Medicare’s existing geographic adjustments. The work GPCI is a weighted average of the median earnings of these professions in the area and their median earnings nationwide. If the work GPCI were based solely on the median earnings in each area, physician payments would likely increase in large metropolitan areas and decrease in rural areas. The practice expense GPCI is based on wage data for various classes of workers, office rent estimates, and other information. The malpractice insurance GPCI is based on average premiums for personal liability insurance. 
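As a rough illustration of the weighted-average calculation described above, the sketch below computes an overall geographic adjustment from the three GPCIs using the 1999 average component weights cited in the text. The GPCI values and the function name are hypothetical, chosen only for this example.

```python
# Sketch of the overall geographic adjustment described in the text:
# a weighted average of the three GPCIs. The 1999 average component
# weights (55/42/3 percent) are from the text; the example GPCI
# values below are made up for illustration.

WEIGHTS = {"work": 0.55, "practice_expense": 0.42, "malpractice": 0.03}

def geographic_adjustment(gpcis: dict) -> float:
    """Weighted average of the three GPCIs for a service."""
    return sum(WEIGHTS[name] * gpcis[name] for name in WEIGHTS)

# Hypothetical area: wages and rents above the national average,
# malpractice premiums below it.
example = {"work": 1.05, "practice_expense": 1.10, "malpractice": 0.90}
print(round(geographic_adjustment(example), 4))  # 1.0665
```

A value above 1.0 raises fees relative to the national rate; below 1.0 lowers them. Note that for an individual service the weights would reflect that service's own mix of work, practice expense, and malpractice costs rather than the 1999 averages used here.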
Concerns have been raised that the current geographic adjustments for physician fees do not appropriately reflect the underlying geographic variation in physicians’ costs and that, as a result, beneficiary access to services may be impaired in certain areas. Unfortunately, information on physicians’ willingness to see Medicare patients is dated—although it does not indicate access problems. Data from the 1990s show that virtually all physicians were treating Medicare beneficiaries and, if they were accepting new patients, accepted those covered by Medicare. A 1999 survey conducted by the Medicare Payment Advisory Commission (MedPAC) found that 93 percent of physicians who had been accepting new patients were continuing to do so. It is unclear whether the situation has deteriorated since 1999. MedPAC is updating its survey, and the new results may shed light on this issue. However, MedPAC’s survey results may not be able to identify access problems if they occur only in certain areas. As I said in my testimony before this Subcommittee in February, it is important to identify beneficiary access problems quickly and take appropriate action when warranted. As part of the work we are beginning on access to physician care, we will examine Medicare claims data to get the most up-to-date picture possible of access by area, by specialty, and for new versus established patients. Medicare’s PPS for inpatient services provides incentives to hospitals to deliver care efficiently by allowing them to keep Medicare payment amounts that exceed their costs, while making hospitals responsible for costs that exceed their Medicare payments. To ensure that PPS rewards hospitals because they are efficient, rather than because they operate in favorable circumstances, payment adjustments are made to account for cost differences across hospitals that are beyond any individual hospital’s control. 
If these payment adjustments do not adequately account for cost differences, hospitals are inappropriately rewarded or face undue fiscal pressure. The adjustment used to account for wage differences—the labor cost adjustment—does not do so adequately because many of the geographic areas that Medicare uses to define labor markets are too large. Geographic reclassification provides relief to some hospitals that pay wages that are higher than the average in their area. Yet, other hospitals paying higher wages cannot be reclassified. Still other hospitals get a higher labor cost adjustment than is warranted by the wages they pay, and many are in rural areas and may be facing financial problems. Their labor cost adjustment, however, is not necessarily the cause of these problems. Therefore, reclassification may not be the most effective mechanism to address the financial pressure faced by these rural hospitals.

This testimony discusses Medicare program payment adjustments to hospitals and physicians that account for geographic differences in costs. Because Medicare's hospital and physician payment systems are based on national rates, these geographic cost adjustments are essential to account for costs beyond providers' control and to ensure that beneficiaries have adequate access to services. If these adjustments are not adequate, this could affect providers' financial stability and their ability or willingness to continue serving Medicare patients. Medicare's payments to hospitals vary with the average wages paid in a hospital's labor market. Yet, some hospitals believe that the labor cost adjustment applied does not reflect the average wage in their labor market area. 
Medicare's labor cost adjustment does not adequately account for geographic differences in hospital wages in some areas because a single adjustment is applied to all hospitals in an area, even though it may encompass multiple labor markets or different types of communities within which hospitals pay significantly different average wages. Geographic reclassification addresses some inequities in Medicare's labor cost adjustments by allowing some hospitals that pay wages sufficiently above the average in their area to receive higher labor cost adjustments. However, some hospitals can reclassify even though they pay wages that are comparable to the average in their area. To help ensure that beneficiaries in all parts of the country have access to services, Medicare adjusts its physician fee schedule on the basis of indexes designed to reflect cost differences among 92 geographic areas. The adjustment is designed to help ensure that the fees paid appropriately reflect the cost of living and operating a practice in that area.
The Mojave Desert tortoise is a relatively large reptile, with adults measuring up to 15 inches in shell length (see fig. 1). Desert tortoises live in creosote bush and Joshua tree habitats in valleys, plains, and washes at elevations generally ranging up to 4,000 feet above sea level. In these habitats, desert tortoises construct and live in burrows and spend a majority of their life below ground. Desert tortoises may live for 50 years or more in the wild, and females do not breed until they are at least 15 years old. They usually lay one or more clutches of about 6 to 8 eggs between mid-April and the first week of July. Although desert tortoises can withstand prolonged periods of drought, females may not lay eggs if forage is unavailable. Survival of juveniles is thought to be low; some researchers estimate that only 2 to 3 per 100 hatched may live to become adults. The Mojave Desert tortoise’s range lies north and west of the Colorado River in California, southern Nevada, southwestern Utah, and northwestern Arizona (see fig. 2). Apparent declines in tortoise populations have been attributed to many factors including habitat loss or degradation, drought, and predation on juvenile tortoises by ravens, coyotes, domestic dogs, and other animals (see fig. 3). According to the Service, habitat loss has occurred as a result of increasing amounts of urban development, military operations, and recreational uses such as off-road vehicle use, in the tortoise’s range. Habitat degradation has been attributed to domestic livestock grazing, particularly in livestock watering and loading areas. Other factors that may have caused population declines include mortality through vandalism or accidental road kill and removal of tortoises from their habitat for pets, food, or commercial purposes. Respiratory and shell diseases have also been observed in desert tortoise populations. 
Before a species, such as the desert tortoise, can receive protection under the Endangered Species Act, the Secretary of the Interior, through the Fish and Wildlife Service, is required to use the best available scientific and commercial data (e.g., biological or trade data obtained from scientific or commercial publications, administrative reports, maps or other graphic materials, or experts on the subject) to decide whether the species is at risk of extinction. The Endangered Species Act specifies the following five factors for identifying at-risk species, any one of which is sufficient to determine that a species qualifies for the act’s protections: present or threatened destruction, modification, or curtailment of a species’ habitat or range; overuse for commercial, recreational, scientific, or educational purposes; disease or predation; inadequacy of existing regulatory mechanisms; or other natural or manmade factors affecting a species’ continued existence. Once the Service determines that a species should receive the act’s protection, it can list the species as threatened or endangered. As of July 2002, 517 animal species and 744 plant species were listed as threatened or endangered in the United States. The act prohibits the “taking” of any listed species of animal and defines “take” as to harass, harm, pursue, shoot, wound, kill, trap, capture, or collect, or to attempt to engage in any such conduct. However, under the act the Service may issue permits that allow the taking of a listed species if the taking is incidental to, rather than the purpose of, an otherwise legal activity. 
In most cases, the Service must develop a recovery plan for listed species that specifies actions needed to recover the species so that it can be removed from the list of protected species under the act, or “delisted.” Federal agencies must comply with prohibitions against taking a threatened or endangered species and must consult with the Service to determine the effect, if any, that their activities may have on listed species. In particular, federal agencies must ensure that their activities do not jeopardize the continued existence of any endangered or threatened species, or result in destruction or adverse modification of critical habitat. If any proposed activities will jeopardize a species or adversely modify its critical habitat, the Service will identify reasonable and prudent alternative activities. In addition, federal agencies have a broader directive under the act to use their authorities to carry out programs to conserve threatened and endangered species. Scientists we consulted agreed that the listing of the desert tortoise in 1990, the critical habitat designation, and the recommendations in the recovery plan were reasonable, based on the limited data available on the desert tortoise when the relevant decisions were made. These decisions were made on the basis of a variety of information, including published and unpublished research and government studies. The scientists we consulted recognized that, as is often the case when making such decisions, little published data on the species were available. However, they agreed that the Service’s decisions were appropriate and consistent with their understanding of the agency’s responsibilities under the act. The Endangered Species Act requires that listing decisions be based solely on the best scientific and commercial data available without taking into account economic factors. Although the Service is required to seek out the best data available at the time, it is not required to generate additional data. 
The listing decision for the desert tortoise was based on a variety of information, including published research, meeting and symposium proceedings, and government reports. Generally, published, peer-reviewed research is considered the most reliable information source because the research methods and conclusions have been reviewed by other scientists before publication. However, other sources such as unpublished research, meeting proceedings, and government reports can provide important information for making listing and other decisions. Moreover, several scientists said that listing decisions are often necessarily based on limited data, because funding for research on a species is typically scarce until after that species is listed. The listing decision describes how each of the five listing criteria that make a species eligible for protection under the act applies to the desert tortoise, with habitat loss and disease cited as threatening the tortoise’s continued existence. The scientists we consulted agreed that, despite the limited amount of quantitative data on the desert tortoise that was available at the time of its listing, the decision to list it as threatened was reasonable. In particular, they cited increases in threats such as diseases and habitat loss as important factors making listing necessary. In addition, researchers noted declines in numbers. For example, in the western Mojave Desert in California, researchers found that some populations decreased by as much as 90 percent between the 1970s and the mid-1990s; in Nevada, study plots also generally showed declines ranging from 10 to 39 percent since the late 1970s. The scientists we consulted also noted that desert tortoise populations appear to continue to decline. 
Some said that the listing of the desert tortoise was an unusual step by the Service because, at the time of the listing, there were still desert tortoises occurring across a large range; yet they recognized that listing it as threatened was consistent with their understanding of the act’s intent to protect species whose numbers are declining and are at risk of becoming endangered. When designating critical habitat, the Service must also use the best scientific and commercial information available. Unlike for listings, however, the Service must also consider the economic impact of the critical habitat designation. The primary source of information for the designation was a draft of the recovery plan for the tortoise that recommended protection for 14 separate areas of habitat. The Service adjusted the boundaries for these 14 areas to generally follow legal property boundaries and elevation contours in order to remove as much unsuitable habitat as possible and to reflect additional biological information. Some areas that were already protected, such as Joshua Tree National Monument and the Desert National Wildlife Range, were intended to be excluded from critical habitat because the habitat within them was already receiving protection as desert tortoise habitat. After making these adjustments, the Service identified 12 areas in its final critical habitat designation—seven in California, one in Nevada, one in Utah, and three that span more than one state—that total about 6.4 million acres (see table 1). The scientists we consulted said the size and number of the areas designated as critical habitat were reasonable given the available data, but found that the rationales for drawing the specific boundaries were not well explained in the decision documents. 
The size of the areas was determined based on estimates of how dense a desert tortoise population should be to ensure the population’s continued existence—estimates that the scientists noted were based on limited quantitative research. Several of the scientists we consulted observed that the critical habitat areas appear to have been designated where desert tortoise populations were found at the time. One scientist suggested that the designation of the areas of critical habitat may have been conservative, and that if the designation was done today, the protected areas might be even larger. In contrast with the requirements for listing and critical habitat, the Endangered Species Act does not specify the type of information that should be used to develop recovery plans. Instead, the act requires that recovery plans contain three specific elements: (1) a description of site- specific management actions necessary for the conservation and survival of the species; (2) objective, measurable criteria that, when met, would result in the removal of the species from the threatened or endangered species list, or delisting; and (3) estimates of the time and cost required to carry out the plan. However, Service policy dictates that recovery plans should seek the best information to achieve recovery of a species. While not in effect at the time the tortoise recovery team was founded, Service policy is that teams developing recovery plans should have diverse areas of expertise and may include personnel from many different organizations, including officials from other federal agencies and states, and other recognized experts. According to the Service, recovery plans impose no obligations on any agency, entity, or persons to implement the various tasks contained within them. The recovery plan for the desert tortoise addresses each of the three required elements. 
The plan describes site-specific management actions for the 14 separate areas that it recommends be established, such as discontinuing livestock grazing, constructing fencing along highways to reduce tortoise road kill, monitoring the health of desert tortoises within the areas, eliminating raven nest and perch sites, constructing signs to delineate the boundaries of the protected areas, and restricting off-road vehicle use. The plan also recommends that agencies develop programs and facilities to educate the public about the status and management needs of the desert tortoise and its habitat, and that research be conducted to monitor and guide recovery efforts. In addition, the plan includes estimates of the time frame and costs for implementation. Lastly, as the act requires, the plan describes the criteria that must be met before the desert tortoise population may be considered for delisting. The criteria are: (1) as determined by a scientifically credible monitoring plan, the population within a recovery unit must exhibit a statistically significant upward trend or remain stationary for at least 25 years (one desert tortoise generation); (2) enough habitat must be protected within a recovery unit, or the habitat and desert tortoise populations must be managed intensively enough to ensure long-term population viability; (3) provisions must be made for population management within each recovery unit so that population growth rates are stable or increasing; (4) regulatory mechanisms or land management commitments must be implemented that provide for long-term protection of desert tortoises and their habitat; and (5) the population in a recovery unit must be unlikely to need protection under the Endangered Species Act in the foreseeable future. The scientists we consulted agreed that the recommendations in the recovery plan describing site-specific management actions are reasonable, and reflect the best information available at the time. 
They observed that because much was still unknown about the severity of specific threats to desert tortoises at the time the plan was developed, its recommendations were made without establishing priorities that would reflect differences in the seriousness of the threats. For example, the plan does not differentiate among the seriousness of the threats from uncontrolled vehicle use off designated roads as compared to livestock grazing or dumping and littering. Nonetheless, the scientists commented that the plan was a significant, resource-intensive effort; indeed, one scientist commented that the expertise of the scientists comprising the recovery team was unprecedented. The team included experts in reptile and tortoise biology, desert ecosystems, population analyses, and conservation biology. The team also coordinated with numerous people and organizations, including federal and state agencies and officials, and others with expertise in desert tortoise and land management issues. Federal agencies and others have taken a variety of actions to benefit desert tortoises, reflecting recommendations in the recovery plan or efforts to minimize the effects of potentially harmful activities, but the effectiveness of those actions is not known because the necessary analyses to measure their effectiveness have not been done. Federal, state, and local agencies and others have acquired habitat, restricted certain uses, and promoted education programs about the species, and research has been conducted or is underway on such topics as the causes of disease in tortoises, their nutritional needs, and the effects of human activities on tortoises. However, no process has been established for integrating agencies’ management decisions regarding the desert tortoise with research results. As a result, Service and land managers cannot be certain that they are focusing their limited resources on the most effective actions. 
In addition, the recovery plan calls for its recommendations to be reassessed every 3 to 5 years, but the plan has not been reassessed since its 1994 issuance. Such a reassessment would allow the Service to evaluate whether the plan’s recommendations are still sound or should be revised in light of more recent research. The recovery plan recommends securing habitat to aid in the recovery and continued existence of the desert tortoise. In addition to managing land they already own, federal and state agencies—which collectively manage over 80 percent of tortoise critical habitat—and private groups have made efforts to acquire privately owned land for desert tortoise habitat through land exchanges, purchases, or donations. Much of the acquired land is surrounded by or adjacent to federally or state-owned tortoise habitat, and its acquisition makes management easier by consolidating acres needing protection. These land acquisitions have occurred primarily in California and Utah, as almost all tortoise critical habitat in Nevada and Arizona is already federally owned. For example, from 1995 through 2001, BLM acquired approximately 337,000 acres in California, valued at almost $38 million, primarily for the benefit of the desert tortoise. Land acquisition has also been an important feature in Utah, where BLM and the State of Utah have acquired, through purchase and exchange, more than 7,700 acres of nonfederal land valued at almost $62 million, for the benefit of the tortoise. In addition to these acquisitions, the Desert Tortoise Preserve Committee, a nonprofit organization, acquired more than 175 acres of privately owned lands within the 39.5-square-mile Desert Tortoise Natural Area in California. The Committee, in cooperation with another conservation organization, also purchased 1,360 acres of privately owned land in desert tortoise critical habitat in the central Mojave Desert. 
The Committee has historically donated or sold land it acquires to the federal government or the state of California. The recovery plan for the desert tortoise also recommends specific land use restrictions such as restricting livestock grazing, harmful military maneuvers, and excessive and destructive recreational uses. The responsibility for implementing many of these actions falls to the entities that manage land in desert tortoise habitat, including the Bureau of Land Management, the National Park Service, the Department of Defense, and state agencies. These agencies have restricted some permitted uses on lands with tortoise habitat and taken protective steps to aid in the species’ recovery. For example, Washington County, Utah, paid $114,000 to purchase permits allowing livestock grazing on 30,725 acres of federal land in tortoise habitat in Utah from ranchers who were willing to sell them. BLM then retired these permits from use. In addition, since 1991, BLM has prohibited sheep grazing on more than 800,000 acres of tortoise habitat in California; as part of a settlement agreement with conservation groups, the agency has also restricted cattle grazing in all or part of several other grazing allotments in California, either entirely or seasonally when tortoises are active. The recovery plan’s recommended restrictions on livestock grazing are controversial because they affect a large number of acres and were recommended on the basis of limited published data. Other significant restrictions that benefit the tortoise include those addressing off-highway vehicles. For example, BLM’s off-highway vehicle management plan limits off-highway vehicle use to existing approved areas, specific courses for competitive events, or designated roads and trails to protect sensitive habitats, species, and cultural resources. However, officials note that enforcing compliance among individual users has proven to be difficult. 
Agencies have also undertaken projects on their lands to control random events such as road kill on highways and human vandalism, and other threats that are associated with human development, such as disease (which may be spread when captive tortoises are released into the wild) and predation by ravens and other animals (which are aggravated by humans through the presence of landfills and other sources of food and water). For example, agencies and others have installed hundreds of miles of fencing to keep tortoises away from roads and other hazardous areas. Joshua Tree National Park installed breaks, or “tortoise cuts,” in the curbs along more than 5 miles of newly constructed park roads in 2001 to avoid trapping desert tortoises in roads. To reduce raven populations and thus discourage predation on juvenile tortoises, Mojave National Preserve has cleaned up approximately 50 acres of illegal garbage dumps, and Joshua Tree National Park has removed a total of almost 550,000 pounds of garbage from 23 sites. The Army’s National Training Center at Fort Irwin also tries to reduce raven populations by covering its landfill with three times as much dirt as it would otherwise in order to reduce its attractiveness to the birds. In 2000 and 2001, Edwards Air Force Base closed 42 “pitfalls” (such as mine shafts, wells, and irrigation pipes) in critical habitat that were potentially hazardous to desert tortoises. Protective actions may also be required to offset, or mitigate, the effects of potentially harmful activities. For example, development may occur on nonfederal lands with desert tortoises, but before the Service will issue a permit allowing tortoises to be taken or habitat to be disturbed, the applicant must develop a plan describing mitigating actions—such as timing a project to minimize the likelihood of disturbing tortoises, acquiring replacement habitat to compensate for the disturbed acreage, or paying fees to be used for tortoise conservation. 
Some local governments have obtained permits that allow tortoises to be taken so that habitat within their jurisdictions can be developed. For example, Clark County, Nevada—which includes Las Vegas—has obtained a 30-year permit from the Fish and Wildlife Service that allows listed species, including tortoises, to be taken incidental to development in the county. The permit allows development of up to 145,000 acres of desert tortoise habitat on nonfederal land and requires that land developers pay $550 to a mitigation fund for every acre developed within the county. The mitigation fees are used to pay for conservation projects in the county to offset the effects of development on desert tortoises and other species. Similarly, Washington County, Utah, has a 20-year permit authorizing the take of 1,169 tortoises incidental to land development in the county. Washington County’s primary means of mitigating the effects of development on desert tortoises was to establish the 61,000-acre Red Cliffs Reserve in which no development is allowed; approximately 39,000 acres are occupied desert tortoise habitat. BLM and the state of Utah manage most of the land within the reserve. Elsewhere in the county, development is allowed on approximately 12,000 acres of nonfederal land. Developers pay $250 plus 0.2 percent of the development costs for each acre they develop; the fees are used to manage the reserve. Agencies and others also rely on education to reduce threats to tortoises. 
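The two counties' fee structures described above amount to simple arithmetic, sketched below. The function names are ours, and the Washington County formula assumes the 0.2 percent applies to the total development costs of the project, which the text leaves somewhat ambiguous.

```python
# Sketch of the mitigation fee structures described in the text.
# Function names are illustrative; the Washington County formula
# assumes "0.2 percent of the development costs" applies to the
# project's total development costs.

def clark_county_fee(acres_developed: float) -> float:
    """Clark County, NV: $550 per acre of tortoise habitat developed."""
    return 550 * acres_developed

def washington_county_fee(acres_developed: float, development_costs: float) -> float:
    """Washington County, UT: $250 per acre plus 0.2% of development costs."""
    return 250 * acres_developed + 0.002 * development_costs

# A hypothetical 100-acre development costing $2 million:
print(clark_county_fee(100))  # 55000
print(washington_county_fee(100, 2_000_000))
```

For the hypothetical project above, the Clark County fee would be $55,000 and the Washington County fee about $29,000, illustrating how the two structures weight acreage versus project cost differently.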
For example: Department of Defense installations in tortoise habitat require all soldiers to attend training that raises their awareness about the status of the tortoise and teaches them what to do if they encounter a tortoise; BLM’s Statewide Desert Tortoise Management Policy includes a detailed Joshua Tree National Park and the Mojave National Preserve have developed educational kits for use in schools; and Clark County, Nevada, uses radio and newspaper announcements to target desert users, reminding them to deposit garbage only at garbage dumps in order to control raven populations, shoot responsibly, and drive on roads. Appendix I discusses specific actions agencies have taken in more detail. The recovery plan recommends that research be conducted to guide and monitor desert tortoise recovery efforts and states that as new information continues to become available, these new data should influence management practices. The recovery plan recommends research on threats to tortoises including diseases and other sources of mortality, the long-term effects of road density and activities like livestock grazing on desert tortoise populations, and the effectiveness of protective measures in reducing human-caused desert tortoise mortality; it also recommends that a comprehensive model of the life history of the desert tortoise be developed, as such information is helpful in understanding how various factors influence a species’ survival. The scientists we consulted emphasized the importance of research for assessing the effectiveness of recovery actions, not only for determining whether delisting is appropriate, but also for allocating scarce resources to those actions with the most positive effects on desert tortoise populations. 
Research is underway in several of the recommended areas, including diseases and how they are transmitted, desert tortoise habitat and health, nutrition, predation, the effects of climate variability on tortoises, and survival of juvenile desert tortoises. Scientists from many different organizations, including the U.S. Geological Survey, the Service, the National Park Service, military installations, military laboratories, states, universities, and private consulting groups, perform this research. According to information compiled by researchers at the Redlands Institute at the University of Redlands, research presented since 1989 at the Desert Tortoise Council’s annual symposia—where scientists, land managers, and others gather to share information on desert tortoise issues—has covered more than 20 areas, with disease, livestock grazing, roads, and off-highway vehicle use emerging as the four most commonly presented topics (see fig. 4). Despite the relatively extensive desert tortoise research efforts, there is no overall coordination of the research to ensure that questions about the effectiveness of protective actions are answered. Such a coordinated program would direct research to address management needs and ensure that managers are aware of current research as they make decisions. More importantly, such a program would allow managers to adapt land management decisions on the basis of science. Unless research is focused on determining if restrictions and other protective actions are effective, managers cannot demonstrate a scientific basis for deciding whether restrictions should remain unchanged, be strengthened, or if other actions would be more appropriate. For example, since the Bureau of Land Management eliminated sheep grazing on more than 800,000 acres in California, neither the Bureau nor the Service has assessed whether this action has benefited desert tortoises or their habitat. 
Despite ongoing research into how livestock grazing affects the soils and plants upon which desert tortoises depend, few data are available to show the extent of its impacts and the effectiveness of restrictions in reducing adverse effects. One scientist discussed recent research that could influence future priorities for protective actions. Specifically, this research suggests that tortoise fencing may be more effective along roads with intermittent traffic than along highways, as the heavier highway traffic may itself deter tortoises from attempting to cross. However, we recognize that in some cases obtaining definitive data regarding management actions may take many years for long-lived species like the desert tortoise. While no overall process exists for integrating research and management decisions, several efforts are underway to aggregate scientific information about tortoises and the desert ecosystem and identify information gaps. The Desert Tortoise Management Oversight Group was established in 1988 to coordinate agency planning and management activities affecting the desert tortoise and to implement the management actions called for in BLM's Desert Tortoise Rangewide Plan. The group consists of BLM's state office directors from Arizona, California, Nevada, and Utah and a Washington office representative; the four states' fish and game directors; regional directors of the three Fish and Wildlife Service offices with desert tortoise management responsibilities; and representatives of the National Park Service, the U.S. Geological Survey, and the military installations with desert tortoise habitat. The Management Oversight Group is intended to provide leadership in implementation of the recovery plan, consider funding and research priorities, help ensure data analysis procedures are standardized, and review plans related to the desert tortoise. In 1990, a Technical Advisory Committee was formed to provide technical assistance to the group.
The Desert Tortoise Research Project, a group of U.S. Geological Survey biologists conducting research on the desert tortoise, works with the Technical Advisory Committee to help establish research priorities. The Mojave Desert Ecosystem Program, a cooperative effort among several agencies that is led by the Department of Defense, has aggregated large amounts of data on elevation, geology, climate, and vegetation in the Mojave Desert ecosystem and has made them available as a shared scientific database through the Internet. This shared database is intended to allow land managers to make data-driven land management decisions. The California Desert Managers Group, composed of managers from agencies of the Departments of Defense and Interior and the State of California, is chartered to develop and integrate the databases and scientific studies needed for effective resource management and planning for the California desert. Currently, the group is compiling a list of the major ongoing scientific activities in the Mojave Desert to identify significant research gaps, opportunities to collaborate, and opportunities to solicit support for scientific research needed to fill those gaps. The Redlands Institute at the University of Redlands has begun a project, funded by the Department of Defense, to compile, organize, and store desert tortoise monitoring information and develop a database of desert tortoise-specific research, which the Institute will make available to land managers. In addition, during our review, the Service official with lead responsibility for the desert tortoise program made a proposal to the Service’s regional office to establish a science office and a permanent science advisory committee that would work with managers to ensure that future desert tortoise research is responsive to the managers’ needs for information.
The proposed science office would coordinate research and would work with the Mojave Desert Ecosystem Program, the University of Redlands, and others to establish and centralize data and procedures. The proposed science committee, which would be composed of unbiased, recognized experts in disciplines relevant to tortoise recovery, would work with the science office and land managers to set priorities for desert tortoise recovery actions and review agencies’ documents for their scientific soundness. The official anticipates that the proposed committee would provide a scientific context to support decisions that are, in some cases, difficult and controversial. The recovery plan recognizes that few of the data available at the time the plan was developed were useful for recovery planning; accordingly, it recommends that the plan be reassessed every 3 to 5 years in light of newer findings. Service guidance also recommends that recovery plans be reviewed periodically to determine if updates or revisions are needed. Recovery team members and the scientists we consulted agreed that the Service should assess new research and determine if the recovery plan needs to be revised or updated to accommodate new or different findings. However, although the plan was issued 8 years ago, the Service has not yet reassessed it for several reasons. First, because the Service has limited resources for meeting its continuing obligations to designate critical habitat and develop recovery plans for other listed species, resources are not readily available for recovery plan revisions. In addition, some Service officials believe that new research has not indicated that significant changes are needed in the tortoise recovery plan. Finally, some Service officials believe that as new information is developed, it can be and sometimes is incorporated into ongoing land management decisions. 
Given the controversy surrounding some of the recovery plan’s recommendations and the resulting management actions, periodic reassessment of the plan in view of ongoing research could provide evidence for either retaining or revising the existing recommendations. For example, according to a recent review of scientific literature on threats to desert tortoise populations, research has shown that heavy, uncontrolled off-road vehicle use severely damages vegetation that desert tortoises rely on for food and reduces population densities, a finding that supports restrictions on such use. In contrast, the effects of livestock grazing on desert tortoises—effects that the recovery team identified as a significant threat—are still hotly debated, and research has not yet established that livestock grazing has caused declines in desert tortoise populations. Reassessing the plan based on new research could also indicate whether the critical habitat boundaries—which were based on a draft of the recovery plan—should be revised. Data on trends in tortoise populations that would indicate whether the species is recovering and can be delisted are not available because population monitoring efforts have only recently begun and will need to continue for at least 25 years (one generation of desert tortoises). Although data on desert tortoise populations have been collected from study plots in specific areas, these data cannot be extrapolated across the desert tortoise’s range. Obtaining the necessary trend information has proved difficult because monitoring is costly and resource intensive, and continued funding for population monitoring efforts is uncertain. According to the desert tortoise recovery plan, identifying trends in desert tortoise populations is the only defensible way to evaluate whether populations are recovering.
Under the plan, before the desert tortoise can be delisted, tortoise populations must become stable or increase, as shown by at least 25 years of population monitoring. In order to monitor population trends, it is necessary to have baseline population data. While land managers have been concerned about the desert tortoise for over 2 decades, such baseline data are not available rangewide because most population monitoring has been done in specific areas for other purposes and cannot be extrapolated to the entire population. For example, information on the health and status of desert tortoise populations in certain areas—primarily in California—has been collected from permanent study plots, some since the 1970s. These study plots were established to provide data on attributes of tortoise populations and their relationships to the condition of the habitat and land-use patterns. However, the locations of these plots were judgmentally selected, so scientists cannot project the status of tortoises in these plots to the entire desert tortoise population. Development of a baseline population estimate has been delayed in part by difficulty in determining an acceptable methodology. The recovery plan recommended a technique for estimating desert tortoise populations, but that technique was discarded after federal land managers agreed in 1998 to a different, more suitable population monitoring technique that they believed would provide more reliable data on the population rangewide. However, efforts to implement the agreed-upon rangewide monitoring technique were hampered by a lack of funding and the absence of a designated coordinator. In 2001, the Fish and Wildlife Service began coordinating the collection of population data throughout the desert tortoise’s range using the agreed-upon technique. Establishing a complete baseline population estimate is expected to take 5 years.
Service officials estimate that after the baseline is established, additional monitoring will need to occur every 3 to 5 years to determine how populations are changing over time. According to land managers and tortoise experts, counting tortoises is difficult because populations are widespread and spend much of their time underground. In addition, there are differences in people’s abilities to locate individual desert tortoises, especially juveniles, which can be as small as a silver dollar. A major concern for the tortoise recovery effort is continued funding for rangewide population monitoring. A Service official estimates that population monitoring will cost more than $1.5 million each year it is conducted. The Service depends on agreements with several entities to fund monitoring. For example, in 2002, funding for monitoring was provided by the Department of Defense, National Park Service, the Fish and Wildlife Service, the University of Redlands, Clark County, Nevada, and Washington County, Utah. However, the agencies that have provided funding for monitoring in the past have other priorities and legal mandates to which they must respond; thus, they cannot guarantee that they will provide funding for the population sampling from year to year. For example, a Bureau of Land Management official in California made an informal commitment to provide $200,000 for monitoring in fiscal year 2002, anticipating that the Bureau would continue to receive funding for management in the California Desert as it had in previous years. However, the funding did not materialize, and the Bureau determined that because of budget constraints it would be unable to fund the effort. Service staff are frustrated by this situation, because they cannot know in advance whether the funding required for sampling will be available, and thus cannot effectively plan a population monitoring effort that must span at least 25 years.
Since the desert tortoise was first listed in 1980, more than $100 million has been spent on its conservation and recovery, but the total economic impact of the recovery effort is unknown. (Throughout this section, monetary amounts are expressed in constant 2001 dollars.) From fiscal years 1989 through 1998, agencies reported spending a total of about $92 million on behalf of the desert tortoise, including about $37 million for land acquisition. Comprehensive expenditure data do not exist for fiscal years 1980 through 1988, because the reporting requirement had not yet been enacted, or for 1999 through 2001, because of delays in issuing the report. However, staff time estimates by five key agencies for these periods account for an additional $10.6 million in expenditures on tortoise-related activities. Aside from such expenditures, the overall economic impact—benefits as well as indirect costs incurred by local governments, landowners, and developers as a result of restrictions—associated with the tortoise recovery effort is unknown, although some limited analyses have been done. A 1988 amendment to the Endangered Species Act requires that the Service submit to the Congress an annual report on or before January 15 that accounts for, on a species-by-species basis, all reasonably identifiable federal and state expenditures during the preceding fiscal year that were made primarily for the conservation of threatened and endangered species. These expenditures cover a myriad of activities related to the conservation and recovery of threatened and endangered species, such as funding and conducting research, maintaining species’ habitats, surveying species’ populations, developing plans, and implementing conservation measures. Expenditures for land acquisition are also reported, although they were not reported as a separate category until fiscal year 1993.
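The constant-dollar restatement used throughout this section can be illustrated with a short sketch. This is not the methodology used for the report's figures; the price-index values below are hypothetical placeholders, not actual index data.

```python
# Illustrative sketch only: restating nominal fiscal-year expenditures in
# constant 2001 dollars with a price index. The index values here are
# hypothetical placeholders, not real CPI or GDP-deflator data.

BASE_INDEX_2001 = 100.0  # index level in the 2001 base year

# Hypothetical index levels for a few fiscal years (placeholders).
price_index = {1989: 74.0, 1997: 93.0, 1998: 94.5}

def to_constant_2001_dollars(nominal: float, fiscal_year: int) -> float:
    """Scale a nominal amount by the ratio of the base-year index
    to the index level in the year the money was spent."""
    return nominal * BASE_INDEX_2001 / price_index[fiscal_year]

# A dollar spent in an earlier fiscal year counts for more than a
# dollar spent in 2001, so the restated amount is larger:
restated = to_constant_2001_dollars(719_000, 1989)
```

Restating all amounts in a single base year in this way is what makes expenditures from different fiscal years directly comparable.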
The purpose of the reporting requirement, according to the Service, was to obtain information with which to assess claims that a disproportionate effort was being made to conserve a few, highly visible species at the expense of numerous, less well-known species. Through discussions with congressional staff and language contained in the conference report for the 1988 amendment, the Service determined that it and other federal and state agencies were expected to cooperate and to make a “good faith effort” to collect and report expenditure data that are “reasonably identifiable” to species. The reporting provision, however, was not to become unduly burdensome. That is, agencies were not expected to undertake extensive or extraordinary measures, such as creating species-specific cost accounting systems, to develop exceptionally precise data; nor were agencies expected to pro-rate staff salaries and other normal operational and maintenance costs not directed toward a particular species. According to the Service, a significant portion of conservation activities benefiting threatened and endangered species, such as law enforcement, consultation, and recovery coordination, are not easily attributed to individual species and are therefore not included in the annual report. Based on its understanding of the reporting purpose, the Service issues guidance to federal and state agencies each year on the types of expenditures to report, which include research, habitat management, recovery plan development or implementation, mitigation, status surveys, and habitat acquisition, as well as the salary costs of employees who work full-time on a single species or whose time devoted to a particular species can be readily identified.
The guidance states that salary costs of staff who are not assigned to work on particular species, expenditures on unlisted species or state-listed species (unless they are also federally listed), and expenditures on formal consultations dealing with multiple species should not be reported. The Service also does not include agencies’ unrealized revenues from unsold water, timber, power, or other resources resulting from actions taken to conserve threatened or endangered species. Reported federal and state expenditures on behalf of the desert tortoise totaled about $92 million, including about $37 million for land acquisition, from fiscal years 1989 through 1998—the latest year for which comprehensive data were available. Of all the agencies reporting desert tortoise expenditures, the Bureau of Land Management spent the most by far—about 5 times more than the Service spent (see table 2). Over the 10-year fiscal period from 1989 through 1998, federal and state expenditures on the desert tortoise increased more than 40-fold, from about $719,000 in fiscal year 1989 to nearly $31.7 million in fiscal year 1998 (see fig. 5). The sharp increases in tortoise expenditures in fiscal years 1997 and 1998 are associated with significant expenditures for land acquisition. In fiscal year 1997, nearly $8 million—or 56 percent of the $14 million in expenditures on the tortoise that year—was for land acquisition. Similarly, in fiscal year 1998, about $26.5 million—or 84 percent of the $31.7 million spent on the tortoise—was for land. All of the land acquisition expenditures for the tortoise in 1998 were made by the Bureau of Land Management, as was all but about $800,000 of the 1997 land acquisition expenditures (the Service made the remainder).
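The growth factors and shares reported in this section can be checked with simple arithmetic on the rounded figures from the text (dollar amounts in millions):

```python
# Quick arithmetic check of the growth and share figures reported in
# the text, using the rounded amounts as given (in millions of dollars).

fy1989_spending = 0.719    # about $719,000 in fiscal year 1989
fy1998_spending = 31.7     # nearly $31.7 million in fiscal year 1998

# Roughly 44x, consistent with "increased more than 40-fold."
growth_factor = fy1998_spending / fy1989_spending

land_1998 = 26.5           # FY1998 land acquisition expenditures
# 26.5 / 31.7 is roughly 84 percent, matching the reported share.
land_1998_share = land_1998 / fy1998_spending * 100

tortoise_total = 92        # reported tortoise spending, FY1989-1998
all_species_total = 3_300  # reported spending on all listed species ($3.3 billion)
# 92 / 3,300 is roughly 2.8 percent of total spending on listed species.
tortoise_share = tortoise_total / all_species_total * 100
```

Because the inputs are rounded, the computed values agree with the reported percentages only to the precision given in the text.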
The $92 million that federal and state agencies reported spending on the desert tortoise accounted for about 2.8 percent of the total $3.3 billion they reported spending on all threatened and endangered species from fiscal years 1989 through 1998. During this period, 13 species, including the desert tortoise, each had total expenditures of more than $50 million; these species accounted for about 43 percent of total expenditures during this period (see fig. 6). Comprehensive data on expenditures on endangered species have not been available since fiscal year 1998 because the Service has not been issuing its reports annually, as required. The latest report was published on August 30, 1999, and was for expenditures in fiscal year 1997. Service officials also provided us a draft of the report on the fiscal year 1998 expenditures, which we included in our analysis. Also, although comprehensive expenditure data were not available since fiscal year 1999, the Service shared with us the data it had received as of August 19, 2002. By that date, all but a few agencies had reported their 1999 expenditures. Five agencies and the states, however, had not reported their 2000 expenditures, and only two agencies had reported their 2001 expenditures. For these 3 fiscal years, federal and state agencies had reported a total of about $12.4 million in additional desert tortoise expenditures. (This amount is not included in the $92 million in reported tortoise expenditures.) The Service official responsible for the report acknowledged that the agency has not been complying with the annual reporting requirement for several reasons. First, the Service has not always been timely in requesting the needed information from federal and state agencies. Second, several agencies have not submitted their information on time, and the Service has chosen to wait to issue the report until all agencies have done so.
In some cases, agencies have been more than a year late in providing information to the Service. And third, competing priorities within the Service have delayed the report’s preparation. For example, the staff responsible for preparing the expenditures report had concurrent responsibilities such as outreach, interagency coordination, Endangered Species Act listings, and critical habitat determinations. For future reports, the Service plans to develop a web-based reporting system and use an intern to compile the data in order to issue its reports in a more timely manner. Without timely issuance of the annual reports, decision makers and the public have an incomplete picture of the expenditures made on threatened and endangered species, both individually and in total. These reports constitute the only readily available, consolidated source of federal and state expenditures on a species-by-species basis. Accordingly, they can serve as a valuable tool—for the Congress, agency officials, and other interested parties—for assessing trends in spending over time, whether for all species or for any one species of interest. For example, the reports allow the Congress to assess whether a few species are receiving a disproportionate amount of funding at the expense of numerous other species. Additionally, the reports allow one to discern spending patterns that could, in turn, indicate regions or ecosystems that may be receiving more or less attention. Because the Service’s annual report does not account for many years during which tortoise work was being done, we requested staff-time estimates from five key agencies involved in desert tortoise activities—the Bureau of Land Management, the Department of Defense, the National Park Service, the Fish and Wildlife Service, and the U.S. Geological Survey. These agencies estimated that they spent the equivalent of 471 staff years, worth about $29.6 million (in 2001 dollars), on tortoise activities from fiscal years 1980 through 2001.
Agencies developed their staff-time estimates based on staff memory, judgment, and anecdotal evidence, supplemented by personnel records reviews. These estimates cannot be combined with the annual expenditures that are reported because some agencies include staff time in their reports and others do not. We can, however, add to the reported expenditures the value of the five agencies’ staff-time estimates for the 9-year period for which annual expenditure data have not been compiled (fiscal years prior to 1989 and after 1998). The five agencies’ total staff-time estimate for these pre- and post-reporting periods is valued at about $10.6 million (in addition to the $92 million in expenditures reported by federal and state agencies). Of the five agencies estimating staff time devoted to tortoise-related activities over the 22-year fiscal period from 1980 through 2001, the Bureau of Land Management reported the greatest staff-time investment—about $16.2 million, more than the four other agencies combined. The Service was a distant second, with a staff-time investment of about $5.5 million—about a third of the Bureau’s. Overall, the agencies’ staff-time investment steadily increased from 1980 through 1989, and then rose sharply following the tortoise’s rangewide listing as a threatened species (see fig. 7). Aside from the reported expenditures and staff-time cost estimates, the overall economic impact associated with the tortoise recovery effort is unknown, although some limited analyses have been done. For example, while it is known that restrictions on residential and commercial development in tortoise habitat have resulted in foregone opportunities, the extent and economic value of such lost opportunities have not been quantified.
City and county governments, individual landowners, developers, and recreationists have incurred costs to comply with the requirements to protect tortoises, but no consolidated source of information exists to determine the full extent of such costs, and some are difficult to quantify. These requirements include training employees to correctly handle tortoises they encounter, facing project or event delays or restrictions associated with tortoise conservation, and preparing mitigation plans. Although various publications have estimated some costs and discussed benefits, none provides a comprehensive analysis of the economic impact of restrictions on land use to protect the desert tortoise. The most comprehensive analysis we reviewed was prepared by the Service in conjunction with its 1994 designation of critical habitat for the desert tortoise. This analysis evaluated the impact of potential restrictions on federal land use in the seven counties that would be affected by the designation of critical habitat for the tortoise. The analysis concluded that the restrictions stemming from the designation could significantly affect small rural communities, but they would have little effect on the regional or national economy. According to the economic analysis, the critical habitat designation would primarily affect three activities: ranching, mineral extraction, and recreation. For example, the analysis estimated a loss of no more than 425 jobs in the seven affected counties, with 340 of those in the ranching industry. Ranching profits were expected to be the hardest hit, with a reduction of about $4.5 million. About 51 permits—covering about 1.7 percent of all grazing units allowed on federal land in Arizona, California, Nevada, and Utah—would be affected. It is important to note that the Service’s analysis considered only the effects of restrictions on federal land.
The analysis recognized that many restrictions had already been put in place on federal and nonfederal land as a result of the tortoise’s listing. For example, it cited restrictions on grazing and off-road vehicle use in California and Nevada and indicated that the critical habitat designation could result in additional restrictions in those areas. For Utah, however, the report stated that little or no additional restrictions would likely be associated with the designation, as critical habitat had previously been designated for the small portion of the population of the tortoise in the state. An analysis conducted by the Department of Agriculture’s Economic Research Service substantiated some of the results of the Service’s economic analysis for the critical habitat designation. This analysis estimated the direct and total economic effects of different levels of reductions in grazing rights in counties with known populations of desert tortoises and in counties with designated habitat areas. The estimated effects of grazing restrictions on federal land ranged from $3 million to $9 million. This analysis also concluded that grazing restrictions may have a significant impact on individual ranchers, but their impact on regional economies was not as significant. Under every scenario, the relative cost of total impacts from restrictions was less than 0.08 percent of the gross domestic product of the economic region. Lost livestock sales were the single largest cost associated with grazing restrictions; however, grazing restrictions were not likely to affect national livestock production or prices. Other kinds of restrictions can similarly have an economic cost. For example, restrictions on development, mining, and off-road vehicle use can result in foregone revenue and recreation opportunities. Such costs, however, have not been quantified. 
An analysis prepared by Washington County, Utah, in 1995 examined the costs and benefits associated with protective actions for the desert tortoise. Specifically, the county analyzed the costs and benefits of obtaining a permit from the Service that would allow the county to approve development projects in desert tortoise habitat. Under this permit, the county would establish a 61,000-acre reserve for desert tortoises to mitigate potential harm to tortoises from the projects. The analysis concluded that the benefits of establishing the reserve would exceed the benefits associated with having individual developers obtain permits and carry out their own mitigation actions. Property tax revenues were estimated at about $48 million more with the county obtaining the permit because if individual developers had to obtain their own permits, they would not likely develop as much land. Creating the reserve was expected to have little effect on mining and no effect on farmland. The analysis did not quantify the reserve’s economic impact on livestock grazing, although it noted that the county would extend purchase offers to holders of grazing permits on reserve land. Finally, the analysis concluded that the reserve would result in many benefits. These benefits include the aesthetic value of the open space within the reserve, the increased value of private property adjacent to the reserve (and the associated increase in property taxes), and annual expenditures of about $17.5 million a year by local and regional visitors to the reserve and its associated education center. Clark County, Nevada, also analyzed, in 2000, costs and benefits for a permit that would allow development similar to that in Washington County. However, Clark County’s permit addresses potential impacts to 79 species including the desert tortoise, and the economic impact associated with the tortoise cannot be identified separately.
In addition to the county’s analysis, agencies that manage land in Clark County have prepared their own economic analyses, as part of environmental impact statements for their individual management plans. For example, BLM identified negative fiscal impacts from restrictions on cattle grazing in desert tortoise habitats in Clark County. As a result, the county has obtained grazing and water rights from willing sellers rather than restricting grazing outright. In contrast, the Forest Service found positive socioeconomic impacts from tortoise protections included in its management plan for the Spring Mountains National Recreation Area. These positive impacts were associated with increased recreation that could provide business opportunities for the surrounding communities. As the Washington County and Spring Mountains analyses indicate, tortoise recovery efforts can lead to measurable economic benefits. Other economic benefits clearly derive from efforts to protect the desert tortoise, but generally have not been estimated. These benefits are intangible and include such things as aesthetic values associated with protected areas, the knowledge that the tortoise continues to exist and may be available for future generations, and the corollary benefits that other species enjoy as a result of protections extended to the tortoise. Also, according to agency officials, the tortoise recovery effort has resulted in improved communication and coordination among federal, state, and local government officials, as well as private groups such as environmental advocates and off-highway vehicle clubs. Agency officials believe that education and communication efforts ultimately achieve greater protections for not only the tortoise but for the desert ecosystem as a whole. Many scientists consider the desert tortoise to be an indicator of the health of the desert ecosystem, and to date, over $100 million has been spent on efforts to protect and recover the species.
Despite the significant expenditures made and actions taken to conserve the tortoise, land managers and the Service lack critical management tools and measures needed to assess the status of the species and to determine the effectiveness of the protections and restrictions that have been put in place. Specifically, the lack of a strategy for integrating research with management decisions prevents the Service and land managers from ensuring that research is conducted to evaluate the effectiveness of protective actions taken and to identify additional actions that could assist in the recovery effort. While several efforts are underway to consolidate scientific information about the tortoise and its habitat, and a recent proposal has been made for integrating science with management, it is unclear how and to what extent these efforts will be used to direct research and management actions, and the efforts may be duplicative if not properly coordinated. In addition, the original recovery plan for the tortoise has not been reviewed to determine whether recommended actions are still valid or whether recent scientific information would suggest more effective recovery actions. Such a review is important given the continued uncertainties surrounding some of the plan's original recommendations. Also, a lack of funding assurances may hamper efforts to collect rangewide population monitoring information needed to assess the current status of the desert tortoise and to track the future growth or decline in the species. Finally, late and incomplete expenditure reporting precludes the Congress and the public from knowing the type and extent of expenditures involved in the desert tortoise recovery effort. Unless these shortcomings are addressed, questions will persist about whether the current protection and recovery efforts and actions are working and are necessary, and even whether the species continues to be threatened with extinction.
To ensure that the most effective steps are taken to protect the tortoise, we recommend that the Secretary of the Interior direct the Director of the Fish and Wildlife Service to take the following steps:

Develop and implement a coordinated research strategy that would link land management decisions with research results. To develop such a strategy, the Director should evaluate current efforts to consolidate scientific information and existing proposals for integrating scientific information into land management decisions.

Periodically reassess the desert tortoise recovery plan to determine whether scientific information developed since its publication could alter implementation actions or allay some of the uncertainties about its recommendations.

To ensure that needed long-term monitoring of the desert tortoise is sustained, we recommend that the Secretary of the Interior work with the Secretary of Defense and other agencies and organizations involved in tortoise recovery, to identify and assess options for securing continued funding for rangewide population monitoring, such as developing memorandums of understanding between organizations. To provide for more timely reporting of expenditures for endangered species, we recommend that the Secretary of the Interior direct the Director of the Fish and Wildlife Service to issue the annual expenditure reports as required by the law, and to advise the Congress if reports are incomplete because not all agencies have provided the information requested. We provided copies of our draft report to the Departments of the Interior and Defense. The Department of the Interior concurred with our findings and recommendations. The department also provided technical clarifications from the Fish and Wildlife Service, Bureau of Land Management, National Park Service, and U.S. Geological Survey, which we incorporated as appropriate. 
The Fish and Wildlife Service also provided details on actions planned or underway to implement our recommendations. The Department of the Interior's comment letter is in appendix III. The Department of Defense provided oral comments consisting of technical clarifications, which we also incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior, the Secretary of Defense, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please call me at (202) 512-3841. Key contributors to this report are listed in appendix V. Federal agencies and others have taken a variety of actions to benefit desert tortoises, reflecting recommendations in the recovery plan or efforts to minimize the effects of potentially harmful activities. These actions include acquiring habitat, restricting certain uses, promoting education programs about the species, and funding or conducting research on such topics as the causes of disease in tortoises, their nutritional needs, and the effects of human activities on tortoises. The Management Oversight Group’s Technical Advisory Committee surveyed agencies about the actions they have taken to date; what follows is a list of some of the actions reported in that survey and to us during our review. In June 2002, the Bureau of Land Management (BLM) acquired 240 acres of private property in Arizona, along with an associated 34,722-acre livestock grazing allotment. While grazing has not been permanently eliminated from this allotment, there is no current livestock use. About 10 percent of the allotment lies within a desert wildlife management area. 
BLM has closed and posted some existing roads, signed others, and built some tortoise fencing. Competitive events are banned in areas of critical environmental concern. BLM amended the existing land use plan in 1997 chiefly to implement the Desert Tortoise Recovery Plan. Following the amendment of the land use plan, the BLM issued approximately 18 grazing decisions to modify livestock grazing seasons in order to protect the desert tortoise. In 2002, BLM removed 61 burros from desert tortoise habitat and plans to remove 10 more in 2003. Establishment of new roads is tightly restricted. No off-road vehicle use is allowed, and law enforcement staffing has been increased to enforce the restrictions. BLM has funded tortoise-monitoring studies for several years, typically by contracting through Arizona’s Game and Fish Department. In addition, a study plot was established in 1980 to research the effects of excluding cattle grazing. Other studies have been conducted over the years, and the U.S. Geological Survey continues to study such issues as fire and its relationship to invasive plants. All routes are closed in the Desert Tortoise Natural Area, except to owners of private land within the area’s boundary. Almost 200 closed routes throughout several management areas have been rehabilitated. Competitive vehicle events in tortoise habitat are allowed only within existing off-highway vehicle open (play) areas or on specifically identified courses. In 1991, sheep grazing was removed from more than 800,000 acres of desert tortoise critical habitat in California, pursuant to a jeopardy biological opinion from the Service. All or portions of several cattle-grazing allotments totaling almost 350,000 acres have been restricted or eliminated. 
Temporary restrictions are in place until bioregional plans are completed; specifically, sheep allotments covering more than 135,000 acres in non-critical habitat cannot be grazed and cattle-grazing is not authorized on all or part of allotments covering almost 250,000 acres, and is seasonally restricted in portions of 11 allotments covering almost 500,000 acres. From 1981 through 2002, more than 7,600 burros were removed from several areas, some of which were within desert tortoise habitat. Since the mid-1990s, BLM has cleaned up several illegal dumps within desert tortoise management areas, and community dumps are being closed in favor of regional landfills. An 18-mile fence was constructed along one boundary of a management area to restrict vehicle access from private lands into tortoise critical habitat. BLM’s information and visitor centers provide information on tortoise conservation. Since 1989, between 55 and 60 ravens have been removed, most from a proposed desert wildlife management area, as part of a pilot raven control program. In 2001, all BLM lands in selected critical habitat units were closed, on an interim basis, to all shooting except hunting and paper-target practice. In 1998, the Las Vegas Field Office’s Resource Management Plan established four Areas of Critical Environmental Concern to protect critical desert tortoise habitat encompassing a total of 743,209 acres. Approximately 54 miles of road have been restored. All competitive events involving mechanized and motorized vehicles are limited to designated roads and trails within areas of critical environmental concern. Rights-of-way and utility corridors are restricted, and new landfills are prohibited. Two dump sites were cleaned up in one area of critical environmental concern, and through off-site mitigation fees collected from sand and gravel community pit sales, BLM has provided $12,000 and 40 people-hours to clean up another large dump site. 
BLM, in cooperation with Clark County, has developed a brochure depicting the locations of approved routes of travel and provides information on use restrictions. BLM has issued a number of trespass violations and required reimbursement for damaged vegetation for off-road activities. Money collected from these violations is used toward restoring these areas. Several areas have been restored: 8 trespass sites, 117 road sites, 15 gravel/corral sites, and 4 dumpsites. Through off-site mitigation fees collected from sand and gravel community pit sales, BLM has provided more than $1 million in funding for nutritional research on desert tortoises since 1995. BLM has acquired almost 6,600 acres within the Red Cliffs Reserve. BLM has rehabilitated approximately 3.5 miles of closed road and has closed more than 25 trails and many other roads to non-motorized travelers. Red Cliffs Reserve is closed to fuel wood and mineral material sale and withdrawn from mineral entry; BLM prohibits surface disturbance during oil and gas exploration and limits access for rights-of-way. Compensation is required where permanent loss of desert tortoise habitat has occurred. Vegetation may not be harvested in the Reserve except by permit for scientific purposes. The BLM retired grazing on 30,725 acres of land within the Red Cliffs Reserve that had previously been under grazing permits. One illegal dump was cleaned up with 28 tons of material removed. Uncontrolled dogs are prohibited in the Reserve. Approximately 10 acres of disturbed habitat in the Reserve have been reseeded or rehabilitated. BLM offers public lectures and brochures about the Red Cliffs Reserve and management of desert tortoises in Washington County. “Tortoise breaks” in curbs allow passage of desert tortoises and other wildlife from one side of the road to another. These are also used in parking lots to keep tortoises from being trapped. More than 400 miles of jeep trails, historic roads, and recent roads are closed. 
Portions have been rehabilitated and re-vegetated. A Navy overflight exercise route that passed through portions of the park was rerouted because it was thought to potentially harass or affect the natural behavior of the desert tortoise and other sensitive species. The park is working to prevent a proposed landfill from being placed outside the park near one of its densest desert tortoise populations. Livestock use is limited to horses and mules and restricted to designated equestrian trails and corridors. The park has cleaned up 23 dumpsites, removing a total of 547,704 pounds of garbage. Tortoises removed from the park are given to the tortoise rescue center or tortoise adoption agency where they receive a physical inspection and U.S. Fish and Wildlife Service permit tags. Tortoises showing symptoms of upper respiratory tract disease are given to a researcher for a health inspection. Temporary tortoise fencing has been installed at construction staging areas for an ongoing road construction project. Areas with high tortoise densities are fenced off and monitored by park biologists on-site during construction. Approximately 45 acres of disturbance associated with federal highway construction have been rehabilitated. Open mine shafts have been fenced and plugged to prevent tortoises from falling in. The park has developed educational kits and a curriculum unit for schools. Park biological technicians train volunteers, construction workers, and park staff about desert tortoises. The park has established five study plots; each is visited at least 10 times per season. More than 400 tortoises have been marked and their age, sex, weight, and location have been recorded. Desert tortoise sightings reported by park staff and visitors are collected through wildlife observation cards; the information is analyzed, recorded, and incorporated into a database. Research has been initiated on raven populations and upper respiratory tract disease. 
Ravens are monitored and nests are removed in areas where they have been seen predating on tortoises. Mojave National Preserve actively manages all preserve lands (1.6 million acres) for desert tortoises. Approximately 772,000 acres are federally designated critical habitat for desert tortoise. Nearly 100,000 acres—most of which is desert tortoise critical habitat—have been acquired within the preserve from private owners or from the state of California since 1994. Permits for more than 768,000 acres once designated for grazing have been retired. Permits for approximately 311,000 additional acres are pending retirement. Once that retirement is complete, grazing—and more than 4,000 cows—will have been removed from about 564,000 acres of desert tortoise critical habitat. More than 3,000 burros have been non-lethally removed since 1997. The preserve has posted signs and information kiosks to increase travelers’ awareness of potential tortoise and other wildlife encounters. Vehicles are permitted only on existing roads, and in camping and parking areas. No off-road driving is allowed anywhere in the preserve. Competitive motorized events are prohibited. Other organized events may be allowed on existing roads, outside of the desert tortoise active periods, with appropriate restrictions. No existing or new landfills are allowed anywhere in the preserve, which is also closing and cleaning up old, informal trash dumps. Approximately 50 acres of illegal dumps have been cleaned up in the preserve. Any surface disturbance on preserve lands must be balanced with appropriate restoration or acquisition of replacement lands for mitigation. Permits for vegetation harvest are authorized only for scientific collection; the National Park Service requires special stipulations to ensure desert tortoises are protected. To prevent the spread of disease from captive tortoises, the preserve prohibits the reintroduction of tortoises. 
Interpretive staff have developed school programs and created a poster and a brochure about the desert tortoise and responsible recreational behavior in tortoise habitat. The staff has placed warning stickers in preserve vehicles reminding drivers to check under their cars before driving. In 2001, population density monitoring began in the preserve. The preserve manages trash and litter to avoid subsidizing ravens. Raven-proof trash containers are being installed throughout the preserve. Cattle grazing has been removed from desert tortoise habitat in Lake Mead National Recreation Area. Hard-rock mining in 30,000 acres of desert tortoise habitat is prohibited at Lake Mead National Recreation Area. Lake Mead National Recreation Area requires that vehicles stay on designated roads. Lake Mead National Recreation Area decided to abandon a proposal to build a boat launch and marina because it would have required a road through desert tortoise habitat. Fort Irwin has piloted a “head start” program to conduct research on the biology of neonate desert tortoises. Under this program, females are removed from the wild and lay their eggs in captivity, where the eggs can be protected. In the future, the young could potentially be moved into areas where tortoise numbers have been severely decreased or where they have been extirpated, if considered appropriate. Fort Irwin has installed 7.5 miles of tortoise fencing. Fort Irwin has funded the population-monitoring program in two proposed desert wildlife management areas since 2000. The National Training Center has funded many research programs on desert tortoise behavior, disease, and other topics. The Marine Corps supports an environmental education program; more than 50,000 Marines and family members are given an environmental briefing annually. The Marine Corps provides a portion of the funding required for population monitoring efforts. 
Since the early 1980s, the Marine Corps has conducted or cooperated with numerous desert tortoise studies and research projects. Research projects were recently completed on juvenile survivorship and tortoise ecology, and recently initiated projects include tortoise health assessments and population monitoring. Marine Corps’ Natural Resources staff work closely with the installation’s law enforcement to control free-roaming dogs. The Marine Corps surveyed 23 areas, comprising 935 square miles, to assess the impacts of training on the desert tortoise and its habitat. Edwards closed 42 pitfalls (prospect pits, mine shafts, wells, and irrigation pipes) in critical habitat that were potentially hazardous to tortoises. Edwards prohibits competitive and organized events in critical habitat. Edwards educates personnel on the disposition of captive and displaced tortoises. A desert tortoise adoption program has been in place since 1994; it was established to prevent captive desert tortoises from being returned to the wild, prevent wild tortoises from being taken, and provide a means of tracking captive tortoises. Edwards built 22.7 miles of tortoise fencing in critical habitat to keep tortoises from entering hazardous areas (precision bombing targets) and from crossing well-traveled paved roads, and installed 48 miles of four-strand barbed-wire fence in critical habitat. Edwards revegetated 155.2 acres in critical habitat. Edwards presents an environmental education program on the Mojave Desert ecosystem to local schools on base and in surrounding towns and during public events. Edwards funds or conducts population monitoring in critical habitat and other areas on base. Other research includes vegetation and habitat studies, evaluation of species diversity over time, analysis of soil and vegetation samples for presence of toxic metals, and adaptive management under the base’s resource management plan. 
The Department of Fish and Game (DFG) has acquired and manages more than 12,000 acres. DFG reviews proposed actions on public lands and makes recommendations to BLM; it also reviews and makes recommendations on Integrated Natural Resource Management Plans for military bases. DFG prohibits collecting tortoises from the wild and issues citations to violators. DFG has fenced some lands to keep vehicles out. Though DFG has not installed fencing along roads, it has been a requirement for many projects. Because of large numbers of tortoises on a particular road, along with increased traffic associated with a solar energy plant, fencing was required and was installed by the Desert Tortoise Preserve Committee; a culvert will also be placed under the road. DFG provides funding for signs, brochures, and kiosk information. DFG provides funding for monitoring of long-term study plots. It is co-hosting a workshop on diseases to consolidate known information, foster discussion between experts, and solicit management recommendations. California’s Department of Transportation has purchased 618 acres from San Bernardino County and will transfer them to DFG to mitigate the effects of a highway expansion on desert tortoises; it also installed about 6.5 miles of permanent tortoise fence on I-15. The Department of Game and Fish prohibits the release of wildlife, including desert tortoises, without a special permit. The department has monitored tortoises on several study plots (largely funded by BLM) since 1996; it partially funded population monitoring in one area in 2001 and 2002. The department conducts or funds research on tortoises in the Sonoran Desert (in such areas as life history and disease), which may provide comparative insight for Mojave Desert tortoise recovery efforts. The county’s habitat conservation plan designated the 62,000-acre (100 square-mile) Red Cliffs Desert Reserve. 
The county is working with BLM and the state of Utah to acquire privately owned properties located within the boundaries of the reserve; BLM and the state have acquired through purchase and exchange more than 7,700 acres of privately owned land within the reserve since 1996. Of the estimated 40 dirt roads in the Reserve, 5 remain open for public travel. Service roads are gated and locked. As resources allow, closed roads are being rehabilitated. The county has reseeded an estimated 5 acres of old roads within the reserve. The county compensated willing sellers for loss of grazing within the reserve, for a total of 1,517 animal unit months at a cost of $113,775. The county worked with St. George City, Utah, and BLM to clean up the old city dump, which was located within high-density tortoise habitat in the reserve. At least 30 illegal dumpsites have been cleaned up by the county with the help of volunteer groups. Wild, displaced desert tortoises that test negative for upper respiratory disease are moved, or translocated, to a designated area of the reserve. The county has installed or funded the installation of 40 miles of tortoise fencing. The reserve boundary is being fenced incrementally as development occurs nearby. The county has posted boundary signs to inform people when they are entering the reserve and advise of vehicle, pet, and target shooting restrictions. The county has funded 5 years’ population monitoring (conducted by the Utah Division of Wildlife Resources) at $115,000 per year. The county controls ravens that are identified as threats to tortoises, and maintains a database of known raptor and raven nest sites, which enables monitoring of predation on hatchling tortoises. Under its multiple-species habitat conservation plan, through the Nature Conservancy, the county has purchased grazing preferences from ranchers (on a willing-seller basis) on more than 1,000,000 acres of public land and eliminated grazing on those lands. 
The county has fenced almost 130 miles of highway, at a cost of about $580,000, to keep desert tortoises from being run over. The county funds research in such areas as desert tortoise nutrition, population monitoring, predation by ravens, and translocation. The habitat conservation plan funds two BLM law enforcement rangers, one National Park Service ranger, and one Nevada Division of Wildlife ranger. Clark County provides funding for the operation and management of the Clark County Desert Tortoise Conservation Center. The habitat conservation plan provides funding for a desert tortoise pick-up service. The county educates the public about tortoises; for example, it has hosted contests in which school children estimate when a desert tortoise named Mojave Max will first exit his burrow. This event has resulted in thousands of students researching Mojave Desert temperatures and desert tortoise habits. The county funds radio and newspaper announcements targeted to desert users, reminding them to drive on roads, shoot responsibly, and deposit garbage only at garbage dumps in order to keep raven populations down. In 1995, the committee acquired 1,360 acres of private property, which was the base property for a grazing allotment; since 1994, it has acquired more than 175 acres within the Desert Tortoise Natural Area and has acquired or is in the process of acquiring more than 1,200 acres to buffer the natural area and other critical habitat. It generally sells or donates land it acquires to BLM or the State of California. The committee has rehabilitated 2 miles of road and removed approximately 3 tons of trash from a grazing allotment to date. A naturalist is staffed at the Desert Tortoise Natural Area every spring; the naturalist provides interpretive and educational services to visitors, routinely intercepts releases of tortoises and other turtles, and provides contact information for safe disposition/placement of captive tortoises. 
A resident host/interpreter at a grazing allotment educates visitors to reduce release or take of tortoises. Dogs are prohibited inside the Desert Tortoise Natural Area; the naturalist monitors compliance during the peak visitation period. The committee installed 8 miles of tortoise fencing and commissioned the design and installation of a tortoise culvert along a busy road. The committee has restored habitat at the site of an old toilet block at the Desert Tortoise Natural Area; work is ongoing to camouflage impacts of illegal off-road vehicle activity along the entrance route into the area. The committee hosts twice-yearly work parties to replace lost/stolen/vandalized signs and fences at the area. The committee installed a multimedia interactive kiosk at the California Welcome Center in Barstow, California, to provide desert environmental education to the general public. The committee is evaluating the protective effects of fencing. This report examines (1) the scientific basis for the 1990 listing, critical habitat designation, and recovery plan recommendations for the desert tortoise; (2) the effectiveness of actions taken by federal agencies and others to conserve desert tortoises; (3) what is known about trends in tortoise populations; and (4) costs and benefits associated with tortoise recovery actions since 1980, when one population of the tortoise was listed, to the extent that data were available. To evaluate the scientific basis for the listing decision, critical habitat designation, and recovery plan (known collectively as “key decisions”), we contracted with the National Academy of Sciences to identify and assist in the selection of scientists to provide technical assistance. The persons we selected have recognized expertise in the areas of conservation biology, herpetology, desert ecosystems, and federal land management policy, and collectively represent a range of perspectives and views on the conservation of threatened and endangered species. 
The selection involved a two-step process. First, the academy identified, and provided to GAO, an extensive candidate pool of individuals for possible participation. We selected a smaller pool of scientists from which the final selections were made, based on the scientists’ availability to participate. The academy’s staff administered a questionnaire to identify potential conflicts of interest; no disqualifying conflicts of interest were identified. The scientists participating in the discussion were:

Dr. Roy C. Averill-Murray, Amphibians and Reptiles Program Manager, Nongame Branch, Arizona Game and Fish Department, Phoenix, Arizona

Dr. Perry R. Hagenstein, Institute for Forest Analysis, Planning, and Policy, Wayland, Massachusetts

Dr. Jay D. Johnson, University Animal Hospital, Tempe, Arizona

Dr. James A. MacMahon, Professor of Biology, Utah State University, Logan, Utah

Dr. Dennis D. Murphy, Research Professor, Department of Biology, University of Nevada, Reno, Nevada

Dr. Patrick Y. O’Brien, Senior Research Scientist, Chevron Texaco Energy Research and Technology Company, Richmond, California

Dr. Frederic H. Wagner, Professor of Wildlife and Fisheries, Utah State University, Logan, Utah

GAO provided the scientists with the listing decision, the critical habitat designation, the recovery plan, and key supporting documents. GAO also provided access to other materials referenced in the key decision documents. In a 2-day, facilitated discussion, the scientists provided their views on five questions:

Overall, do the listing decision and critical habitat designation seem reasonable, given the scientific studies and other information that were considered? Where do you agree and what concerns, if any, do you have?

Do the recommended numbers, sizes, and configurations of recovery areas and desert wildlife management areas seem reasonable? What are the strengths and weaknesses of the population viability analysis? 
Do the recovery plan’s recommendations about activities that should be prohibited within protected areas (e.g., grazing, mining, off-road vehicle use) and mitigative actions that should be taken (e.g., fencing or installing culverts underneath heavily traveled roads) seem supported by the scientific studies? Where do you agree and what concerns, if any, do you have?

To what extent do the decision documents acknowledge and accommodate uncertainties in the scientific studies? Do the accommodations seem reasonable?

Do any of the issues addressed in the recovery plan need to be reassessed from time to time? If so, describe. How often do you think such issues should be reassessed, and under what conditions?

To further our understanding of the process used to develop listing decisions, critical habitat designations, and recovery plan recommendations for the desert tortoise, we interviewed officials and collected pertinent documentation from numerous federal agencies, including the U.S. Fish and Wildlife Service, the U.S. Geological Survey, the Bureau of Land Management, the National Park Service, and military installations of the Department of Defense; state and local governments in California, Nevada, and Utah; nongovernmental organizations, such as the Desert Tortoise Preserve Committee, the High Desert Multiple Use Coalition, and the QuadState Coalition; academic scientists; and six of the eight members of the desert tortoise recovery team. To assess the effectiveness of actions taken by federal agencies and others to conserve the desert tortoise and to assess what is known about trends in tortoise populations, we collected relevant land use planning documents, habitat conservation plans, and other official documents, published and unpublished scientific studies, desert tortoise population monitoring reports, survey data collected and compiled by the Management Oversight Group’s Technical Advisory Committee regarding recovery actions, and other reports. 
We interviewed officials from federal and state agencies and other organizations involved with the tortoise, and conducted several site visits to observe tortoise habitat and implementation of conservation actions. Specifically, we made site visits to the Desert Tortoise Conservation Center in Las Vegas, Nevada; the Desert Tortoise Natural Area in California; Joshua Tree National Park; the Marine Corps Air Ground Combat Center at Twentynine Palms, California; the Army’s National Training Center at Fort Irwin, California; and the Red Cliffs Reserve in Washington County, Utah. We also attended the annual symposium of the Desert Tortoise Council in Palm Springs, California, which featured presentations on actions taken to conserve the desert tortoise and results of tortoise recovery efforts and research projects. To identify costs and benefits associated with desert tortoise recovery actions since the tortoise was first listed in 1980, we examined the annual expenditure reports the Service is required to submit to the Congress; these reports compile and summarize federal and state agencies’ annual expenditures on threatened and endangered species, by species. The reports contain expenditure data for land acquisition and for general activities (e.g., conducting research, monitoring species’ populations, developing and implementing recovery plans, and constructing fences). The reporting requirement began for expenditures made in fiscal year 1989, and the last report the Service submitted to the Congress was for expenditures made in fiscal year 1997. We obtained all nine of these reports, as well as the draft report for fiscal year 1998 and the more recent expenditure data (for fiscal years 1999 through 2001) that the Service had compiled as of August 19, 2002, but had not published. Although the Service summarizes and reports data on a species-by-species basis, it does not summarize and report data on an agency-by-agency basis. 
Rather, the Service reports, in addition to its own expenditures, one lump sum for expenditures by other federal agencies. Accordingly, we reviewed and analyzed the agencies’ individual expenditure reports, which are reproduced in an appendix in each report. We were thus able to compare and report information, year by year and in total, on individual agencies’ expenditures on the tortoise and on other species. We excluded from the agencies’ data those expenditures that clearly did not meet the intent of the report, such as expenditures that could not be broken out by species, expenditures made on behalf of sensitive or candidate species (species in need of protection but not listed as threatened or endangered), and power purchases and revenue foregone as a result of actions taken to protect listed species. Nevertheless, our sums did not always match those in the reports because the Service also excluded from its sums expenditures made on certain species, including species that were state listed but not federally listed, species that were listed after the fiscal year for which the expenditures were reported, and species that were in need of protection but were not listed. Although a few of the reports showed which expenditures the Service had excluded from its sums, most did not. In such cases, the total expenditures shown in the report for “other federal agencies” were less than the totals we calculated. Further, because the Service sometimes included land acquisition expenditures in its reported totals and sometimes excluded them, we recalculated the totals to consistently include land acquisition expenditures. We were thus able to consistently depict trends in total expenditures, whether by species, by agency, or by year. We did not verify the accuracy of the expenditures reported by the individual agencies or by the Service, but we checked the consistency of the information we were given, to the extent possible. 
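The screening and recalculation rules described above can be illustrated with a short sketch. The record field names and the function itself are hypothetical, invented only to illustrate the exclusion criteria; they do not reflect the Service's actual data format.

```python
# Hypothetical sketch of the expenditure-screening rules described above.
# Field names are invented for illustration only.

def total_expenditures(records):
    """Sum reported expenditures by agency, applying the exclusions
    described in the text. Land acquisition records are always kept,
    so totals are comparable across years, agencies, and species."""
    totals = {}
    for r in records:
        # Exclude expenditures that cannot be broken out by species.
        if not r["breakable_by_species"]:
            continue
        # Exclude expenditures on sensitive or candidate (unlisted) species.
        if r["species_status"] != "listed":
            continue
        # Exclude power purchases and revenue foregone.
        if r["is_power_or_revenue_foregone"]:
            continue
        totals[r["agency"]] = totals.get(r["agency"], 0.0) + r["amount"]
    return totals
```

Applying one consistent rule set of this kind across all the annual reports is what allows year-by-year and agency-by-agency comparisons despite the reports' differing treatment of land acquisition.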
We reviewed the guidance the Service provides to agencies on the types of expenditure data to submit, and we discussed with Service officials the criteria and methods by which the expenditure data are reviewed and edited. Additionally, we discussed with several agency officials the type of expenditure data they submit and the methods by which they estimate their expenditures. We adjusted all the expenditures to constant 2001 dollars. Because tortoise-related expenditures were not collected prior to the 1989 annual report, and because comprehensive and current expenditure data were not available for the years since 1998, we requested estimates of staff time devoted to the tortoise from the five key federal agencies involved in the tortoise’s recovery: the Bureau of Land Management, Department of Defense, Fish and Wildlife Service, National Park Service, and U.S. Geological Survey. We asked these agencies to provide, for each employee who worked on tortoise-related activities, the employee’s name, grade level, area of expertise, and percent of time devoted to tortoise-related activities during each fiscal year from 1980 through 2001. Through discussions with various agency officials, we determined that the request was reasonable and that the agencies would be able to provide us with fairly reliable staff-time estimates by consulting various staff members, personnel records, and historical data. Based on these discussions, we provided each of the five agencies with instructions, guidance, and examples of the information sought. We received staff-time estimates from all but two of the pertinent agency offices (i.e., those offices likely to have extensive experience and involvement in desert tortoise issues). We did not receive estimates from Nellis Air Force Base, Nevada, or the Chocolate Mountain Aerial Gunnery Range, California.
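The valuation arithmetic applied to these staff-time estimates (an annual salary for each grade level and year, a benefits loading, and conversion to constant 2001 dollars, as described below) can be sketched in a few lines. The salary figures, benefits rate, and deflators in this sketch are illustrative placeholders, not actual Office of Personnel Management tables or OMB Circular A-76 rates.

```python
# Hypothetical sketch of the staff-time valuation method: the annual salary
# for a grade level in a given year, times the percent of time devoted to
# tortoise work, plus a benefits loading, converted to constant 2001 dollars.
# All figures below are illustrative placeholders, not actual OPM salary
# tables, OMB Circular A-76 benefits percentages, or real deflators.

# Illustrative GS base salaries (year -> grade -> step-5 annual salary).
SALARY_TABLE = {
    1995: {9: 34000, 11: 41000, 12: 49000},
    1996: {9: 35000, 11: 42000, 12: 50000},
}

BENEFITS_RATE = 0.325                         # placeholder benefits percentage
DEFLATOR_TO_2001 = {1995: 1.15, 1996: 1.12}   # placeholder price deflators


def staff_time_value(year: int, grade: int, pct_time: float) -> float:
    """Value, in constant 2001 dollars, of one employee's tortoise work."""
    salary = SALARY_TABLE[year][grade]
    loaded = salary * (1 + BENEFITS_RATE)     # salary plus benefits package
    nominal = loaded * pct_time               # fraction of the year worked
    return nominal * DEFLATOR_TO_2001[year]   # adjust to constant 2001 dollars


# Example: a GS-11 who spent 25 percent of fiscal year 1995 on the tortoise.
value = staff_time_value(1995, grade=11, pct_time=0.25)
print(round(value, 2))
```

Summing such values over every reported employee, year, and agency would yield the staff-time totals discussed in this report.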
To analyze the estimates, we used the Office of Personnel Management’s historical salary tables to calculate the salary for each grade level in each year. In accordance with guidance contained in Circular A-76, issued by the Office of Management and Budget (OMB), we used step 5 of each grade level to calculate salaries, except when the agency’s data included the step. For staff who were members of the military, we asked the installation to convert the military pay grade to the equivalent general schedule grade. Finally, based on A-76 guidance and our discussions with officials of OMB and MEVATEC Corporation (a contractor that advises and assists the Department of Defense with A-76 cost comparisons), we determined, for each year, the salary percentage that represented the value of the federal benefits package (i.e., health insurance, life insurance, pension plans, and workers’ compensation). We adjusted the staff-time values to constant 2001 dollars. We obtained staff-time estimates from the following federal agencies and offices:
- Bureau of Land Management—California Desert District (District Office and five field offices: Ridgecrest, Palm Springs, El Centro, Barstow, and Needles); Las Vegas Field Office; St. George Field Office; Utah State Office; and Cedar City District Office.
- Department of Defense—National Training Center, Fort Irwin; Marine Corps Air Ground Combat Center, Twentynine Palms; Edwards Air Force Base; and Naval Air Weapons Station, China Lake.
- U.S. Geological Survey—Mid Continent Ecological Science Center, Fort Collins; Northern Rocky Mountain Science Center; Western Ecological Research Center Field Stations in Las Vegas, Nevada; Riverside, California; and St. George, Utah (this field station no longer exists).
- Fish and Wildlife Service—Laguna Niguel/Carlsbad Field Office, Ventura Field Office, Barstow Field Office, Salt Lake City Office, Phoenix Office, Reno Office, Las Vegas Office, and Portland Regional Office.
- National Park Service—Joshua Tree National Park, Mojave National Preserve, Lake Mead National Recreation Area, and Zion National Park.

To obtain a perspective on potential economic effects associated with the tortoise recovery effort, we reviewed the economic analyses contained in various documents, such as the critical habitat designation for the tortoise, environmental impact statements prepared by federal agencies, and habitat conservation plans. To gauge the potential economic effects of grazing restrictions in tortoise habitat, we requested that the U.S. Department of Agriculture’s Economic Research Service (ERS) calculate county-level economic effects, using a recently published analytical method. The authors had developed this method to estimate both the direct and indirect effects of grazing restrictions. Estimates of the direct (ranch-level) effects were based on the value of county cattle and sheep sales that would be lost if grazing restrictions were imposed. Estimates of the indirect (and induced) effects of grazing restrictions were then derived from an input-output model, using the estimates of the direct effects. The indirect effects include the effects in all industries that supply inputs to cattle and sheep producers; the induced effects include changes in farm purchases due to changes in farm income. At our request, the Economic Research Service estimated hypothetical 10- and 20-percent reductions in grazing owing to restrictions imposed to protect the desert tortoise. Such levels of reduction were deemed reasonable by the ERS researchers, given that not all land in the counties evaluated was federally owned or within critical habitat for the tortoise. (These hypothetical reduction levels are similar to those used in the authors’ original analysis.) The counties included in the study were those with known populations of desert tortoises and those with critical habitat for the species.
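A minimal sketch of this direct-plus-indirect calculation, assuming a single output multiplier of the kind an input-output model such as IMPLAN would supply (all figures here are illustrative, not the actual ERS data or multipliers):

```python
# Hypothetical sketch of estimating county-level effects of a grazing
# reduction: the direct effect is the lost value of cattle and sheep sales,
# and the indirect/induced effects are approximated with an output multiplier
# of the kind an input-output model such as IMPLAN would supply. All numbers
# here are illustrative, not actual IMPLAN or Census of Agriculture data.

def grazing_reduction_effects(livestock_sales: float,
                              reduction_pct: float,
                              output_multiplier: float) -> dict:
    """Return direct, indirect/induced, and total effects in dollars."""
    direct = livestock_sales * reduction_pct   # lost ranch-level sales
    total = direct * output_multiplier         # input-output expansion
    return {
        "direct": direct,
        "indirect_and_induced": total - direct,
        "total": total,
    }

# Example: $20 million in county livestock sales, a hypothetical 10-percent
# grazing reduction, and an illustrative output multiplier of 1.8.
effects = grazing_reduction_effects(20_000_000, 0.10, 1.8)
print(effects)
```

In the published method, the multiplier is not a single constant but comes from a full county-level input-output table; the sketch only shows how direct losses propagate into total economic effects.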
Other counties were also included as part of a regional economic analysis. The study included Mohave County in Arizona; Kern, Los Angeles, Riverside, San Bernardino, and Inyo Counties in California; Clark, Esmeralda, Nye, and Lincoln Counties in Nevada; and Washington County in Utah. It relied on data on grazing activity from the National Agricultural Statistical Service’s Census of Agriculture, the Department of Agriculture’s Forest Service, and the Bureau of Land Management. General economic data and regional economic data were supplied through IMPLAN—the input-output modeling framework, software, and database developed by the authors and discussed in the referenced article. We performed our work from November 2001 through September 2002 in accordance with generally accepted government auditing standards. In addition, Carol Bray, Jennifer Duncan, Kathleen Gilhooly, Tim Guinane, Jaelith Hall-Rivera, Cynthia Norris, Judy Pagano, and Pamela Tumler made key contributions to this report.

Since the 1980s, biologists have been concerned about declines in the Mojave desert tortoise, which ranges through millions of acres in the western United States. The tortoise was first listed as a threatened species under the Endangered Species Act in Utah in 1980; it was later listed as threatened rangewide in 1990. The listing and designation of critical habitat for the tortoise, as well as recommendations in the tortoise recovery plan, have been controversial. In our report, we evaluate—assisted by scientists identified by the National Academy of Sciences—the scientific basis for key decisions related to the tortoise, assess the effectiveness of actions taken to conserve desert tortoises, determine the status of the population, and identify costs and benefits associated with desert tortoise recovery actions.
The 1990 listing of the desert tortoise, the critical habitat designation, and recommendations in the recovery plan for the tortoise were reasonable, given the information available at the time. Under the Endangered Species Act, listing and critical habitat decisions must be based on the best available scientific and commercial data. These decisions and the recovery plan recommendations were based on sources that reflected existing knowledge about desert tortoises. To protect the tortoise, government agencies have restricted grazing and off-road vehicle use and taken other protective actions in desert tortoise habitat, but the effectiveness of these actions is unknown. Research is underway in several areas, including tortoise disease, predation, and nutrition, but the research has not assessed the effectiveness of the protective actions. Furthermore, the status of desert tortoise populations is unclear because data are unavailable to demonstrate population trends. Before the tortoise may be delisted, populations must increase or remain stable for at least 25 years—one generation of desert tortoises. Determining the trends will cost an estimated $7.5 million in the first 5 years, plus additional monitoring every 3 to 5 years at a cost of about $1.5 million per year of monitoring. The Fish and Wildlife Service depends on other agencies and organizations to assist with funding and monitoring, but these agencies and organizations cannot guarantee assistance from year to year because of other priorities. Expenditures on desert tortoise recovery since the species’ first listing in 1980 exceed $100 million, but the exact investment is unknown. The investment includes $92 million in “reasonably identifiable” expenditures for the tortoise, plus staff time valued at about $10.6 million.
The overall economic impact of the tortoise recovery program—including benefits as well as the costs incurred by local governments, landowners, and developers as a result of restrictions—is unknown.
In response to the growing threat of weapons of mass destruction, in December 2002 President Bush signed National Security Presidential Directive 23, which stated that an initial ballistic missile defense capability to defend the United States and deployed forces should be deployed in 2004. Also in 2002, the Secretary of Defense created the Missile Defense Agency to develop an integrated system that would have the ability to intercept incoming missiles in all phases of their flight. The Secretary of Defense’s goals for the Ballistic Missile Defense System (BMDS) included using prototypes and test assets to provide an early capability and enable the services to field elements of the system as soon as possible. In order to develop a system that can more readily respond to a changing threat and be more easily modified to enhance system performance using new technologies, the Secretary of Defense exempted the Missile Defense Agency from the traditional requirements processes. Ballistic missile defense is a challenging mission for DOD, simultaneously involving multiple combatant commands and services employing complex capabilities that require the development of many elements. Figure 1 shows how a notional scenario to engage an incoming ballistic missile, including the commands and services involved, could unfold. BMDS is eventually intended to be capable of defeating ballistic missiles during all three phases of a missile’s flight. However, the initial capability is intended to intercept missiles only in the midcourse and terminal phases. BMDS requires a unique combination of elements—space-based sensors, surveillance and tracking radars, advanced interceptors, command and control, and reliable communications—working together as an integrated system. Table 1 below explains the role of the BMDS elements that DOD plans to make available to the warfighter during fiscal years 2006 through 2011.
In developing BMDS, the Missile Defense Agency is using an incremental development and acquisition process to field militarily useful capabilities as they become available. Under this process, the Missile Defense Agency will develop ballistic missile defense elements and then transition elements to the military services for operation after approval by DOD senior leadership. In preparing for each element’s transition, the Missile Defense Agency is expected to collaborate with the services to develop agreements explaining each organization’s responsibilities, including which organization will pay for operational costs. Most of these transition plans are currently being drafted. The only BMDS element that has transferred to a service is the Patriot, which was transferred to the Army in 2003. The Missile Defense Agency plans to develop and field capabilities in 2-year blocks. The configuration of a given block is intended to build on the work completed in previous blocks. Block 2004, which was scheduled to be deployed during calendar years 2004-2005, is the first biennial increment of BMDS that is intended to provide an integrated set of capabilities. Table 2 below shows, for each block of capability, the cumulative total number of each element that the Missile Defense Agency plans to deliver. The capabilities in bolded text are cumulative totals that represent new or additional capabilities since the previous block. DOD’s framework for BMDS ground and flight testing through block 2006 (December 2007) is established in the Integrated Master Test Plan. This plan defines the test plans for the BMDS and its elements and identifies test objectives. In 2006, the Missile Defense Agency plans to conduct 10 flight tests—3 for the Aegis ballistic missile defense element, 4 for the Terminal High Altitude Area Defense element, and 3 for the Ground-based Midcourse Defense element.
We reported last year that the Missile Defense Agency has conducted a variety of tests that provide some degree of confidence that the limited defensive operations will operate as intended. However, we also pointed out that some elements have not been fully tested and that performance of the system remains uncertain because the Missile Defense Agency has not conducted an end-to-end flight test using operationally representative hardware and software. In addition, DOD’s fiscal year 2005 annual test report states that “…there is insufficient evidence to support a confident assessment of Limited Defensive Operations…” Whereas the Missile Defense Agency is the developer of BMDS, the U.S. Strategic Command is responsible for coordinating ballistic missile defense operations that will be conducted by multiple commands, such as U.S. Northern Command and U.S. Pacific Command. Strategic Command developed an overall strategic concept of operations for ballistic missile defense in November 2003 that explains how all aspects of the system are to be integrated. Strategic Command is also tasked with directing, coordinating, and reporting Military Utility Assessments of the ballistic missile defense system. Military Utility Assessments are iterative, event-driven assessments that document the combatant commanders’ views on the expected military utility of the system. These assessments are intended to independently examine the degree to which delivered capabilities support the warfighter’s ability to execute the missile defense mission; record all data and results from flight tests, ground tests, and wargames and exercises; and focus on the overall ballistic missile defense system rather than the individual elements. As of January 2006, one assessment had been completed (in April 2005); its scope was limited due to the system’s immaturity at that time.
Operations and support costs (hereafter called operational costs) are the resources required to operate and support a weapon system and include maintenance of equipment/infrastructure, operations of forces, training and readiness, base operations, personnel, and logistics. Operational costs for weapons systems typically account for 72 percent of a weapon system’s total life-cycle cost and can generally be found in the Future Years Defense Program (FYDP). The FYDP is a DOD centralized report consisting of thousands of program elements that provides information on DOD’s current and planned budget requests. It is one of DOD’s principal tools to manage the spending for its capabilities and is available to help inform DOD and Congress about spending plans for the next 5 years and to make resource decisions in light of competing priorities. The FYDP is a report that resides in an automated database, which is updated and published to coincide with DOD’s annual budget submission to Congress. It provides projections of DOD’s near and midterm funding needs and reflects the total resources programmed by DOD, by fiscal year. DOD has made progress in planning to operate BMDS, but aside from testing issues we have previously reported on, planning is incomplete in that it lacks several critical elements such as establishing operational criteria, resolving security issues, and completing training plans. DOD has developed procedures and guidance, created an organization to integrate contingency plans, and planned and conducted some training and exercises. However, this planning lacks critical elements such as development of operational criteria, resolution of security issues, completion of training plans, and approval of dual status for the commanders of the National Guard units responsible for operating the ground-based element. 
DOD’s operational planning is incomplete because it is developing BMDS in a unique way and exempted BMDS from the department’s traditional requirements guidance. DOD officials agreed that planning for new weapon systems generally includes critical planning elements such as development of training plans, assessment of military specialties, identifying support requirements, and successful operational testing. U.S. Strategic Command officials agreed that this level of detailed planning is necessary but has not been done because BMDS is being developed in a nontraditional way, and further stated that warfighters are ready to use the system on an emergency basis. However, without a comprehensive plan establishing what needs to be accomplished before declaring BMDS operational and assigning responsibility for doing such planning, the Secretary of Defense may not have a transparent basis for declaring BMDS operational, which will become more important as capabilities are added in subsequent blocks and Congress considers requests to fund operations. Moreover, it may be difficult for DOD to identify and prioritize actions and determine whether the return on its significant development investment can be realized. DOD has taken positive steps in planning to operate the BMDS. For example, some operating plans and guidance are either in development or in place. In addition, the U.S. Strategic Command has created a subcommand, the Joint Functional Component Command for Integrated Missile Defense, to integrate planning and operational support for missile defense. The Missile Defense Agency and the combatant commands have also been actively planning and conducting training and exercises. DOD has developed some operational plans, established guidance, and conducted capability demonstrations to refine operating procedures. In 2003, the U.S. 
Strategic Command was assigned responsibility for planning, integrating, and coordinating global missile defense operations including developing a concept of operations. Since then, U.S. Strategic Command has coordinated development of plans and orders that explain how the ballistic missile defense mission will be conducted, including command relationships, who authorizes missile launches, and other policies. For example, some combatant commands have developed plans that specify how they will defend against incoming ballistic missiles and how they will support other combatant commands in doing so. DOD has also developed tactics, techniques, and procedures for how the ballistic missile defense mission would be conducted. Strategic Command’s subcommand for missile defense is working with the combatant commands to ensure these plans are integrated. The services have also published service doctrine and DOD is currently developing joint doctrine that will explain concepts for planning, coordinating, and conducting the ballistic missile defense mission. The doctrine will be revised as BMDS capabilities increase and as procedures for conducting the mission evolve. In addition to developing plans, DOD has established some policy guidance clarifying command and control for the ballistic missile defense mission. The Joint Staff has issued several orders providing guidance for ballistic missile defense mission planning which reflect policy decisions made by senior DOD leadership. For example, orders issued in fall 2005 resolved policy issues regarding weapons release authority, defined various system readiness conditions and defense priorities, and explained the rules of engagement and the relationships between combatant commands. Since the fall of 2004, DOD has been in a transitional period (called “shakedown”) to move from development to operations. 
As part of this process, the Missile Defense Agency, in conjunction with operational commanders and contractors, has completed 11 capability demonstrations and U.S. Strategic Command’s subcommand for missile defense is planning the twelfth for March 2006. The capability demonstrations are being conducted to practice and refine procedures for transitioning BMDS from a developmental configuration to an operational configuration and maintain the system in the operational configuration for a specific time period. The purpose behind these demonstrations is to reduce operational risks by demonstrating capabilities prior to combat use, using trained military personnel to exercise procedures in an operational environment. According to officials, there is no plan to conduct a specific number of these capability demonstrations; rather, they will be conducted as needed. In addition, U.S. Strategic Command officials said that the subcommand for missile defense will conduct readiness exercises to practice and refine warfighter tactics and procedures. Because U.S. Strategic Command has several other broad missions in addition to missile defense, it created a subcommand to integrate planning and operational support for ballistic missile defense. This subcommand, called the Joint Functional Component Command for Integrated Missile Defense, was created in early 2005 for the purpose of integrating and globally synchronizing missile defense plans to meet strategic objectives. This subcommand is drafting a global concept of operations for ballistic missile defense and is working with other combatant commands to integrate their ballistic missile defense operating plans. The subcommand is also operating the BMDS asset management process, which is a tool for scheduling and tracking the status of each ballistic missile defense element. This process uses a real-time database that shows when each BMDS element is being used for testing, exercises, maintenance, development, or operations. 
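As a purely hypothetical illustration (the structure of DOD’s actual database is not described in this report), an element-status tracker of this kind could be organized along the following lines, with each element assigned one of the five usage categories. The element names in the example are drawn from the report, but their statuses are invented.

```python
# Illustrative sketch (not the actual DOD system) of an asset-status tracker
# like the one described above: each BMDS element carries a current status
# drawn from a fixed set of categories, and the tracker can be queried for
# which elements are in a given status at a given time.

ALLOWED_STATUSES = {"testing", "exercises", "maintenance",
                    "development", "operations"}


class AssetTracker:
    def __init__(self):
        self._status = {}  # element name -> current status

    def set_status(self, element: str, status: str) -> None:
        if status not in ALLOWED_STATUSES:
            raise ValueError(f"unknown status: {status}")
        self._status[element] = status

    def elements_in(self, status: str) -> list:
        """All elements currently in the given status, sorted by name."""
        return sorted(e for e, s in self._status.items() if s == status)


# Hypothetical statuses for illustration only.
tracker = AssetTracker()
tracker.set_status("Aegis BMD", "operations")
tracker.set_status("Sea-based X-band radar", "testing")
tracker.set_status("Ground-based interceptors", "operations")
print(tracker.elements_in("operations"))
```

A real scheduling process would also record time windows for each activity across the fiscal year; the sketch shows only the status-by-element idea.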
The asset management process schedules activities for the coming fiscal year and is updated throughout the year. The Missile Defense Agency and combatant commands have planned and conducted some training and exercises for ballistic missile defense to practice and refine command and control, tactics, procedures, and firing doctrine specified in the contingency and supporting plans. The Missile Defense Agency works with the combatant commands to incorporate ballistic missile defense training into each other’s exercises. For example, the combatant commands will include training on their mission-essential tasks during the Missile Defense Agency’s exercise and wargame program, and the Missile Defense Agency will try to incorporate ballistic missile defense training into the exercises scheduled by the combatant commands. For instance, U.S. Strategic Command integrated ballistic missile defense with all of its other missions in its fall 2005 command exercise and will include ballistic missile defense to a limited extent in the command’s upcoming spring exercise for the first time. The Missile Defense Agency also provides some ballistic missile defense training programs and course development for individuals, units, and combatant command staffs. The Missile Defense Agency provides initial operator training on specific elements and the crews are subsequently certified by their unit commanders. The agency also provides training to combatant command staffs on BMDS policy and procedures and command and control. For example, during an exercise we observed at the training center in Colorado, the Northern Command staff, Army crews from the battalion in Alaska, and Navy crews from the Aegis training center in Virginia were linked electronically. In the future, this type of training will be enhanced via the Distributed Multi-echelon Training System, which will enable warfighters to participate in live, virtual, and integrated training from their duty station.
The Missile Defense Agency also cochairs the Integrated Training Working Group with U.S. Strategic Command to address training and education goals, objectives, roles, missions, and policy decisions among the combatant commands and services. Despite the progress made since 2002, DOD’s planning to operate BMDS is incomplete and lacks several critical elements. DOD officials agreed that planning for new weapon systems, as articulated in requirements guidance, generally includes critical planning elements such as establishing operational criteria, identifying personnel requirements, developing training programs, completing successful testing, and establishing readiness reporting. However, DOD’s BMDS planning is missing several of these critical elements, such as specific operational criteria for the overall BMDS and most of the system’s elements that must be met before declaring that either limited defensive operations or subsequent blocks of capability are operational. Furthermore, security issues involving responsibility for and funding of necessary security remain unresolved and training plans are still evolving. In addition, DOD has not approved dual status for the commanders of the National Guard units responsible for operating the ground-based element. U.S. Strategic Command officials agreed that this level of detailed planning is necessary but has not been done because BMDS is being developed in a nontraditional way, and further stated that warfighters are ready to use the system. However, without comprehensive planning laying out the steps that need to be completed before declaring the system operational, development of operational criteria, and assignment of responsibility for doing such planning, DOD may face uncertainty about the basis that will be used to declare BMDS operational. This, in turn, may make it difficult for DOD to identify and prioritize actions needed to achieve this end effectively and efficiently.
Moreover, the Secretary of Defense and Congress may not have a sound basis for assessing the system’s status and progress toward an operational capability. Prior to initially employing a new weapon system, DOD customarily prepares planning documents that identify actions that must be taken and criteria that must be met before the system can be declared operational. DOD officials agree that requirements guidance states that these planning documents identify any changes needed to doctrine, organizations, training, materiel, leadership and education, personnel, and facilities. Our prior work on successful management of complex defense programs shows that such planning provides a basis for knowing what steps need to be completed before a weapon system can be declared operational. As part of the planning for new weapon systems, DOD guidance, as well as DOD practices based on discussions with defense officials, requires initial operating capability criteria (hereafter called operational criteria) to be met to ensure that necessary planning has been completed to initially employ a new weapon system. These operational criteria include critical elements such as: an assessment of the military specialties needed; identification of personnel requirements; development of individual, unit, and joint training programs; system supportability, including identifying logistics and maintenance requirements; successful operational testing; and the ability to report system and unit readiness. If the new system is a part of a system of systems, then these operational criteria are to be integrated with those of the related system elements. DOD officials told us that these operational criteria also describe actions that the services typically take to prepare to operate a new system. Likewise, the services have developed instructions that embody these principles for new systems.
For example, an Air Force instruction states that an initial operating capability can be declared for a system when it has successfully completed operational testing, key logistics support is in place, and the personnel necessary to operate, maintain, and support the system are trained. This instruction further states that the following items should be in place before declaring that operational capability has been achieved: a concept of operations, a system training plan, a personnel plan, an operational protection guide, a logistics support plan, a system security design, successful operational testing and completion of a successful trial period, and the ability to report readiness at a certain level. Army and Navy regulations also specify operational criteria. For example, new Army weapon systems must have adequately trained operators who are equipped and supported to execute the mission before the system can be declared operational. Furthermore, a Navy instruction states that a logistic support strategy, identification of personnel requirements, manpower estimates, and a plan for training shall be developed for new weapon systems. As of February 2006, according to DOD officials, DOD had not yet developed any overarching operational criteria to be met before declaring the overall BMDS operational, either for limited defensive operations or subsequent blocks of capability. Instead, officials stated that the Secretary of Defense will declare BMDS operational based on test results, confidence in the system, the threat, and recommendations from the Commander, U.S. Strategic Command, the Commander of the subcommand for missile defense, commanders of other combatant commands, and the Director, Missile Defense Agency. Additionally, the Missile Defense Support Group, which was formed to advise senior DOD leaders on policy, operations, acquisition, and resources for BMDS, has not defined any criteria with which to make recommendations about operational capability.
DOD officials have told us that while operational criteria describe actions that services customarily take to prepare to operate a new system, these actions have not been taken for BMDS. Some DOD officials have suggested that DOD should not have to meet operational criteria due to the urgency of fielding a ballistic missile defense capability as soon as possible. DOD has done some assessments in which warfighters raised issues in areas that the operational criteria are intended to address. For example, combatant commanders have raised concerns about security and personnel. Recognizing that there may be planning gaps, the Army Space and Missile Defense Command has begun to identify what actions need to be taken—such as security planning, force design analysis, personnel requirements, a training sustainment program, and a system training plan—for the warfighter to use the BMDS and some of the elements. The officials acknowledged that, ideally, a master plan should be developed to track these actions. However, even though the Army Space and Missile Defense Command’s preliminary analysis and the other DOD assessments may provide a foundation for developing operational criteria, the Command officials stated they are not responsible for doing so and have not been tasked with ensuring that the services do so when an element is transitioned to the service. In August 2005, the Commander, U.S. Strategic Command recognized that as BMDS approached operational status, DOD needed to take necessary actions to put the ballistic missile defense elements in the hands of the warfighters, actions that would address base operations, manning, force protection, and other aspects of military support. The Commander recommended that a lead service be named for each BMDS element. This lead service would be responsible for developing doctrine, training, organizations, and personnel. This concept was briefed to the Joint Staff in November 2005 and in January 2006.
The Joint Staff recommended that the Office of the Secretary of Defense for Acquisition, Technology, and Logistics name a lead service for only two elements—the Army for the forward-based radar and the Air Force for the ballistic missile defense mission of the Cobra Dane radar. On February 11, 2006, the Deputy Secretary of Defense approved this recommendation. According to DOD officials, operational criteria also have not been developed for most BMDS elements. As shown in table 3, DOD has not developed any operational criteria for five of eight ballistic missile defense elements, and criteria for two more are being drafted. DOD has developed and approved operational criteria for only one BMDS element, the Patriot PAC-3 Missile System. The Army developed operational criteria to ensure the Army was prepared to operate Patriot and specified these criteria in two capabilities documents (dated November 2000 and July 2003). These documents included criteria in areas such as support equipment, training and training support for system users, a logistics support concept and logistics standards, security, maintenance planning, and personnel. The Army determined these criteria were met and declared operational capability was achieved in June 2004 after the system transferred to the Army from the Missile Defense Agency in 2003. Although DOD is developing plans to transition some BMDS elements to the services, these plans, according to DOD officials, are not required to include operational criteria. However, the Air Force and the Army have elected to develop operational criteria for two BMDS elements as part of the transition plans. 
For example, Air Force Space Command officials stated they have drafted operational criteria for the Upgraded Early Warning Radar that include: testing to demonstrate the radar meets required performance standards for existing missions and the ballistic missile defense mission; training for operators, maintainers, and logistics support personnel; a successful trial period to validate system performance; and adequate support capability and sufficient spare parts. The draft plan to transition the Terminal High Altitude Area Defense element to the Army is also supposed to include operational criteria such as: system training plan and identification of leader development courses; system security requirements; supportability strategy; manpower estimate; and development of a Capabilities Development Document, which, according to DOD guidance, typically includes operational criteria. DOD officials stated that operational criteria, such as those that DOD guidance requires to be met before initially employing a new weapon system, may not be developed for some elements. For example, operational criteria will probably not be developed for elements that either are not likely to transition from the Missile Defense Agency to a service or are expected to be contractor operated, such as the sea-based radar and the forward-based radar. Moreover, the Navy has not developed operational criteria for the Aegis ballistic missile defense element. Navy officials stated that they would only develop operational criteria and establish a timeline for achieving an initial capability if the Navy decides to buy ballistic missile defense capability for more ships than the Missile Defense Agency currently plans to buy.

Although DOD has developed security policies specifically for BMDS, unresolved security issues remain and it is not clear when these issues will be resolved. 
Specifically, DOD has not resolved issues of who is responsible for security of BMDS elements and which organization is financially responsible for funding required security. In addition, DOD may have difficulty meeting security requirements at some locations because not all the funding has been allocated. Despite this situation, Joint Staff and combatant command officials stated that a decision to declare BMDS operational does not necessarily depend on resolving these issues. In July 2004, the Deputy Secretary of Defense designated the highest security level for BMDS when it is operational because damage to this system would harm the strategic capability of the United States. The Deputy Secretary also designated U.S. Strategic Command as the oversight authority responsible for coordinating security issues with other combatant commands, the services, and the Missile Defense Agency. This was done, in part, to identify budget requirements. This policy was further clarified in a May 2005 memo stating that the Commander, U.S. Strategic Command has the authority to designate the security level for each BMDS element and is responsible for developing security standards, policies, and procedures for BMDS. In October 2005, U.S. Strategic Command issued a directive specifying the standards for BMDS security and setting the security level for each BMDS element. Despite these directives, however, combatant commands have expressed concerns about which DOD commands are responsible for actually providing and paying for BMDS security, particularly for those elements that will be contractor operated and are expected to be available to the warfighter in fiscal year 2006. According to U.S. 
Strategic Command officials, BMDS elements at the highest security level require, for example, two lines of defensive security, including sensor fences and sufficient personnel to achieve a specific response rate; integrated electronic security systems; entry control; and access delay and denial systems. These measures are expensive—the Missile Defense Agency estimated that security measures for three BMDS elements will cost about $350 million over fiscal years 2006-2011. However, Office of the Secretary of Defense for Acquisition, Technology, and Logistics, Joint Staff, and other DOD officials said that service estimates of security requirements (personnel and costs) are generally higher and that some of these costs are not budgeted by either the services or the Missile Defense Agency. Furthermore, although U.S. Strategic Command has oversight responsibility and has conducted some security inspections, Command officials told us that ensuring security requirements are met will actually be done by a service or the combatant command where the element is located. As discussed above, the U.S. Strategic Command and the Joint Staff recommended that the Office of the Secretary of Defense (OSD) for Acquisition, Technology, and Logistics assign a “lead service” for each BMDS element that would be responsible for providing security, ensuring security standards are met, and budgeting for any associated costs in the next Future Years Defense Program (which will be for fiscal years 2008-13). Although negotiations on this issue are ongoing, the Missile Defense Agency agreed in December 2005 to fund the sea-based radar and forward-based radar costs for fiscal years 2006 and 2007, Air Force Cobra Dane radar costs for fiscal year 2007, and contractor logistic support through fiscal year 2013. However, DOD officials stated that there are significant disagreements between the services and the Missile Defense Agency over the levels of support and force protection required. 
Further, the services and the Missile Defense Agency have not resolved disagreements over which organization will fund operational costs or which organization will provide and fund force protection beyond fiscal year 2007. It is not clear whether the recent designation of lead service for only two BMDS elements will help resolve these issues in time to be reflected in the development of the fiscal years 2008-13 Future Years Defense Program. Funding issues could prevent DOD from meeting security requirements at some locations before the system is declared operational. For example, both Vandenberg and Schriever Air Force Bases require a combination of additional security personnel and technology improvements to meet security requirements. Although some personnel were recently added and the Air Force has requested funding for the technology improvements, as of February 8, 2006, not all the required personnel and technology were in place. The Army also had to increase the military police unit to protect the missile fields at Fort Greely, Alaska, and the cost for snow removal is nearly a million dollars a year. Security will become increasingly important and costly as additional BMDS elements are placed in more locations, particularly those outside the continental United States (see table 2). For example, DOD is planning a third site for the ground-based element and four forward-based radars, and officials have noted that the estimated cost for protecting the forward-based radar could double for austere locations.

Although DOD has made progress in developing some training, the training plans prepared by the combatant commands under the Joint Training System are evolving, as are readiness assessments for BMDS. The Joint Training System is DOD’s authoritative process for combatant commands and others to develop training plans, conduct training, and assess proficiency. 
This system requires combatant commands to develop annual training plans based on the mission-essential tasks required to perform assigned missions. The Joint Training System also includes an automated, Web-based system to track progress. The mission-essential tasks are also the basis for DOD readiness assessments such as the Defense Readiness Reporting System and the Joint Quarterly Readiness Review. DOD has not yet completed all the planning as part of the Joint Training System for ballistic missile defense. For example, the U.S. Strategic Command subcommand for missile defense is developing but has not yet completed an annual training plan and a list of mission-essential tasks under the Joint Training System. Although some combatant commands have individually drafted some mission-essential tasks for ballistic missile defense, the subcommand’s efforts are intended to develop a list that will be standardized and integrated across combatant commands. Once developed, these mission-essential tasks need to be entered into the Joint Training System’s Web-based tracking system, which currently does not include ballistic missile defense tasks.

The roles of organizations involved in ballistic missile defense training are evolving, and DOD is still developing some important aspects of its training program. The Missile Defense Agency has done substantial work to develop BMDS element and command training as well as to develop and conduct exercises for the combatant commands and services. However, the U.S. Strategic Command’s subcommand is beginning to assume more responsibilities for training, such as developing the annual training plan and mission-essential tasks. The two organizations are negotiating which organization will assume which training functions, but, as of November 2005, according to DOD officials, no final decisions had been made. 
The subcommand, with a supporting working group, is working on several important aspects of ballistic missile defense training that are not yet complete even though additional elements, such as the forward-based radar and the sea-based radar, are expected to be made available to the warfighter in 2006. The subcommand and working group are also developing: an overarching training vision; a global BMDS employment guide for how to “fight the system” with more elements than just the ground-based element; a method to systematically integrate ballistic missile defense into the Joint Staff’s exercise program and crosswalk these exercises with the ballistic missile defense annual training plan; and a training and certification program for nonservice-owned elements such as the sea-based radar and the forward-based radar. Development of a standardized list of joint mission-essential tasks will form the basis for DOD readiness assessments such as the Defense Readiness Reporting System and the Joint Quarterly Readiness Review. Joint Staff officials told us that in some of the recent quarterly reviews, U.S. Strategic Command submitted a subjective evaluation of ballistic missile defense as part of the review. However, the officials said that the Joint Staff could not assess the Command’s input during the review because there is not yet an approved, common list of mission tasks and the system has not been declared operational; thus, there was no “yardstick” for them to use to assess the readiness to conduct the ballistic missile defense mission. Regarding input into the Defense Readiness Reporting System, U.S. Strategic Command officials stated that inputs are usually based on the mission-essential tasks, which are assessed using objective effectiveness measures and some subjective commander’s judgment. 
However, since the mission-essential tasks are evolving and the combatant commands are just beginning to develop measures of effectiveness, the inputs into this system are currently limited and predominantly subjective.

Although the Secretary of the Army recently approved the model for using National Guard units to operate the ground-based BMDS element, DOD has not approved dual status for the commanders of these units, according to DOD officials. The Army decided in 1999 to establish National Guard units to perform the ballistic missile defense mission. In 2003, the Army assigned National Guard soldiers to the Colorado Army National Guard 100th Missile Defense Brigade and the Alaska Army National Guard 49th Missile Defense Battalion. The model for using these National Guard units and the roles and responsibilities of all parties involved are specified in a memorandum of agreement among the Army’s Space and Missile Defense Command, the National Guard Bureau, and the Colorado and Alaska State Adjutants General, which was signed in December 2005. The model states that once BMDS is declared operational, the National Guard soldiers will serve in a federal status when performing ballistic missile defense mission duties, including controlling, operating, maintaining, securing, or defending the ground-based element or site. Otherwise, the soldiers will serve in a state status and be responsible for performing National Guard duties, such as organizing, administering, recruiting, instructing, or training reserve components. Until BMDS is declared operational, the National Guard soldiers are in a state status all of the time. The Secretary of the Army approved this model on March 3, 2006. The model states that the commanders of these National Guard units will serve in a dual status—meaning they can command soldiers in either a federal or state status. 
According to an official in the Secretary of the Army’s office, the governors of Colorado and Alaska have signed the document authorizing dual status of the unit commanders. However, according to Army officials, either the Secretary of Defense or the President must approve dual-status authority. As of March 3, 2006, this had not been done. Nevertheless, DOD officials stressed that these National Guard soldiers are trained and certified by their unit commanders and are thus prepared to operate the ground-based BMDS element whenever the system is declared operational.

DOD’s incomplete planning to operate BMDS has created uncertainty about the basis that will be used to declare the system operational. DOD does not have a comprehensive plan laying out steps that need to be taken and criteria that should be met before declaring that either the limited defensive operations or subsequent system blocks are operational. DOD officials agreed that planning for new weapon systems articulated in requirements guidance generally includes critical planning elements, such as development of operational criteria and of plans to adequately staff units, provide security, and complete training and personnel preparations. However, no organization has been officially assigned responsibility for developing a comprehensive plan—to include operational criteria—specifying what needs to be accomplished before declaring that BMDS is operational either for limited defensive operations or subsequent blocks of capability. Although DOD has conducted some assessments that could be used to form the basis for developing operational criteria, no organization is clearly in charge of developing such criteria and ensuring they are met. Some DOD officials have suggested that the “lead service” could do this planning, but DOD has not clearly defined lead service responsibilities and has not fully implemented this proposal. 
Without comprehensive planning, the services and the combatant commands may not be as well prepared to operate the complex, integrated BMDS as they are for other new weapon systems for which DOD establishes criteria for achieving operational capability. Without operational criteria, it may be difficult for the Secretary of Defense to objectively assess combatant commands’ and services’ preparations to conduct BMDS operations, and the Secretary may not have a transparent basis for declaring BMDS operational, which will become more important as capabilities are added in subsequent blocks. Further, operational criteria are important because they specify actions that need to be completed for users to be prepared to use the system, such as security, training, and personnel. Without resolving the outstanding security issues, there is uncertainty about personnel requirements and about which organization will provide security for each element and pay the related costs. Without complete training plans, it is unknown how training for the integrated BMDS and some elements will be conducted, particularly the radars that will be fielded in 2006. Furthermore, it is not clear which mission-essential tasks will be used in DOD readiness assessments. The absence of comprehensive planning to operate BMDS may result in uncertainty about the basis that will be used to declare the system operational for limited defensive operations and subsequent blocks of capability. Thus, it may be difficult for DOD to identify and prioritize the actions needed across the department to achieve this end effectively and efficiently, and to identify the specific DOD organizations responsible and accountable for making this happen. As a result, the Secretary of Defense and Congress may not have the information to assess the system’s status and progress toward an operational capability as they consider funding requests from DOD. 
The Future Years Defense Program (FYDP) does not provide complete and transparent ballistic missile defense operational costs for use by either DOD or Congress. The FYDP is a major source of budget information that reports projected spending for the current budget year and at least 4 succeeding years. We and DOD have repeatedly recognized the need to link resources to capabilities to facilitate DOD’s decision making and congressional oversight. However, complete and transparent ballistic missile defense operational costs are not visible in the FYDP because the FYDP’s structure does not provide a way to identify and aggregate these costs, even though DOD plans to field an increasing number of elements during fiscal years 2006-2011. Several factors impair the visibility of ballistic missile defense operational costs. For example, we have reported that although expected operational costs for fiscal years 2005-2011 total $1.7 billion, DOD has not included all known operational costs in its budget. Also, these operational costs are contained in many program elements throughout the FYDP and are not linked in any way, making it difficult to compile these costs. Without the ability to clearly identify and assess the total ballistic missile defense operational costs, neither the Secretary of Defense nor Congress has complete information with which to make funding and trade-off decisions among competing priorities, gain assurance that DOD’s plans to field ballistic missile defense capabilities are affordable over time, or assess the costs of operating the New Triad. Complete and transparent budget information facilitates the ability of DOD officials to make informed resource decisions, which is increasingly important given the current strategic environment and growing demand for resources at a time when the department is facing significant affordability challenges. 
DOD acknowledged in its fiscal year 2004 Performance and Accountability Report that transparent budget submissions will facilitate DOD leaders’ ability to make better-informed resource decisions. In addition, DOD has acknowledged that defense decision making requires accurate, consistent computation of costs for each type of military capability and thus has modified the FYDP over time to capture the resources associated with particular areas of interest, such as space activities. Moreover, we have previously recommended DOD take actions designed to provide greater visibility of projected spending and future investments. For example, our report on DOD’s New Triad explained that ballistic missile defense is an important element of the New Triad and the current FYDP structure does not readily identify and aggregate New Triad–related costs. We recommended in June 2005 that DOD establish a virtual major force program to identify New Triad costs. Subsequently, because DOD disagreed with our recommendation in its comments on our report, we also recommended that Congress consider requiring the Secretary of Defense to establish a virtual major force program to identify New Triad costs and report annually on these funding levels. Complete and transparent budget information also facilitates congressional oversight of DOD programs. To this end, we recommended in 2004 that DOD enhance its FYDP report to provide better information for congressional decision makers’ use during budget deliberations. Also, a congressional committee has expressed specific interest in obtaining ballistic missile defense cost data. 
For example, in the Report of the House Committee on Appropriations on the Department of Defense Appropriations Bill for Fiscal Year 2006, congressional committee members noted that the large level of funding in individual program elements “obscures funding details and creates significant oversight issues.” Another committee also expressed frustration with the lack of transparency in budgeting and, in the Conference Report on the National Defense Authorization Act for Fiscal Year 2006 (December 18, 2005), directed the Comptroller General to conduct a study of the current program element structure (for research, development, test, and evaluation projects), particularly for projects that employ the system of systems concept. Complete costs to operate ballistic missile defense elements that will be fielded during fiscal years 2006-2011 are not visible to DOD or Congress in the FYDP because the current FYDP structure does not provide a way to identify and aggregate all ballistic missile defense system operational costs. Officials in the Office of the Secretary of Defense, Comptroller and Program, Analysis, and Evaluation agreed that such data are necessary for making fully informed resource decisions and will become more important as more ballistic missile defense elements are fielded over time; however, these officials also agreed that these data are not transparent in the FYDP and that they have not developed a new structure for capturing these costs. We analyzed the fiscal year 2006 FYDP to determine whether the program elements related to ballistic missile defense operations could be identified. In 1995, DOD’s Office of Program, Analysis, and Evaluation created a defense mission category structure in the FYDP to identify resources devoted to different military missions, because this type of data was not available from the FYDP. 
This defense mission category structure can be used to identify the program elements and costs for various missions, such as suppression of enemy air defenses, because they are linked to related program elements in the FYDP. Our analysis showed, and a Program, Analysis, and Evaluation official agreed, that neither the current FYDP structure nor its associated defense mission categories provides a way to effectively identify and aggregate ballistic missile defense operational costs. In our analysis, we identified eight defense mission categories related to ballistic missile defense, such as “ballistic missile defense forces” and “theater missile defense.” Although our analysis identified 135 ballistic missile defense program elements linked to these mission categories, it also showed that these program elements did not provide a complete and accurate list for identifying and aggregating ballistic missile defense operational costs. For example, 88 of the 135 (65 percent) program elements linked to ballistic missile defense mission categories were not related to the current BMDS—one of these, for instance, was for Special Operations Command. Also, the 135 program elements identified did not include some programs that are part of the BMDS, such as the upgraded early warning radar. In addition, the 135 program elements did not include many program elements that service officials said contain BMDS operational costs. Specifically, we documented 28 BMDS-related program elements from the services, such as those for sensors and radars supported by the Air Force, ground-based missile defense supported by the Army, and the Aegis ballistic missile defense radar supported by the Navy. 
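The cross-check of the service-provided program elements against those derived from the FYDP defense mission categories is, in essence, a set comparison. A minimal sketch of that comparison follows; the program element codes are hypothetical placeholders (not actual FYDP program element numbers), and the set sizes are constructed only so the counts mirror the figures reported in this analysis.

```python
# Illustrative sketch of the program-element cross-check. The PE codes are
# hypothetical placeholders, not actual FYDP program element numbers; the set
# sizes are chosen to mirror the counts reported in this analysis.

# 135 program elements linked to ballistic missile defense mission categories
fydp_linked = {f"PE-{n:04d}" for n in range(135)}

# 28 program elements the services reported as containing BMDS operational costs
service_reported = {f"PE-{n:04d}" for n in range(131, 159)}

# Service-reported elements with no counterpart in the mission-category list
unmatched = service_reported - fydp_linked
print(len(service_reported), len(unmatched))  # 28 total, 24 unmatched
```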
When we compared this list of program elements to the 135 we identified using the FYDP defense mission categories, we found that 24 of the 28 service-provided program elements did not match any of the 135. We discussed the results of our analysis with officials from the Office of the Secretary of Defense, Comptroller and Program, Analysis, and Evaluation, and they agreed that our methodology was reasonable. They also agreed that our analysis showed that complete and transparent ballistic missile defense operational costs are not visible in the FYDP. Since there is no structure in the FYDP to accurately identify and aggregate ballistic missile defense operational costs, the Comptroller’s office must request these data from each service and the Missile Defense Agency. The data are added together to determine an estimate of the total operational cost for the ballistic missile defense system. The Comptroller’s office estimated that the services’ operational costs for fiscal years 2004-2006 totaled $259 million. However, the officials acknowledged that these data may not have been gathered consistently across all these organizations, because there is no standardized methodology specifying which costs to include.

The completeness and transparency of operational costs for ballistic missile defense system elements are impaired by four primary factors: (1) operational costs are included in many program elements and there is no mechanism to link and compile these costs, (2) the Missile Defense Agency is authorized to use research and development funds to pay for operational costs, (3) DOD has not included all known operational costs in its budget estimates, and (4) DOD has not yet identified all costs associated with the New Triad, of which the ballistic missile defense system is an important part. 
Officials from the Office of the Secretary of Defense, Comptroller and Program, Analysis, and Evaluation agreed that complete and transparent ballistic missile defense operational costs are not visible in the FYDP for the reasons cited above. First, operational costs are included in many program elements throughout the FYDP and there is no mechanism to link the FYDP program elements together so that total operational costs can be compiled. A further complication is that some of these program elements also include costs for items that are not related to ballistic missile defense. For example, one program element entitled Theater Missile Defense is defined as including costs for theater missiles of all classes, including tactical, cruise, and air-to-surface missiles. Another program element includes all costs for all the Navy’s destroyers, and does not distinguish the 15 destroyers that DOD will operate to perform the ballistic missile defense mission. Despite the absence of an FYDP structure to identify and aggregate ballistic missile defense operational costs, there is no plan to modify the FYDP structure to allow identification of ballistic missile defense program elements; according to an official in the Office of the Secretary of Defense, Program, Analysis, and Evaluation, the office has not received direction to do so. Second, the Missile Defense Agency is authorized by statute to use research and development funds to pay for some operational costs. However, officials we spoke with from the Office of the Secretary of Defense, Comptroller and Program, Analysis, and Evaluation said that this practice makes it much more difficult to derive an accurate estimate of operational costs, because the research and development funds come from a different appropriation and are not typically used to pay operational costs. These officials told us that operational costs are usually paid from the operations and maintenance appropriation, not the research and development appropriation. 
Third, we reported in September 2005 that operational costs for fiscal years 2005-2011 totaled $1.7 billion but that DOD has not included all known operational costs for BMDS in its budget. Further, we reported that the Missile Defense Agency and the services disagreed as to which organization should pay operational costs for developmental assets, even though these assets may be available for operational use. In discussing our analysis with officials in the Office of the Secretary of Defense, Comptroller, and Program, Analysis, and Evaluation, the officials noted that DOD’s estimate of ballistic missile defense operational costs does not reflect total costs, because it does not include combatant commanders’ costs such as the costs for the new Strategic Command subcommand for missile defense. In addition, an official in the Office of the Secretary of Defense, Comptroller stated that its estimate of operational costs over fiscal years 2006-2011 is not complete because the services and the Missile Defense Agency are negotiating who will pay operational costs in the future. Fourth, as we previously reported, DOD has not identified all costs associated with the New Triad, of which ballistic missile defense is an important part. We reported that the current FYDP structure does not expressly identify and aggregate New Triad program elements that would allow identification of New Triad spending. Because ballistic missile defense is part of the New Triad, DOD would need to be able to identify these costs within it. In fact, the Commander of the U.S. Strategic Command suggested that creating a virtual major force program could be necessary for each of the New Triad legs because of the diversity and scope of New Triad capabilities.

The lack of complete and transparent budget information about ballistic missile defense operational costs impairs the ability of DOD officials to make informed resource decisions. 
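The virtual major force program we recommended is, in essence, a mission label attached to existing program elements so that related costs can be pulled together without restructuring the FYDP. A minimal sketch of the idea follows; the program elements, titles, tags, and dollar figures are hypothetical, invented only to illustrate the aggregation.

```python
# Illustrative sketch only: program elements, titles, and costs are
# hypothetical, not actual FYDP data. A "virtual" mission tag lets costs
# scattered across program elements be aggregated without moving them.

program_elements = [
    {"pe": "PE-A", "title": "Theater Missile Defense", "cost_millions": 120, "tags": {"BMD"}},
    {"pe": "PE-B", "title": "Destroyer Operations",    "cost_millions": 900, "tags": set()},
    {"pe": "PE-C", "title": "Upgraded Radar O&M",      "cost_millions": 45,  "tags": {"BMD"}},
]

# Aggregate only the elements tagged with the ballistic missile defense mission
bmd_total = sum(pe["cost_millions"] for pe in program_elements if "BMD" in pe["tags"])
print(f"BMD-tagged operational costs: ${bmd_total} million")  # $165 million
```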
DOD officials agreed that complete and transparent data on ballistic missile defense operational costs are necessary to make informed funding and trade-off decisions among competing priorities. Without the ability to identify and assess total BMDS operational costs, neither DOD nor Congress has complete information to know whether DOD’s plans to field ballistic missile defense capabilities are affordable over time. Furthermore, if the funds budgeted for BMDS support turn out to be insufficient since not all costs are included, DOD will either have to take funds from other programs or spend less on missile defense and potentially accept risks in security, training, personnel, or other areas. This is particularly important when considering the Missile Defense Agency’s plans to deliver an increasing number of systems and units over fiscal years 2006-2011. The Missile Defense Agency may face increasing budget pressure because, although it will be supporting more BMDS elements, the agency’s budget for contractor logistic support is expected to remain relatively constant. Finally, we reported in 2005 that decision makers need complete data about the resources being allocated to the New Triad—of which ballistic missile defense is a part—in making trade-offs among efforts to develop capabilities. Without these cost data, DOD will be limited in its ability to guide and direct its efforts to integrate New Triad capabilities and Congress will not have full visibility of the resources being allocated to these efforts.

Preparing to perform the ballistic missile defense mission is highly complex, involves many different DOD organizations, and requires seamless integration across multiple combatant commands. At the same time that the warfighters are developing and refining their training, operations, and security plans, the Missile Defense Agency continues to develop blocks of BMDS capabilities. 
Although DOD faces the twin challenges of simultaneously developing the system and beginning operations, comprehensive planning could alleviate users’ concerns before declaring that either limited operations or each subsequent block of capability is operational. Although DOD has plans for additional tests that are designed to resolve technical performance issues, the absence of a comprehensive plan for operational issues creates uncertainty across DOD about what remains to be done and how remaining actions should be prioritized before the department declares BMDS operational. Without operational criteria, it may be difficult for the Secretary of Defense to objectively assess combatant commands’ and services’ preparations to conduct BMDS operations, and the Secretary may not have a transparent basis for declaring BMDS operational, which will become more important as capabilities are added in subsequent blocks and Congress considers requests to fund operations. Until an organization is assigned responsibility for developing a comprehensive plan that includes operational criteria, DOD may be hindered in its ability to identify and prioritize actions across the department effectively and efficiently. Considering that DOD guidance generally requires this type of planning and these operational criteria for new weapon systems such as radars or fighter aircraft, it is even more important to bring discipline to the process for the highly complex and integrated BMDS. Given the significant changes DOD plans for each block of BMDS, this disciplined approach is important to apply not only to the initial capabilities but also to each subsequent block.
Without adequate planning, clear criteria, and clearly assigned responsibility for ensuring necessary actions are completed, it may be difficult for DOD to identify and prioritize actions and assure itself or Congress that all of the necessary pieces will be in place before declaring either limited defensive operations or subsequent blocks of capability operational. In addition, it will be difficult for DOD to determine whether the return on its significant development investment in BMDS can be realized. Complete and transparent information on expected costs for important missions (such as ballistic missile defense) and investment efforts (such as the New Triad) facilitates DOD and congressional decision making when allocating resources. Complete and reliable data are needed to assess and understand cost trends over time, which is particularly important as warfighters begin to use ballistic missile defense elements and as an increasing number of elements are fielded over fiscal years 2006-2011. However, because the FYDP is currently not structured to transparently identify and aggregate ballistic missile defense operational costs, DOD’s ability to make strategic investment decisions based on knowledge of complete BMDS operational costs is impaired. In addition, the consequence of not having this information is that neither DOD nor Congress has the benefit of complete and adequate data to make fully informed trade-off decisions in a resource-constrained environment. As a result, the investment decisions made may not truly reflect the desired relative priority of ballistic missile defense within DOD’s overall defense strategy. We are making the following two recommendations for executive action.
First, to help DOD identify and prioritize actions across the department needed to declare limited defensive operations as well as each subsequent block of capability operational, and to dispel uncertainty and bring needed discipline to the process, we recommend that the Secretary of Defense take the following actions in consultation with the Commander, U.S. Strategic Command, the services, and the Chairman, Joint Chiefs of Staff: Develop operational criteria for each ballistic missile defense element and the overall BMDS for limited defensive operations and each subsequent block of capability. These criteria should be comparable to the operational criteria that are currently developed for new weapon systems. Assign responsibility to specific organizations and hold those organizations accountable for developing the criteria and ensuring the criteria are met before operational capability is declared. Develop a comprehensive plan specifying actions that must be completed, with completion deadlines. The plan should cover the range of doctrine, organization, training, personnel, and facilities actions that are normally required to be developed and in place for new weapon systems; should integrate these actions across elements; and should address actions needed for the overall, integrated BMDS. Second, to provide decision makers in Congress and DOD with complete, transparent data on the resources required to operate the ballistic missile defense system and to clearly identify costs for an important piece of the New Triad, we recommend that the Secretary of Defense direct the Director, Program Analysis and Evaluation, in consultation with the Under Secretary of Defense (Comptroller) and the services, to develop a structure within the FYDP to identify all ballistic missile defense operational costs, which can be included as part of an annual report on the funding levels for New Triad activities that GAO recommended DOD provide annually to Congress.
Given the significance of BMDS to national defense and the billions of dollars spent in developing this system, Congress should consider requiring the Secretary of Defense to develop (1) a comprehensive plan (including operational criteria) specifying actions that must be completed by the services and combatant commands before declaring BMDS operational for limited defensive operations or subsequent blocks of capability, and (2) a structure within the FYDP to identify all ballistic missile defense operational costs, which can be included as part of an annual report on the funding levels for New Triad activities. In written comments on a draft of this report, the Department of Defense concurred or partially concurred with our recommendations. The department’s comments are reprinted in their entirety in appendix III. The department also provided technical comments, which we have incorporated as appropriate. DOD partially agreed with our recommendations to develop operational criteria and a comprehensive plan specifying actions that must be completed before declaring BMDS operational, and it also agreed with our recommendation to assign responsibility for doing so to a specific organization that would be held accountable for completing these tasks. However, while DOD’s response addressed the warfighters’ role in providing input to the Missile Defense Agency to guide the system’s technical development, it did not address the need for operational criteria prior to declaring the BMDS or elements of the system operational. Moreover, DOD’s comments do not indicate what process, if any, it plans to use to develop operational criteria for assessing combatant commands’ and services’ preparedness to conduct BMDS operations or whether it plans to assign responsibility for doing so. We continue to believe that the warfighters, specifically the combatant commands and services under the leadership of U.S.
Strategic Command, should have the lead in developing operational criteria and ensuring they are met, as opposed to the developers—the Missile Defense Agency and system development program offices. Without comprehensive planning and objective operational criteria, the services and the combatant commands may not be as well prepared to operate the complex, integrated BMDS as they are for other new weapon systems. Furthermore, such planning and criteria would provide an objective basis for assessing combatant commands’ and services’ preparedness to conduct BMDS operations and a transparent basis for declaring BMDS operational. In addition, without an organization assigned responsibility for developing a comprehensive plan that includes operational criteria, DOD may be hindered in its ability to identify and prioritize actions across the department effectively and efficiently. DOD also partially concurred with our recommendation to develop a structure within the FYDP to identify all ballistic missile defense operational costs that could be included as part of an annual report on New Triad funding that we had previously recommended DOD provide annually to Congress. Considering that there is no common methodology to identify and aggregate BMDS operational costs, we continue to believe that corrective action is needed so that Congress and DOD have adequate information to assess whether DOD’s plans to field ballistic missile defense capabilities are affordable. Complete and transparent BMDS operational cost information is important to assess cost trends over time, particularly as an increasing number of BMDS elements are fielded during the next several years. Without this information, neither DOD nor Congress will have the benefit of complete and adequate data to make fully informed trade-off decisions within projected defense spending levels.
With respect to DOD’s nonconcurrence on our previous recommendation to account for New Triad costs in the FYDP, we note that the Report of the House Armed Services Committee on the National Defense Authorization Act for Fiscal Year 2006 directed the Secretary of Defense to modify the FYDP to identify and aggregate program elements associated with the New Triad, which, as we state in this report, includes ballistic missile defense. We continue to believe that the specific actions we recommended are needed for DOD to prepare for conducting BMDS operations and to assist in DOD and congressional oversight of ballistic missile defense operational costs. Because DOD did not indicate that it plans to implement our recommendations, we have added a matter for Congress to consider: directing DOD to develop a comprehensive plan that includes operational criteria and a structure within the FYDP to identify all ballistic missile defense operational costs. We are sending copies of this report to the Secretary of Defense; the Commander, U.S. Strategic Command; the Commander, U.S. Northern Command; and the Director, Missile Defense Agency. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-4402. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix III. To determine the extent to which the Department of Defense (DOD) has made progress in planning to operate the Ballistic Missile Defense System (BMDS), and to determine whether the Future Years Defense Program (FYDP) provides complete and transparent data on total ballistic missile defense operational costs, we conducted various analyses, reviewed key documentation, and interviewed relevant DOD officials.
During this review, we focused on assessing issues DOD faces in planning to operate the BMDS, such as operational criteria, training, security, and cost transparency. We did not evaluate DOD’s testing plans, research and development programs, or the technical effectiveness of individual elements, as we have addressed these issues in other reports. Specifically, we have issued two reports on the status of BMDS that included assessments of program goals, testing plans, and progress in developing each element. Our March 2005 report found that system performance remains uncertain and unverified because DOD has not successfully conducted an end-to-end flight test using operationally representative hardware and software. To assess DOD’s progress in planning to operate the BMDS, we obtained and reviewed relevant documents on ballistic missile defense operations such as National Security Presidential Directive 23, dated December 16, 2002; the Unified Command Plan, dated January 10, 2003; various combatant command contingency plans; the BMDS Tactical Handbook; various Joint Staff orders; DOD, Joint Staff, U.S. Strategic Command, and service instructions and regulations; DOD memoranda providing guidance for implementing the ballistic missile defense program; Integrated Training Working Group briefings; Missile Defense Agency briefings and documents explaining program status and plans; and briefings by DOD officials. We also observed an exercise that involved the services and combatant commands. To identify areas where planning was incomplete, we compared what DOD had done with the planning principles for new weapon systems embodied in DOD acquisition and requirements guidance, service instructions, and the training plans explained in DOD’s Joint Training System. We then discussed the results of our comparisons with officials in the U.S.
Strategic Command; the Army’s Space and Missile Defense Command; the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Joint Staff; and the Missile Defense Agency. To determine the extent to which the FYDP provides complete and transparent data on ballistic missile defense operational costs, we analyzed the FYDP structure to determine whether it was designed to readily identify the program elements that contain ballistic missile defense operational costs and assessed whether these FYDP program elements included all BMDS elements. In addition, we obtained and reviewed documentation at the Office of the Secretary of Defense and the Army, Navy, and Air Force to identify program elements that would include ballistic missile defense operational costs. We met with DOD officials from the Office of the Under Secretary of Defense (Comptroller), the Office of the Director, Program Analysis and Evaluation, and the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics to discuss our approach, and they agreed it was reasonable. We assessed the reliability of the data by corroborating our list of defense mission categories and some program elements with knowledgeable agency officials. We determined that the data were sufficiently reliable for our purposes. In addition, other organizations we visited to gain an understanding of their roles in operating elements of the Ballistic Missile Defense System included the Joint Staff; U.S. Army Headquarters and the Space and Missile Defense Command; the Office of the Chief of Naval Operations’ Surface Warfare Division; Air Force Headquarters and Space Command; the office of the National Guard Bureau; the Army National Guard; and the Air National Guard. To document how various commands would employ BMDS in performing the ballistic missile defense mission, we met with officials from the U.S. Strategic Command in Omaha, Nebraska, and the U.S.
Northern Command in Colorado Springs, Colorado, and observed an exercise. We provided a draft of this report to DOD for its review and incorporated its comments where appropriate. Our review was conducted between January 2005 and February 2006 in accordance with generally accepted government auditing standards. In addition to the individual named above, Gwendolyn R. Jaffe, Assistant Director, and Brenda M. Waterfield, Pat L. Bohan, Amy J. Anderson, Jeffrey R. Hubbard, John E. Trubey, and Renee S. Brown made key contributions to this report. Defense Acquisitions: Actions Needed to Ensure Adequate Funding for Operation and Sustainment of the Ballistic Missile Defense System. GAO-05-817. Washington, D.C.: September 6, 2005. Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-962R. Washington, D.C.: August 4, 2005. Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-540. Washington, D.C.: June 30, 2005. Defense Acquisitions: Status of Ballistic Missile Defense Program in 2004. GAO-05-243. Washington, D.C.: March 31, 2005. Future Years Defense Program: Actions Needed to Improve Transparency of DOD’s Projected Resource Needs. GAO-04-514. Washington, D.C.: May 7, 2004. Missile Defense: Actions Are Needed to Enhance Testing and Accountability. GAO-04-409. Washington, D.C.: April 23, 2004. Missile Defense: Actions Being Taken to Address Testing Recommendations, but Updated Assessment Needed. GAO-04-254. Washington, D.C.: February 26, 2004. Missile Defense: Additional Knowledge Needed in Developing System for Intercepting Long-Range Missiles. GAO-03-600. Washington, D.C.: August 21, 2003. Missile Defense: Alternate Approaches to Space Tracking and Surveillance System Need to Be Considered. GAO-03-597. Washington, D.C.: May 23, 2003.
Missile Defense: Knowledge-Based Practices Are Being Adopted, but Risks Remain. GAO-03-441. Washington, D.C.: April 30, 2003. Missile Defense: Knowledge-Based Decision Making Needed to Reduce Risks in Developing Airborne Laser. GAO-02-631. Washington, D.C.: July 12, 2002. Missile Defense: Review of Results and Limitations of an Early National Missile Defense Flight Test. GAO-02-124. Washington, D.C.: February 28, 2002. Missile Defense: Cost Increases Call for Analysis of How Many New Patriot Missiles to Buy. GAO/NSIAD-00-153. Washington, D.C.: June 29, 2000. Missile Defense: Schedule for Navy Theater Wide Program Should Be Revised to Reduce Risk. GAO/NSIAD-00-131. Washington, D.C.: May 31, 2000.

The Department of Defense (DOD) has spent about $91 billion since the mid-1980s to develop a capability to destroy incoming ballistic missiles. In 2002, recognizing the new security environment after the September 11 attacks, President Bush directed that an initial set of defensive ballistic missile capabilities be put in place in 2004. Although DOD is developing the Ballistic Missile Defense System (BMDS) to meet an urgent need, preparing to operate and support a system under continuous development poses significant challenges. GAO was asked to assess the extent to which (1) DOD has made progress in planning to operate the BMDS, and (2) the Future Years Defense Program (FYDP) provides complete and transparent data on BMDS operational costs. DOD has made progress in planning to operate BMDS; however, it has not established criteria that would have to be met before declaring BMDS operational, nor has DOD resolved security issues or completed training and personnel plans. DOD officials agree that operational criteria are typically established and met prior to declaring a system operational, and that planning for new systems includes identifying personnel requirements, developing training programs, and identifying logistics and maintenance requirements.
DOD has developed BMDS procedures and guidance, created an organization to integrate planning and operational support, and conducted some training and exercises. However, DOD has not established formal criteria for declaring that limited defensive operations or subsequent blocks of capability are operational or completed planning for security, training, and personnel. DOD has not done this because it is developing BMDS in a unique way and BMDS is exempted from traditional requirements guidance. Without specific operational criteria, the Secretary of Defense will not be in a sound position to objectively assess combatant commands' and services' preparations to conduct BMDS operations nor have a transparent basis for declaring BMDS operational, which will become more important as capabilities are added in subsequent blocks and Congress considers requests to fund operations. Without adequate planning, clear criteria, and identification of responsibility for ensuring necessary actions have been completed, it may be difficult for DOD to identify and prioritize actions, assure itself or Congress that the necessary pieces are in place before declaring the system operational, and determine whether the return on its significant development investment in BMDS can be realized. The FYDP, a major source of budget information, does not provide complete and transparent data on ballistic missile defense operational costs. DOD and GAO have repeatedly recognized the need to link resources to capabilities to facilitate decision making and oversight. However, complete and transparent ballistic missile defense operational costs are not visible in the FYDP because the FYDP's structure does not provide a way to identify and aggregate these costs. 
Four primary factors impair the visibility of ballistic missile defense operational costs in the current FYDP structure: (1) operational costs are included in many program elements and there is no mechanism to link and compile these costs, (2) the Missile Defense Agency is authorized to use research and development funds to pay for operational costs, (3) DOD has not included all known operational costs in its budget estimates, and (4) DOD has not identified all costs associated with the New Triad, of which BMDS is an important part. Without the ability to identify and assess total ballistic missile defense operational costs, neither the Secretary of Defense nor Congress has complete information to make funding and trade-off decisions among competing priorities; provide assurance that ballistic missile defense capabilities are affordable over time; and assess the costs of employing the New Triad.
STARS is a BMDO program managed by the U.S. Army Space and Strategic Defense Command (SSDC). It began in 1985 in response to concerns that the supply of surplus Minuteman I boosters used to launch targets and other experiments on intercontinental ballistic missile flight trajectories in support of the Strategic Defense Initiative would be depleted by 1988. SSDC tasked Sandia National Laboratories, a Department of Energy laboratory, to develop an alternative launch vehicle using surplus Polaris boosters. Two STARS booster configurations were developed: STARS I and STARS II. STARS I consists of refurbished Polaris first and second stages and a commercially procured Orbus I third stage (see fig. 1). It can deploy single or multiple payloads, but the multiple payloads cannot be deployed in a manner that simulates the operation of a post-boost vehicle (PBV). To meet this specific need, Sandia developed an Operations and Deployment Experiments Simulator (ODES), which functions as a PBV. (See app. I, fig. I.1.) When ODES is added to STARS I, the configuration is designated STARS II. The development phase of the STARS program was completed in fiscal year 1994, and BMDO provided about $192.1 million for this effort. The operational phase began in fiscal year 1995. The first STARS I flight, a hardware check-out flight, was launched in February 1993, and the second flight, a STARS I reentry vehicle experiment, was launched in August 1993. The third flight, a STARS II development mission, was launched in July 1994. All three were considered successful by BMDO. In March 1993, the Secretary of Defense initiated a comprehensive “Bottom-Up Review” of the nation’s defense strategy. He believed that a departmentwide review needed to be conducted “from the bottom up” because of the dramatic changes that had occurred in the world as a result of the end of the Cold War and the dissolution of the Soviet Union.
This review provided the direction for shifting America’s focus away from a strategy designed to defend against a global Soviet threat to a strategy oriented toward the dangers of aggression by regional powers, a theater missile threat. Based on the nature of the present and projected threat from ballistic and cruise missiles armed with weapons of mass destruction, the Secretary of Defense decided to proceed with a more robust TMD program to emphasize protection of forward-deployed U.S. forces in the near term. Additionally, he decided to limit the NMD effort to a technology program, which drastically reduced the number of STARS launches to support NMD. In May 1994, based on declining launches for STARS and budget reductions resulting from the “Bottom-Up Review,” BMDO requested that SSDC develop a long-range plan for the STARS program. The SSDC STARS project office developed a draft long-range plan that included management options for (1) continuing the STARS program; (2) placing it in a dormant status, retaining the capability to reactivate it; and (3) terminating it. BMDO is currently evaluating STARS as a potential long-range system for launching targets for development tests of future TMD systems. The final decision, which may not be made for 6 to 9 months, will be based on factors such as the cost to maintain STARS and ABM Treaty issues associated with testing TMD systems. STARS project office officials cite several reasons related to treaty implications for not terminating the program. The Strategic Arms Reduction Treaty I (START I) limits other strategic ballistic missiles’ use of telemetry encryption, but STARS is exempt from this restriction. In addition, the START II Treaty, after its ratification and formal entry into force, would require the total elimination of land-based multiple warhead intercontinental ballistic missiles by January 2003.
This means that the launching of land-based multiple warhead intercontinental ballistic missiles, even as research and development target boosters, would cease. Because STARS is exempt from the START II Treaty, it would be the only land-based multiple warhead booster that the United States can use as a target or for research and development. The STARS II PBV carries multiple warheads and has the maneuvering capability to independently target each warhead on a final trajectory toward a target. STARS project office officials also cite other reasons for not terminating the program. STARS can deliver payloads at various reentry speeds and trajectories to the vicinity of Kwajalein Missile Range, located about 4,000 kilometers from the Kauai Test Facility. STARS is also the only U.S. target missile system that operates in the 1,500- to 3,500-kilometer range. Additionally, the relatively large diameter of the STARS launch vehicle, the shape of the nose shroud, and the flat payload plate make STARS suitable as a carrier vehicle for a variety of experiments and scientific payloads. Also, STARS has demonstrated a real-time reporting capability to accurately predict target positions for experiments throughout its trajectory. These are important features for evaluating the capabilities of both theater and strategic missile defense sensors and weapons. In July 1993, BMDO had plans to launch 12 more STARS boosters from Kauai that would deliver experiments into near space and targets to Kwajalein through fiscal year 2003. All of these launches were to support NMD objectives. Two were conducted, but as a result of the “Bottom-Up Review,” all but one of the remaining 10 NMD launches were canceled. BMDO now has only one firm launch scheduled. Additionally, BMDO has 11 potential launches identified through fiscal year 2000. Ten would support TMD and 1 would support NMD. Table 1 provides the schedule by fiscal year for the STARS launches.
The firm launch scheduled for 1995 involves launching a STARS II that will deploy numerous objects for the Midcourse Space Experiment (MSX) satellite to observe. The MSX satellite is scheduled to be launched into orbit from Vandenberg Air Force Base on a Delta II booster during the second quarter of fiscal year 1995 to conduct a variety of experiments, one of which will involve observing different types of target objects deployed from the STARS PBV. Although this experiment will support work being conducted in a number of areas, the data will primarily support the Space and Missile Tracking System (formerly called Brilliant Eyes) demonstration and validation program. The targets for the MSX satellite to observe are scheduled to be launched on a STARS II in the third quarter of fiscal year 1995. The MSX’s sensors are to view the numerous objects deployed from the PBV during sunrise conditions, and the objects are to be representative of various targets and deployment techniques. Other mobile and ground-based sensors will provide trajectory identification, definition, stereo viewing, and dynamic motion verification of the test objects. Until the ABM Treaty is clarified, the use of STARS to support TMD testing, including the 10 potential TMD missions shown in table 1, is in question. The 1972 ABM Treaty prohibits mobile, land-based systems that can counter strategic missiles. However, it does not define the characteristics of either a strategic or theater missile. Some theater missiles now approach the capabilities of the older, shorter range strategic missiles in terms of maximum range. Congress has continuously urged the administration to pursue discussions on amending the ABM Treaty to clarify the distinctions between theater and strategic missiles. 
The United States, Russia, and some states of the former Soviet Union are currently involved in discussions seeking a demarcation that would clarify the treaty in such a way that would allow TMD systems such as the Theater High Altitude Area Defense and other advanced concepts to be developed in compliance with the ABM Treaty. As shown in table 2, the STARS operational budget will be about $22.7 million for fiscal year 1995. Of this amount, $15.1 million is the cost to maintain the capability to conduct launches, and the remaining cost of about $7.6 million primarily represents costs to be incurred for the scheduled launch in fiscal year 1995. For future years, it is estimated that the annual STARS operating budget would also be about $15 million (excluding inflation) to maintain the capability to launch. The $15 million does not include the additional costs that would be charged to STARS customers for launches. The $8.36 million for program infrastructure includes a full-time STARS staff of about 40 to 45 Sandia engineers and technicians, Sandia part-time staff for the STARS program, Sandia overhead costs, and a Department of Energy surcharge of 4.3 percent. According to a Sandia official, in years when there are no launches, the engineers and technicians would be used to provide technical support for the STARS booster system, plan for future STARS launches, upgrade system documentation, correct anomalies noted on past launches, and perform other tasks assigned by the STARS office. The Sandia official also told us that under Sandia’s personnel practices, if the 40 to 45 full-time Sandia personnel were to be assigned to other programs because of a termination or extended suspension of the STARS operation, it is highly unlikely that they would later be returned to the STARS program.
According to a STARS project office official, plans are to spend about $3.47 million in fiscal year 1995 to maintain the industrial base for refurbishing first- and second-stage Polaris motors. A Sandia official provided the following general comments about maintaining the industrial base. Initially, plans are to (1) modify the existing contract with Aerojet General Corporation to start assembling refurbished first-stage motors, (2) consolidate facilities to save money, and (3) send first-stage motors to the Navy’s China Lake facility for screening. Also, Sandia plans to provide Hercules, Inc., Aerospace Division, with funds to recertify the second-stage flight motor for the fiscal year 1995 flight and assemble a second-stage component refurbished motor as a flight spare. Plans are to award new 2-year contracts in April 1995 to Aerojet and Hercules for work in fiscal years 1995 and 1996. Aerojet is to refurbish up to three first-stage motors. Hercules is to continue assembling second-stage component refurbished motors. These contracts will contain provisions for paying fixed termination costs to these contractors if the decision is made to cancel the contracts in fiscal year 1996. Plans are to also award new contracts in April 1995 to Lockheed Missile and Space Company, Inc., and the Navy. Lockheed is to provide technical assistance, and the Navy’s facilities at China Lake and Corona, California, are to screen and static fire STARS motors and calibrate and recertify motor nozzles and assembly gauges. The Kauai Test Facility range support cost of $2.5 million is primarily for a facility maintenance contractor; Sandia personnel supporting STARS; and maintaining range technical capabilities such as electronic communications equipment, computers, and recording equipment used to gather flight data. 
The additional cost associated with the launch to support the MSX mission, $7.38 million, covers work to be performed by Sandia, support from the Pacific Missile Range Facility, and logistics maintenance and transportation support. Sandia is to perform work (1) on the third stage of the STARS launch vehicle, which houses the Orbus motor, and (2) in support of launch-field operations. This effort involves (1) mission-specific software modifications and validation; (2) assembly and construction of specialized parts and equipment, including components for the PBV; and (3) final system checkout and testing. The Pacific Missile Range Facility is to provide uprange support of the STARS booster launch activities, as well as miscellaneous range tracking, telemetry, range safety, and other support. The logistics transportation support primarily covers transportation for the MSX mission. In addition, funds are to be used for a nonrecurring effort to move and consolidate first- and second-stage motors, thus reducing storage costs. The logistics maintenance support primarily involves work at Hill Air Force Base to attach components and perform system checks and validation for first- and second-stage motors. The booster destruction cost of $260,000 is for destroying older first- and second-stage motors no longer required for the STARS program; the Sierra Army Depot in California is to destroy them. In September 1993, we reported that STARS users would pay an estimated $5.9 million for each STARS I launch and an estimated $10.9 million for each STARS II launch. These cost estimates have since decreased because the STARS program has already paid for equipment such as electronic components, mechanical equipment, refurbished stage 1 and 2 motors, and Orbus motors.
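The fiscal year 1995 figures above can be cross-checked with a short sketch. All dollar figures (in millions) come from the report; only the assumption that the $7.6 million launch subtotal was rounded from its two components is ours.

```python
# Cross-check of the FY 1995 STARS operating budget reported above.
# Figures are in millions of dollars, taken from the report; the rounding
# of the launch subtotal to $7.6 million is our assumption.
launch_specific = {
    "MSX mission support": 7.38,
    "booster destruction": 0.26,
}
maintain_capability = 15.1  # incurred whether or not a launch occurs

launch_total = sum(launch_specific.values())       # 7.64, reported as ~7.6
budget_total = maintain_capability + launch_total  # ~22.7, matching table 2

print(round(launch_total, 2), round(budget_total, 1))  # 7.64 22.7
```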
Another reason future STARS users will pay less is that the MSX and other programs have paid for long-lead hardware to be used on STARS launches that were later canceled. Even though these launches were canceled, the STARS office had already acquired the assets. Beginning in fiscal year 1996, the costs to future STARS I and II customers will vary. Specifically, for the next three STARS launches, the cost to STARS I customers is estimated to be about $2.8 million, and the cost to STARS II customers is estimated to range from $6.7 million to $9.1 million. Beyond the next three launches, the cost to STARS I customers is estimated to be about $4.1 million, and the cost to STARS II customers is estimated to range from $8.1 million to $9.1 million. These estimates include costs for hardware refurbishment, Sandia launch support, booster transportation, and costs associated with ODES hardware and the related integration of ODES with the STARS I booster. They do not include transportation, payload, and range support costs associated with specific launches. The STARS program acquired surplus Navy Polaris first- and second-stage boosters from the mid-1980s through 1991; its only cost for those boosters was transportation to storage facilities. The STARS program purchased third-stage Orbus I motors from United Technologies. Sandia builds ODES PBVs only as needed for STARS II launches. First- and second-stage Polaris motors have to be refurbished before being used on missions; Orbus I motors and ODES do not. One ODES has been built and flown, and a second is being built for the launch scheduled in fiscal year 1995. Table 3 shows the status of STARS hardware acquisition and refurbishment as of December 1994. When the STARS program was begun, four launches a year were anticipated.
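The per-launch charge estimates above can be summarized in a small lookup table. The structure and function below are a hypothetical illustration, not an official pricing model; the dollar values (in millions) are the report's estimates.

```python
# Estimated charges to STARS customers beginning in FY 1996, in millions
# of dollars. Table and helper function are illustrative only.
COST_RANGES = {
    # (vehicle, within the next three launches?): (low, high)
    ("STARS I", True): (2.8, 2.8),
    ("STARS I", False): (4.1, 4.1),
    ("STARS II", True): (6.7, 9.1),
    ("STARS II", False): (8.1, 9.1),
}

def launch_cost_range(vehicle, within_next_three):
    """Return the (low, high) estimated per-launch charge in $ millions."""
    return COST_RANGES[(vehicle, within_next_three)]

print(launch_cost_range("STARS II", True))  # (6.7, 9.1)
```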
Now, no more than two launches a year are anticipated or even considered possible without increasing the number of Sandia personnel supporting the STARS program. According to a STARS official, there were two reasons the STARS office acquired such a large number of surplus Polaris first- and second-stage boosters. First, a large number of launches was expected when the STARS program was started. Second, the defect rate for these 1960s vintage motors was not known. To determine the cost of the STARS program through fiscal year 1994, we obtained funding data from BMDO, STARS program office, and Sandia National Laboratories. STARS officials also provided funding estimates for fiscal year 1995 and beyond. To determine planned launches, BMDO and STARS officials discussed and provided documents showing firm and potential launches. Air Force and TMD officials also provided information about their launch needs. BMDO and STARS officials and the SSDC treaty advisor provided information about how U.S. treaties may affect the future of the STARS program. To determine the status of STARS hardware, we reviewed relevant documents such as inventory records and refurbishment contracts. Additionally, Sandia and STARS officials provided detailed information about the status of the hardware program. We performed our work at BMDO in Washington, D.C.; SSDC in Huntsville, Alabama; and Sandia in Albuquerque, New Mexico. Our work was conducted from August through December 1994 in accordance with generally accepted government auditing standards. As requested, we did not obtain fully coordinated agency comments on a draft of this report. However, we did discuss the results of our work with SSDC and BMDO officials and have incorporated their suggestions. In general, they agreed with the information in this report. 
We are sending copies to the Chairmen of the Senate and House Committees on Appropriations; the Senate Committee on Armed Services; the House Committee on National Security; the Secretaries of Defense, the Air Force, the Army, and the Navy; and the Directors of BMDO and the Office of Management and Budget. Copies will also be made available to others upon request. If you or your staff have questions concerning this report, please contact me at (202) 512-4841. The major contributors to this report are J. Klein Spencer, Assistant Director; Bobby D. Hall, Evaluator-in-Charge; and Thomas L. Gordon, Evaluator. Appendix I contains pictures and maps of STARS and launch sites. A picture of ODES with its multiple reentry vehicles is shown in figure I.1. The STARS launch facility is located on Kauai, Hawaii (see figs. I.2 and I.3). The booster’s range, about 4,000 kilometers, is about the same as the distance from Kauai to the Kwajalein Atoll in the Marshall Islands, the intended destination. Kwajalein, where sensing and other tracking devices are located (see fig. I.4), is one of the two designated test ranges under the ABM Treaty. The other, White Sands Missile Range, is not suitable for the types of tests planned for STARS. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. 
To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Aviation weather refers to any type of weather that can affect the operation of an aircraft—anything from a brief delay in departure because of low visibility to a catastrophic accident during flight. For example, in March 1992, a USAir flight crashed during takeoff from La Guardia Airport in New York City, killing 27 people and injuring 21 others. Icing was identified as one of the factors that contributed to that accident. According to data from the National Transportation Safety Board and FAA, about 24 percent of all aviation accidents from fiscal year 1987 through fiscal year 1996 were weather-related. During the same period, about 35 percent of aviation fatalities occurred in weather-related accidents. About 88 percent of these accidents involved small private aircraft. Weather-related aviation accidents were most often caused by winds, poor visibility, or turbulence. Figure 1 shows all the types of weather events cited in aviation accidents over this period. (Because multiple weather factors may be cited in a single accident investigation and the percentages are rounded, the percentages sum to more than 100.) Weather need not cause an accident to affect aviation. FAA estimates that 72 percent of all delays over 15 minutes can be attributed to weather. These delays add to the cost of flying, either for passengers whose travel plans are disrupted or for airlines, which can incur additional fuel, servicing, and crew costs. The Air Transport Association estimates that delays cost airlines and passengers about $4 billion in 1996. (App. II provides more detailed information on weather-related accidents and delays.) FAA is responsible for maintaining the safety of the national airspace system. Because of the impact of weather on aviation, FAA has spent more than $1.4 billion in facilities and equipment funds since fiscal year 1982 to develop and purchase weather-related systems and equipment.
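The note on figure 1, that the weather-factor percentages sum to more than 100, follows from counting each cited factor separately. The sketch below shows the effect using invented accident records; the data are hypothetical, not the report's.

```python
# Each accident record lists every weather factor cited; a single
# accident can therefore contribute to several factor percentages.
# The records below are invented for illustration.
accidents = [
    {"winds", "poor visibility"},
    {"turbulence"},
    {"poor visibility"},
    {"winds"},
]

all_factors = set().union(*accidents)
pct = {f: 100 * sum(f in a for a in accidents) / len(accidents)
       for f in all_factors}

print(sum(pct.values()))  # 125.0: exceeds 100 because of multiple citations
```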
In future years, FAA expects to spend another $440 million on those systems already in development. FAA believes that its purchases of improved weather systems will help it meet the President’s stated goal of reducing fatal aviation accidents by 80 percent within 10 years. For example, FAA has purchased more than 500 automated surface observing system (ASOS) units, which use a series of instruments to automatically measure such meteorological data as wind speed and direction, temperature, and barometric pressure near airports. FAA is also buying systems, like the integrated terminal weather system (ITWS), that will collect and analyze weather data from ASOS, radars, and other systems and display them for use by air traffic controllers and supervisors. FAA relays the data provided by such systems, as well as information provided by the National Weather Service (NWS) and private vendors, to pilots through automated systems or direct voice communications from air traffic controllers. FAA also uses weather information when deciding how to handle air traffic, such as which runway to use at an airport. In addition, since fiscal year 1982, FAA has spent almost $169 million of its total funding of $3.3 billion for research, engineering, and development on research related to aviation weather. FAA’s research has looked into ways to improve radars and other weather sensors, to detect and avoid turbulence, and to support the early development of some of the systems it has purchased or plans to purchase. Much of this research is conducted under contract by several universities and federally funded laboratories, including the National Center for Atmospheric Research, the National Oceanic and Atmospheric Administration’s (NOAA) Forecast Systems and National Severe Storms Laboratories, NWS’ Aviation Weather Center and National Centers for Environmental Prediction, and the Massachusetts Institute of Technology’s Lincoln Laboratory.
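From the figures above, aviation weather research has been a modest share of FAA's research, engineering, and development funding. The percentage below is our derived figure, not one stated in the report.

```python
# Share of FAA research, engineering, and development funding spent on
# aviation weather research since FY 1982. Dollar figures (in millions)
# come from the report; the percentage is derived, not stated there.
weather_research = 169
total_red_funding = 3300

share_pct = 100 * weather_research / total_red_funding
print(round(share_pct, 1))  # 5.1
```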
Several other federal agencies also collect and disseminate aviation weather information, as well as conduct aviation weather research. NWS, which is part of NOAA in the Department of Commerce, is responsible for collecting, analyzing, and disseminating weather information in general and has worked with FAA on joint projects such as ASOS and an advanced national weather radar system. NWS also provides meteorologists for some of FAA’s air traffic control centers. Other agencies with related aviation weather responsibilities include the National Aeronautics and Space Administration, which conducts basic research on weather-related topics, and the Department of Defense, which provides aviation weather information to military pilots and command officers. The Office of the Federal Coordinator for Meteorology (OFCM), which is also part of NOAA, was created to coordinate meteorological services and research for all federal agencies. However, the office does not have the authority to direct the weather operations of other federal agencies. Over the past 3 years, several reports have raised concerns about the quality of the weather information available to the aviation community. In 1995, the National Research Council (NRC), examining the roles and missions of the agencies involved in aviation weather, found that FAA, NWS, and the other agencies did not coordinate their activities. NRC called upon FAA to take the lead in federal aviation weather efforts. At the same time, a subcommittee of FAA’s Research, Engineering, and Development (RE&D) Advisory Committee that was examining the adequacy of FAA’s aviation weather research found a number of problems. This subcommittee reported that FAA needed to improve its aviation weather research as well as its delivery of weather information to system users, such as pilots, controllers, and dispatchers. Finally, FAA’s advisory committee released a report in 1997 on research related to the national airspace system.
This report found that FAA’s efforts on aviation weather were unfocused and that the agency had not clearly defined its role in providing aviation weather information. We contacted the members of the NRC aviation weather committee and the FAA advisory committee that addressed weather issues and asked for their assistance in our efforts to follow up on their recommendations that were specifically addressed to FAA. In obtaining their assistance, we asked all of the committee members, in a survey, to identify the highest-priority recommendations. The highest-rated recommendations address three general topics: policy and leadership, interagency coordination, and efforts to address users’ needs. We chose an expert panel from among those who answered our survey, with members representing the various users of aviation weather information, such as airline representatives, commercial and private pilots, and air traffic controllers. The panel reviewed the information we had gathered on FAA’s actions to implement the eight recommendations and rated FAA’s general progress on each recommendation on a 5-point scale, from very poor to excellent. The panelists were also asked to indicate whether FAA’s actions were sufficient to address the recommendation and whether FAA had taken these actions in a timely manner. In discussing the recommendations, the panelists repeatedly raised concerns about FAA’s funding of weather activities, a fourth area of concern that was mentioned in the original reports. NRC and FAA’s RE&D Advisory Committee found that FAA did not exercise leadership for aviation weather services, partly because it lacked a clear policy on weather and partly because of organizational inefficiencies. FAA has attempted to address these criticisms by creating an aviation weather directorate and issuing a policy on weather. 
However, members of our expert panel did not think these actions went far enough to address the previously identified weaknesses, generally rating FAA’s progress in this area as poor. Reports by NRC and FAA’s RE&D Advisory Committee criticized FAA for failing to exercise leadership on aviation weather issues. For example, NRC found that “vigorous leadership within the federal government . . . needed to build consensus and coordinate the overall effort to optimize aviation weather services and related research.” It concluded that FAA was the agency best able to exercise that leadership because of its aviation expertise and legal authority. All three reports also criticized FAA for not developing a policy to define its role and priorities in aviation weather and recommended that FAA provide a clear policy statement on its role in providing aviation weather services. For example, NRC noted that under FAA’s policies, pilots have the primary responsibility for keeping their aircraft away from hazardous weather, while air traffic controllers are principally responsible for separating aircraft from one another, thus avoiding collisions. The report found that FAA’s guidance required controllers to remain aware of current weather conditions and relay information on hazardous weather to pilots, but it did not allow controllers to direct aircraft away from hazardous weather, as they direct aircraft away from other aircraft. NRC concluded that FAA should develop procedures that allow controllers to take a more active role in separating aircraft from hazardous weather, especially when they have more accurate weather information than the pilot. The 1995 advisory committee report reached similar conclusions. 
The 1997 advisory committee report concluded that even though the definition of hazardous weather is highly dependent upon the capabilities of the individual aircraft and flight crew, FAA’s mission should include the responsibility for transmitting weather information to pilots and dispatchers in order to improve the separation of aircraft from hazardous weather and to increase collaboration between pilots and air traffic controllers. NRC and FAA’s advisory committee also cited weaknesses in FAA’s internal organization as a reason for the agency’s not taking a leadership role in weather issues. For example, NRC found that no single office within FAA had the authority and responsibility for setting priorities for aviation weather. The 1997 advisory committee report found that six offices within FAA were responsible for setting priorities for aviation weather research. According to FAA’s Manager for Weather Research, prior to 1995, these offices did not set priorities to ensure that the most important research projects received funding. Instead, this official stated, FAA set its research priorities by reviewing the requests submitted by the national laboratories and contractors to the several offices with responsibility for aviation weather. These offices did not coordinate their efforts internally but submitted their requests separately to FAA’s Office of Research and Acquisitions. As a result, NRC and FAA’s advisory committee found that aviation weather research was hampered by a lack of coordination, funding, and priority setting. To address this problem, NRC recommended that FAA appoint an official to serve as the single focal point with responsibility for providing effective internal and external coordination of aviation weather activities. FAA took several actions to address concerns about its leadership role in aviation weather. 
First, in response to concerns about how it organized its aviation weather activities, FAA made several organizational changes to consolidate these activities. In October 1995, FAA created an aviation weather directorate, which is intended to serve as the federal government’s focal point for determining aviation weather requirements, policies, and plans. The directorate was intended to fulfill the aviation weather responsibilities previously carried out by several organizations within FAA. The directorate is responsible for setting requirements for, and developing programs and policies on, aviation weather. In February 1996, FAA created a program to coordinate its research efforts on improving weather observations, warnings, and forecasts. The weather research program is organized into eight product development teams that focus on topics such as turbulence and visibility. According to program officials, the program oversees the research conducted by the national laboratories and universities and sets priorities for requests to conduct research on aviation weather. Second, in response to congressional direction, in April 1996, FAA began implementing a new acquisition management system designed to provide for more timely and cost-effective acquisitions for the entire agency. Under this system, FAA operates five integrated product teams, which are responsible for the research, development, acquisition, and installation of all new equipment within their area of expertise. To carry out these tasks, each team includes staff with various areas of expertise, such as systems engineers, lawyers, contract specialists, and representatives of the organizations responsible for the operation and maintenance of the systems acquired. In the past, according to FAA officials, the responsibility for the acquisition of such systems would be carried out sequentially through various FAA offices, depending on whether the systems were being designed, purchased, or deployed. 
Now, one team is responsible for all three of those activities. Two of these teams deal with weather systems: one with weather processor systems and one with weather sensors. The weather processor team, for example, develops and acquires systems such as ITWS, which takes data from various sensors and displays the data for users. In addition, since the NRC and advisory committee reports were issued, FAA has worked with other federal agencies involved in aviation weather to develop the National Aviation Weather Strategic Plan, which was published in April 1997 and is intended to lay out a vision of how to reduce the number of weather-related aviation accidents and delays. According to FAA’s Director of Aviation Weather, plans to implement the interagency strategic plan and FAA’s aviation weather policy are still under development, and to date, no policies or regulations of FAA’s have been amended to reflect the new weather policy. Finally, in September 1997, the Administrator of FAA issued an aviation weather policy in which FAA accepted responsibility for taking the lead in aviation weather services. According to this policy statement, FAA will (1) work closely with the federal agencies concerned with aviation weather; (2) take the lead in developing a plan to meet stated national goals concerning aviation weather; and (3) ensure that the needs of FAA and the aviation community are being addressed and that research, development, and acquisition are focused to improve the safety of the air traffic system. Three of the recommendations our January 1998 expert panel reviewed addressed FAA’s lack of leadership on aviation weather issues. These recommendations included two by the RE&D Advisory Committee in 1997. One recommendation called for FAA to see weather as a safety issue, not just a delay issue. 
The committee also recommended that FAA issue a “clear and cohesive policy statement regarding the agency’s important role” in aviation weather, including the need to separate aircraft from hazardous weather. In the third recommendation, NRC called for FAA to see weather as an important part of all of its operations. Several members of our expert panel applauded FAA for issuing a policy on weather, calling the policy “a step in the right direction.” One panelist also stated, “I don’t think that you can take a snapshot right now and evaluate where FAA is because . . . is a long-term program.” However, panelists also questioned whether the changes cited by FAA demonstrate that it has taken the lead for federal aviation activities. Specifically, several panelists expressed concern that FAA had not developed a plan to implement the new policy. For example, one panelist stated, “I . . . think that meetings and policy statements and all that are . . . just a first step. . . . [Y]ou have to . . . look at what has occurred.” Another added, “I think the intent of the committee . . . was to suggest that if you come out with a policy statement that you would . . . take some action to put some teeth into it.” Several panelists were also concerned that FAA did not believe that a policy on separating aircraft from hazardous weather was necessary, as the advisory committee had recommended. According to one panelist, the responsibility for controllers to provide weather information to pilots is implicit and ambiguous, “but if that was articulated, then [it would] provide a basis for saying that controllers need better weather information to actually provide that service.” We asked the panelists to rate FAA’s overall progress on a 5-point scale. In rating the recommendations dealing with policy and leadership, most panelists saw FAA’s progress in treating weather as a safety issue as fair.
However, most panelists also thought that FAA had made poor progress in establishing a weather policy that addresses the role of controllers in providing weather information and in seeing weather as an important part of its operations. In addition, most of the panelists indicated that FAA’s actions on these three recommendations were neither timely nor sufficient. NRC and FAA’s RE&D Advisory Committee raised concerns about FAA’s coordination with other federal agencies involved in aviation weather, especially in the area of research. FAA stated that it has increased its coordination with NWS as well as with multiagency working groups. Members of our expert panel commented, however, that they did not see any evidence that the increased number of meetings was having an impact on the agencies’ aviation weather efforts. As a result, they generally rated FAA’s progress in this area as poor. Two of the three reports by NRC and the advisory committee found that FAA did not effectively coordinate its aviation weather responsibilities with other agencies involved in weather. Inadequate interagency coordination was especially apparent in research and development. For example, in 1995, NRC found little communication between FAA and NWS and was unable to identify any interagency coordination for research and development. It also found that the National Aeronautics and Space Administration was not included in FAA’s long-range planning for aviation weather. NRC recommended that FAA and NWS establish more formal coordination procedures. NRC and one advisory committee also criticized FAA for not implementing a 1977 memorandum of agreement with NWS, under which FAA was to provide NWS with a list of FAA’s requirements for aviation weather services and research. FAA, NWS, and Department of Defense officials we spoke with agreed with NRC’s assessment that FAA’s coordination on aviation weather activities had been limited. 
However, they also pointed out that FAA has taken a number of steps to increase its coordination with the other federal agencies engaged in weather activities. For example, FAA points to its work with OFCM, NWS, and other agencies on the National Aviation Weather Strategic Plan. FAA and the other agencies are continuing to work together to develop procedures to implement the goals outlined in the plan. According to FAA’s Director of Aviation Weather, these procedures will be published in May or June 1998. In addition, FAA and NWS have increased the frequency of their meetings to address aviation weather concerns. While FAA could document only one such meeting in 1995, it identified four meetings between the two agencies in both 1996 and 1997. Some of these meetings have been attended by high-level officials—FAA’s Director for Air Traffic Requirements and NWS’ Deputy Assistant Administrator for Operations. FAA officials also believe that the agency’s joint activities with NWS are further evidence of improved coordination. They cited, for example, the joint funding of aviation weather research and participation in management councils for two jointly developed weather systems. Finally, FAA and the Department of Defense have arranged for a military officer to be detailed to FAA as a military adviser for aviation weather requirements. This position, currently staffed by an Air Force lieutenant colonel, is intended to provide FAA with advice on planning, implementing, and monitoring FAA’s weather programs, including training, certifying, and integrating related weather programs operated by FAA and the Department of Defense. Two of the recommendations our panelists reviewed addressed NRC’s concerns about coordination. One recommendation called upon FAA and NWS to reestablish “high-level liaisons” to be responsible for defining and coordinating aviation weather research, development, and operations. 
NRC also recommended that FAA and NOAA work together to ensure that aviation weather research and development are “closely coupled” to the agencies’ short-term operational needs. In discussing FAA’s implementation of these two recommendations, our panelists emphasized the importance of coordination among the federal agencies. One panelist, for example, stated that while a number of agencies are involved in aviation weather research, they are not working to leverage their resources or coordinate their research projects. Another panelist commented that OFCM has not been an effective forum for coordination because it does not have any authority over other agencies. While the panelists believed that FAA had taken steps to improve its coordination, they questioned whether the agency had gone as far as the recommendations intended. For example, one panelist stated, “[A]bsolutely, the dialogue between the FAA and NWS has improved. But . . . it would be very difficult for it not to improve because there was no dialogue.” This panelist also noted that the meetings that have occurred do not appear to have contributed substantially to the development of a list of FAA’s requirements for aviation weather services and research, as required by the 1977 memorandum of agreement. On the topic of coordinating research with operational needs, several panelists praised the weather research projects FAA was pursuing. However, panelists also raised concerns about the extent of coordination among the agencies’ research programs. Several panelists cited the lack of communication between FAA’s air traffic controllers and NWS’ forecasters as an example of weaknesses in coordination at the operational level. According to the panelists, even when controllers and forecasters are in the same room, communication is limited.
In regional centers, one panelist noted, few controllers use the forecasts provided by NWS meteorologists because they would have to leave their radar display and go to another part of the room to get the information. Most panelists rated FAA’s progress in implementing the recommendations on coordination as poor. The panelists also indicated that FAA’s actions on these recommendations were neither timely nor sufficient. In the reports by NRC and FAA’s RE&D Advisory Committee, experts also raised concerns that FAA was not providing enough consistent weather information and training to aviation users, such as pilots, dispatchers, and air traffic controllers. FAA responded that it is developing or deploying systems to meet the needs of all users, as well as instituting a number of training courses. However, several panelists questioned whether the systems and training courses FAA cited adequately provide the type of information and training that system users have determined is necessary. Each of the three reports raised concerns about the lack of attention paid to the needs of all users of the aviation system. According to NRC, one of FAA’s goals is to provide consistent weather information to all types of users. However, NRC found that “pilots, controllers, and dispatchers often obtain weather information from different sources that may not agree about the location, duration, or severity of adverse weather.” For example, a controller’s radar screen may not show clouds that a pilot can see out the window or on a cockpit weather radar screen. In addition, some of the weather information given to pilots covers broad geographic areas, making it hard for them to determine if they will experience hazardous weather during their flight. According to NRC, the needs of various aviation system users were well known, but the federal government had not acted adequately to address these concerns. 
Similarly, the 1997 advisory committee report found that while the needs of users may vary because of such factors as the capabilities of the pilot or aircraft, “for safety and efficiency, all participants—controller, pilots, and dispatchers—should have consistent, timely, and common knowledge of the weather situation.” NRC cited FAA’s experience with the automated weather observing system known as ASOS to illustrate the impact of inadequately considering user needs in developing a weather system. Although FAA worked with NWS on the development of ASOS, some aviation users complained that the system as deployed did not meet their needs. Specifically, ASOS was designed to replace human weather observers. However, while a human observer can look at weather conditions over a broad area, ASOS can measure weather conditions only directly overhead. As a result, several aviation groups commented that ASOS provided unrepresentative observations when weather conditions were patchy or changing rapidly. Such inaccurate observations could cause pilots to avoid an airport when it is safe to land but ASOS reports unsafe weather or could cause pilots to attempt to land at an airport when unsafe conditions are not reported. Because ASOS’ observations cannot substitute for the completeness of human observations, FAA is still employing human weather observers. NRC cited ASOS as an example of FAA’s failure to “serve as an effective intermediary between the NWS and aviation system users.” Both NRC and the advisory committee also cited the need for all users to receive adequate training and observed that they were not currently receiving such training. They cited weaknesses in the weather training provided to pilots and controllers that undermine their ability to use available weather information to their maximum advantage. “Training offers great potential for near-term reductions in weather-related accidents,” NRC concluded. 
Similarly, the advisory committee reported in 1995, “The Administrator should set policies for training and certification that will lead to enhanced understanding and decision-making regarding weather, taking into account the many significant forthcoming changes in the National Airspace System.” FAA weather officials cited the various aviation weather systems the agency is developing and deploying as evidence that it is meeting the needs of all aviation users. Table 1 lists the intended users and the implementation schedule for each system cited by FAA. FAA and NWS are also currently working to enhance ASOS to address some of the concerns raised by aviation users. Regarding training, officials at FAA’s Academy provided materials describing the weather-related courses taught at the Academy and through computer-based instruction. While some of the computer-based courses offer an overview of weather topics, most of the Academy’s courses provide training on how to use systems like those identified in table 1. The final two recommendations the panel considered focused on meeting the needs of aviation system users. NRC called for FAA to focus on addressing users’ urgent unmet needs, such as the improved communication of weather information, improved observations and forecasts, and a “comprehensive training program.” In 1997, the advisory committee recommended that FAA support “a weather architecture, which includes the appropriate elements and interfaces needed to disseminate critical weather information to ALL aviation users, supported by adequate funding and priorities.” The panelists were most critical of FAA’s actions to date in this area.
Speaking about providing improved weather information to users, one panelist said, “You can get better information on the [. . .] than you can in the [FAA] system.” Another panelist questioned who would benefit from the systems FAA is developing, saying, “The systems are designed to get the information to people on the ground, but, quite frankly, one of the key individuals who needs that information is the captain of the airline, who is up at 39,000 feet.” Similarly, several panelists expressed concern that FAA had not integrated the systems that it provides to different aviation weather users. According to one panelist, “There was not, and is not yet, a coherent information architecture to distribute the weather information.” On the issue of training, the panelists agreed that the courses FAA identified did not fully respond to the recommendation. According to one panelist, “The recommendation is a comprehensive national plan. This is just a hodgepodge.” Another panelist noted, “[. . .] have a mandated 4, 5, 6 hours of security training every year for something that, fortunately, one out of a million . . . person will encounter, and we have nothing, or relatively nothing, on weather, which is something that they will encounter every day in every one of their flights.” The panelists also raised concerns about the adequacy of the weather training provided to air traffic controllers, noting that there is often a disparity among controllers’ abilities to interpret weather information. Overall, most panelists rated FAA’s progress in meeting users’ unmet needs as very poor. The panel rated FAA’s efforts to develop aviation weather systems to support all users as poor. The panelists did not believe that FAA’s actions on these recommendations were either timely or sufficient. Each of the three reports also raised concerns about the amount of funding FAA has provided for weather activities.
NRC, for example, found that while funding levels for activities such as training and research were small compared with the cost of acquiring aviation weather systems, the lack of funding for such activities could adversely affect system deployment. The RE&D Advisory Committee also stated in 1995 that, because of the low priority given to weather activities, “weather-related programs are inconsistently funded, causing less than acceptable performance.” Finally, in 1997, the advisory committee found that “as a result of the present budget environment, the FAA management has decided to give weather programs a lower priority than other system areas, thereby causing cancellations or significant delays to critical weather efforts.” The reports discussed several instances that raised questions about FAA’s commitment to funding aviation weather projects that meet users’ needs. For example, FAA eliminated funding for the Advanced Weather Products Generator, a system designed to provide weather information to pilots and other external aviation users. According to NRC, this decision represented a “lack of focus on pilots’ needs.” The 1997 advisory committee report called FAA’s plans to consolidate weather data using systems like ITWS logical but questioned FAA’s commitment to fund such projects over the long term. Our review of FAA’s budget data confirms the committees’ findings and the panelists’ concerns about the relative importance FAA places on weather funding. FAA has a number of major activity areas linked to its mission and management goals. Although aviation weather is a prominent factor in aviation accidents, FAA’s spending for research and acquisitions related to weather has been lower than spending for most other agency research and acquisition activities. For example, from fiscal year 1990 through fiscal year 1998, aviation weather research accounted for 4 percent of the funds allocated to all types of FAA research. 
Spending on weather activities was lower than spending on all but three other areas—airport technology, environment and energy, and research and development partnerships—as figure 2 shows. FAA spent 8 percent of its research funds on weather in fiscal year 1990 but only 1 percent in fiscal years 1994 and 1995. In fiscal year 1998, FAA plans to spend 8 percent of its research funds on weather-related projects.

[Figure 2 legend: aircraft safety technology ($339); system security technology ($320); communications, navigation, and surveillance ($239).]

Similarly, funding for the acquisition of aviation weather systems was lower (eighth out of eight areas) than for all other program areas for fiscal years 1990 through 1998, as figure 3 shows. Over this period, acquisitions for aviation weather accounted for 5 percent of all spending for facilities and equipment, varying from a high of 8 percent in fiscal year 1990 to a low of 4 percent in fiscal years 1993 and 1997. In fiscal year 1998, FAA plans to spend nearly 5 percent of its facilities and equipment funds on weather-related projects.

[Figure 3 legend: mission support ($3,573); facilities ($3,356); communications ($2,216).]

Finally, during the last 3 fiscal years, FAA has requested less funding for aviation weather than the Congress has provided. Table 2 shows the amount of funding FAA requested for aviation weather research and acquisitions and the amount that the Congress provided. Even though FAA’s management has acknowledged the increasing value of weather research, it is still difficult for aviation weather to get funding, according to FAA’s Manager for Aviation Weather Research. In addition, this official stated that neither FAA’s request nor the recent level of appropriations has been enough to support an adequate weather research program. He estimated that FAA’s planned aviation weather research for the next 5 to 7 years would cost $15 million to $18 million per year.
Another FAA official pointed out that other competing demands, such as security programs, continue to have a higher priority. Several factors may account for the lower funding levels given to aviation weather. First, according to FAA’s Director of Aviation Weather and FAA’s Manager for Weather Research, without a central office, aviation weather did not have a funding advocate when decisions were being made on the allocation of resources. In addition, these officials said, some of the FAA leadership, until recently, did not believe that weather was a contributing factor in safety and in delays and therefore did not consider it a high priority. Finally, FAA does not assign weather information a high priority in its architecture plans for the national airspace system. FAA categorizes its information needs according to three classifications: critical, essential, and routine, with critical being the highest priority. Critical information is information that, if lost, would prevent the national airspace system from exercising safe separation and control over aircraft. Essential information is information that, if lost, would reduce the capability of the national airspace system to exercise safe separation and control over aircraft. Since FAA does not believe most aviation weather systems fall into the critical category, it classifies them as essential. Because weather information is not considered critical, aviation weather systems are often among the first areas cut, FAA officials told us. Several panelists commented that the level of funding FAA was providing for research projects was not adequate, potentially jeopardizing multiyear research projects. While some panelists stated that FAA could be reducing its funding requests deliberately because it believed that the Congress would restore funding, others raised the possibility that the low funding requests reflect the fact that FAA continues to make weather issues a lower priority.
Owing to the significant impact of hazardous weather on aviation safety and efficiency, improving the weather information available to all users of the aviation system should be one of FAA’s top priorities. However, a panel of experts presented with information on FAA’s actions to improve its management of aviation weather concluded that FAA had done a poor job in addressing the most significant concerns raised by previous reports. While the panelists recognized that FAA had taken certain steps, such as issuing a policy to define its role in aviation weather and increasing coordination with NWS, many questioned FAA’s commitment to implementing permanent changes resulting from these actions. On the basis of the panel’s discussion and the information we gathered, we agree that FAA has addressed some of the concerns raised in previous reports. However, FAA’s responses also demonstrate that some of the issues raised by the three reports have not been fully addressed. For example, FAA indicated that issuing a policy defining its staff’s role in separating aircraft from hazardous weather is not necessary—a key function if the weather information it collects is to improve safety. Furthermore, two conditions—weather information’s being classified as a lower priority than other types of air traffic information and the lack of training for FAA staff on how to use weather information—indicate that despite the new policy, weather continues to be a lower priority for FAA than its traditional function of separating aircraft from other aircraft. The implementation plan FAA proposes to issue later this year provides the agency with an opportunity to respond to these continuing concerns with stronger evidence of its commitment to weather issues. We provided FAA with a draft of this report for its review and comment. We met with FAA officials, including FAA’s Director for Aviation Weather, to obtain FAA’s comments. 
FAA commented that the draft report accurately reflected the condition of the organization, and it agreed that corrective actions are needed. FAA also suggested that we add some information on several points, including the findings from the advisory committee’s 1997 report on separating aircraft from hazardous weather and additional actions FAA had taken regarding coordination, training, and deploying aviation weather systems. We added information to the report, where appropriate, to reflect these suggestions. We performed our review from August 1997 through April 1998 in accordance with generally accepted government auditing standards. Our scope and methodology are discussed further in appendix I. We are providing copies of this report to interested congressional committees; the Secretary of Transportation; and the Administrator, FAA. We will also make copies available to others upon request. If you or your staff have any questions, please call me on (202) 512-3650. Major contributors to this report are listed in appendix VII. At the request of the Chairwoman and Ranking Minority Member of the Subcommittee on Technology, House Committee on Science, we agreed to review the Federal Aviation Administration’s (FAA) progress in addressing recommendations made by outside experts on FAA’s management of aviation weather. To address this topic, we first reviewed the reports on aviation weather management prepared by the National Research Council and FAA’s Research, Engineering, and Development Advisory Committee. We also interviewed officials and reviewed policy, budget, and planning documents at FAA’s headquarters, the Orlando International Airport control tower, the National Weather Service, and the Office of the Federal Coordinator for Meteorology. Our discussions with agencies other than FAA focused on their joint efforts with FAA and were not designed to evaluate the agencies’ individual aviation weather activities. 
We also worked with the members of the committees that wrote the three reports. First, we sent a survey to each member of the committees that listed each of the recommendations made by those reports and asked the respondents to rate their importance. The survey form and results are included in appendix III. We received responses from 28 of the 35 committee members surveyed. The seven recommendations most highly rated by the respondents dealt with the general topics of policy and leadership, coordination, and efforts to address user needs. One of the recommendations chosen by the respondents addressed coordination of research. To ensure that the panel adequately addressed concerns about coordination raised in the previous reports and the original request, we added the second-highest rated recommendation dealing with coordination, resulting in a final total of eight recommendations. We then asked officials responsible for FAA’s weather activities to provide evidence of the actions FAA had taken to address these eight high-priority recommendations. FAA provided written responses and some supporting material to support its actions to address each of the eight recommendations. The full text of each of the recommendations, FAA’s response, and selected supporting material are presented in appendix IV. Finally, we convened an expert panel of individuals who had answered our survey, judgmentally selecting a subset of eight individuals who represented various users and providers of aviation weather information. The names and affiliations of the panel members are listed in appendix V. We held an all-day meeting with the seven-member panel (one invitee was unable to attend but provided written comments) at our offices in Washington, D.C. For each of the eight high-priority recommendations, we presented the panelists with FAA’s response, supporting material submitted by the agency, and any other information about FAA’s actions that we had identified during our previous work. 
We asked for their comments on (1) the original intent of the recommendation, (2) any other actions FAA had taken to address the recommendation, and (3) the adequacy of FAA’s response. At the end of each discussion, we asked the panelists to rate, using an anonymous ballot, FAA’s progress in addressing each recommendation. The panelists were given the option of rating FAA’s overall response as very poor, poor, fair, good, or excellent. They were also asked if FAA’s actions were consistent with the intent of the recommendations, sufficient, and timely. The results of these ballots are included in appendix VI. We recorded and transcribed the meeting to ensure that we accurately captured the panel members’ statements. As also requested, we are providing information on the effect of weather on aviation accidents and delays. (See app. II.) To determine the impact of weather on aviation accidents and delays, we worked with FAA’s National Aviation Safety Data Analysis Center to analyze data from the National Transportation Safety Board’s accident database and FAA’s Operations Network. We did not independently verify the reliability of the computer-based data provided by FAA, because they are not material to our findings. Many factors contribute to aviation accidents and delays. Weather has a significant role in these occurrences. Data from the National Transportation Safety Board (NTSB) show that weather is a cause or contributing factor in almost one-quarter of accidents and more than one-third of all injuries and fatalities. According to FAA data for 55 airports, weather caused almost three-quarters of all delays. On August 2, 1985, a Delta Airlines’ Lockheed L-1011 with 165 persons aboard crashed after encountering severe weather conditions on its approach to the Dallas/Fort Worth International Airport: 135 persons died, and 28 were injured. 
Although NTSB concluded that the accident was the fault of the pilot, procedures, and training, the following weather conditions were cited as contributing factors: thunderstorm, lightning, rain, windshear, wind, and downdraft. Of the 23,383 accidents from 1987 through 1996, NTSB had completed investigations of 22,489 accidents as of March 1, 1998. For its completed investigations, NTSB determined that weather was a cause or contributing factor in 5,286, or about 24 percent, of the accidents. See table II.1. Of the 5,287 aircraft involved in the 5,286 weather-related accidents that occurred in 1987 through 1996 for which NTSB had completed investigations, 4,669, or about 88 percent, involved general aviation aircraft, and 73, or about 1 percent, involved air carriers. See figure II.1 for an analysis of accidents by type of aviation. Multiple aircraft may be involved in an accident. Of the 19,426 general aviation accidents and 240 air carrier accidents that occurred in 1987 through 1996 for which NTSB had completed investigations, weather-related accidents accounted for 24 percent of all the general aviation accidents and about 30 percent of all the air carrier accidents. Wind/windshear was the most frequent cause or contributing factor cited in weather-related general aviation accidents. According to the Aircraft Owners and Pilots Association (AOPA), the most common problem in wind-related general aviation accidents is the loss of control of the aircraft while landing because of crosswinds, gusts, and tailwinds. These accidents typically result in damage to the aircraft, usually with no injuries. Turbulence was the most frequent cause or factor cited in weather-related air carrier accidents. Turbulence-related accidents typically involve injuries to unbelted flight crew or passengers during the cruise phase of the flight. See figures II.2 and II.3. Multiple weather factors may be cited in an accident investigation.
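As a quick arithmetic cross-check, the rounded accident shares quoted above follow directly from the NTSB counts cited in this appendix. The snippet below is illustrative only and is not part of GAO's analysis; the variable names are ours.

```python
# Cross-check of the NTSB accident shares quoted above (1987-1996,
# investigations completed as of March 1, 1998).
completed_investigations = 22_489
weather_related = 5_286    # accidents with weather as a cause or contributing factor
aircraft_involved = 5_287  # aircraft involved in the weather-related accidents
general_aviation = 4_669
air_carrier = 73

print(f"weather share of accidents: {weather_related / completed_investigations:.0%}")  # ~24%
print(f"general aviation share:     {general_aviation / aircraft_involved:.0%}")        # ~88%
print(f"air carrier share:          {air_carrier / aircraft_involved:.0%}")             # ~1%
```

The `:.0%` format specifier multiplies by 100 and rounds to whole percent, matching the rounding used in the report's text.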
On January 17, 1996, an American Airlines’ Airbus A-300 with 268 persons aboard, en route from Miami, Florida, to San Juan, Puerto Rico, encountered severe turbulence. Although the captain had turned on the “fasten seat belt” sign, 20 passengers were injured, 3 of them seriously. NTSB determined that turbulence and noncompliance with the seat belt sign were the causes of the injuries. NTSB also determined that American Airlines’ failure to issue a hazardous weather advisory to the flight crew was a contributing factor. In the more than 22,000 accidents that occurred between 1987 and 1996 for which NTSB had completed its investigation, 12,415 injuries were recorded. NTSB determined that weather was a cause or contributing factor in 3,199, or about 26 percent, of the injuries in these accidents. See table II.2. Of the 3,199 weather-related injuries that occurred in 1987 through 1996, 2,345, or about 73 percent, involved general aviation aircraft, while 372, or about 12 percent, involved air carriers. See figure II.4 for an analysis by type of aviation. Weather-related injuries accounted for about 25 percent of all general aviation injuries and about 28 percent of all air carrier injuries that occurred between 1987 and 1996 for which NTSB had completed accident investigations. Wind/windshear was the most frequent cause or contributing factor cited in general aviation accidents with injuries. Turbulence was the most frequent cause or factor cited in air carrier accidents with injuries. See figures II.5 and II.6. On March 22, 1992, a USAir Fokker F-28 stalled on takeoff from La Guardia International Airport and became partially inverted and submerged in the bay. Of the 51 persons on board, 27 died and 21 were injured.
NTSB determined that the accident was caused by USAir’s and FAA’s failure to provide the flight crew with adequate procedures as well as the flight crew’s failure to confirm that the wings were free of ice. NTSB determined that icing conditions were one of several other factors that contributed to the accident. In the more than 22,000 accidents that occurred between 1987 and 1996 for which NTSB had completed its investigation, 8,791 fatalities were recorded. NTSB determined that weather was a cause or contributing factor in 3,043, or about 35 percent, of the deaths in these accidents. See table II.3. Of the 3,043 weather-related fatalities that occurred in 1987 through 1996, 2,493, or about 82 percent, involved general aviation aircraft, while 40, or about 1 percent, involved air carriers. See figure II.7 for an analysis by type of aviation. Of the 7,064 general aviation fatalities and 570 air carrier fatalities that occurred between 1987 and 1996 for which NTSB had completed accident investigations, weather-related fatalities accounted for about 35 percent of all general aviation fatalities and 7 percent of all air carrier fatalities. Low visibility/ceiling was the most frequent cause or contributing factor cited in fatal general aviation accidents. According to AOPA, flying under visual flight rules into deteriorating weather conditions and dark nights is the most frequent cause of fatal general aviation accidents. Icing was the most frequent cause or factor cited in fatal air carrier accidents. However, because only six weather-related air carrier accidents involved fatalities, no conclusions can be drawn from this small number of occurrences. See figures II.8 and II.9. According to the Air Transport Association (ATA), flight delays of 1 minute or more cost airlines and passengers more than $3 billion each year. See table II.4 for the costs of delays to U.S. major and national carriers and passengers in 1993 through 1996.
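As with the accident figures earlier in this appendix, the fatality shares can be cross-checked arithmetically from the NTSB counts quoted above. The snippet is illustrative only; the variable names are ours, not NTSB's.

```python
# Cross-check of the NTSB fatality shares quoted above (1987-1996,
# completed investigations).
total_fatalities = 8_791
weather_fatalities = 3_043
ga_weather = 2_493    # general aviation fatalities in weather-related accidents
carrier_weather = 40  # air carrier fatalities in weather-related accidents
ga_total = 7_064
carrier_total = 570

print(f"weather share of all fatalities:     {weather_fatalities / total_fatalities:.0%}")  # ~35%
print(f"weather share of GA fatalities:      {ga_weather / ga_total:.0%}")                  # ~35%
print(f"weather share of carrier fatalities: {carrier_weather / carrier_total:.0%}")        # ~7%
```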
In 1993 through 1997, according to FAA, more than 1.2 million flights were delayed for at least 15 minutes at the 55 airports connected to the Air Traffic Operations Network. Of these flights, about 922,000, or 72 percent, were delayed for weather-related reasons. See table II.5 for a summary of delays by primary cause in 1993 through 1997.

Dr. John Dutton, Dean, College of Earth and Mineral Sciences, Pennsylvania State University; Vice Chairman, NRC Aviation Weather Services Committee
Dr. John Hansman, Professor of Aeronautics and Astronautics, Massachusetts Institute of Technology; FAA RE&D Advisory Committee Member
Brig. Gen. Albert Kaehn (U.S. Air Force, retired), Former Commander, Air Weather Service; Chairman, NRC Aviation Weather Services Committee
Brig. Gen. John Kelly, Jr. (U.S. Air Force, retired), Former Commander, Air Weather Service; FAA RE&D Advisory Committee Member
Mr. Bruce Landsberg, Executive Director, AOPA Air Safety Foundation; FAA RE&D Advisory Committee Member
Mr. Robert Massey, Chairman, Air Line Pilots Association Weather Committee; NRC Aviation Weather Services Committee Member
Mr. William Sears, Director of Air Traffic Capacity and Meteorology, Air Transport Association of America (Representing Jack Ryan, FAA RE&D Advisory Committee Member)

As discussed in appendix I, we convened an expert panel to evaluate FAA’s progress in implementing eight recommendations rated highly by respondents to our August 1997 survey. For each of the eight recommendations, the panelists were presented with the recommendation and offered the opportunity to comment on its intent; presented with FAA’s response and supporting documentation, as well as other evidence of FAA’s activities that we identified during our review, and given the opportunity to add any other FAA activities of which they were aware; and given a period of time to discuss the evidence presented.
After the discussion, the panelists were asked to individually rate FAA’s overall progress using the following question: Considering FAA’s actions and progress made, and any other factors you feel are relevant, what is your overall rating of FAA’s response to this recommendation? 1. ___ Very Poor 2. ___ Poor 3. ___ Fair 4. ___ Good 5. ___ Excellent The panelists’ answers are presented in table V.1.

Table V.1: Expert Panel’s Ratings of FAA’s Overall Progress in Addressing Eight Recommendations

Recommendations related to policy and leadership

Recommendation 1: The FAA Administrator should provide a clear and cohesive policy statement regarding the agency’s important role in the provision of aviation weather services. The statement should reflect the need for further definition of the capability and responsibility of controllers and pilots in the issue of separating aircraft from hazardous weather.

Recommendation 2: The policy statement and strategic plans should consider hazardous weather information as an aviation safety issue, as well as a capacity one.

Recommendation 3: The FAA should expeditiously improve aviation weather services rather than delay action while the federal government decides whether to establish an air traffic services corporation to provide some or all of the functions currently provided by the FAA.

Recommendation 4: The FAA should view meteorology as a significant component of every area of its responsibility in which weather could affect safety or efficiency.

Recommendations related to interagency coordination

Recommendation 5: The FAA and NWS should re-establish the practice of assigning high-level liaisons who are formally tasked with defining and coordinating aviation weather requirements for research, development, and operations between the FAA and NOAA/NWS.
Recommendation 6: The FAA and NOAA should ensure that aviation weather research and development are closely coupled to operational components of these agencies so that new concepts and new ideas can be swiftly integrated into ongoing operations.

Recommendations related to efforts to address user needs

Recommendation 7: The FAA should support a weather architecture which includes the appropriate elements and interfaces needed to disseminate critical weather information to ALL aviation users, supported by adequate funding and priorities.

Recommendation 8: Near-term efforts by the FAA and NWS to improve the effectiveness of aviation weather services should focus on the urgent, unmet needs of aviation weather users, which include the following:
—a comprehensive national training program to improve the practical meteorological skills of users and providers of aviation weather services;
—advanced weather products that are relevant, timely, accurate, and easy to comprehend (e.g., graphically displayed);
—ground-to-air communications and cockpit display systems for en route dissemination of advanced weather products; and
—weather observations and forecasts that offer improved temporal, geographic, and altitude-specific resolution.

Additionally, the panelists were asked to answer three more specific questions about FAA’s efforts to address each recommendation. They were the following: Have FAA’s actions been consistent with the intention of the recommendation? Have FAA’s actions been sufficient to address the recommendation? And has FAA made timely progress in implementing actions to respond to this recommendation? For each of these questions, the panelists were given the choice of five responses: 1. ___ Definitely no 2. ___ Possibly no 4. ___ Possibly yes 5. ___ Definitely yes The panelists’ responses are shown in table V.2.
Finally, after reviewing the ratings assigned to the recommendations, the panel was asked to rate FAA on its general progress in addressing all eight recommendations.

Pursuant to a congressional request, GAO examined the Federal Aviation Administration's (FAA) efforts to implement the weather-related recommendations made by the National Research Council (NRC) and FAA's advisory committee, focusing on: (1) policy and leadership; (2) interagency coordination; (3) meeting different types of users' needs for weather information; and (4) the level of funding provided for weather activities.
GAO noted that: (1) the panel of experts GAO convened concluded that FAA had made limited progress in implementing the weather-related recommendations made by NRC and FAA's advisory committee; (2) regarding the first area of concern, policy and leadership, the reports concluded that FAA is the agency best suited for leading federal aviation weather efforts but that it had not accepted that role; (3) the NRC report linked this criticism to the dispersal of responsibilities among several FAA organizations; (4) the reports also concluded that FAA did not have clear policy guidance to define its role in aviation weather activities; (5) since 1995, FAA has attempted to address these twin concerns by creating a new organization to direct aviation weather activities and by issuing a policy that states that FAA takes the responsibility for leading aviation weather activities; (6) GAO's expert panel concluded that because FAA has not yet produced a plan to implement the new policy, its actions did not go far enough to address the concerns that the report originally raised; (7) with regard to the second concern, interagency coordination, the reports questioned the adequacy of FAA's efforts to coordinate aviation weather activities with other federal agencies; (8) concerning the third area--FAA's efforts to meet the needs of all types of users--the reports concluded that FAA was not providing consistent information or adequate training; (9) as evidence that it is meeting the needs of all types of users, FAA cited a list of systems it is developing to provide weather information to various users and a list of the training courses it offers; (10) GAO's expert panel expressed continuing concerns about whether the equipment FAA listed would form an integrated system to serve all users; (11) panelists also raised concerns about the training offered by FAA, stating that better training could help reduce disparities in the abilities of air traffic controllers to interpret weather
information; (12) with respect to the amount of funding FAA has allocated for aviation weather activities, the reports raised questions about the low level of funding provided to weather-related projects compared with other activities; and (13) GAO's review of FAA's budget information for fiscal year (FY) 1990 through FY 1998 confirms that the agency has allocated less funding for aviation weather during this period than for most other acquisition and research priorities.
In passing the Employee Retirement Income Security Act of 1974 (ERISA), Congress sought to encourage individuals to save for retirement in tax-favored retirement arrangements. Traditionally, account owners participating in these arrangements defer taxes on contributions to these accounts up to certain statutory limits, and in general, contributions and investment earnings on those contributions are not taxed as income until the account owner withdraws them from the account. In addition, account owners typically direct their investments and make decisions regarding the purchase, sale, reinvestment, and withdrawal of investments. Individuals saving for retirement in the United States typically save through IRAs or an employer-sponsored plan, like a 401(k) plan. Both types of accounts must meet certain Internal Revenue Code (IRC) requirements to qualify for preferential tax treatment.

What Is a Solo 401(k) Plan? While employer-sponsored plans generally have multiple employees participating in them, a solo 401(k) plan—also called an individual 401(k), a one-participant plan, self-employed 401(k), or self-directed 401(k) plan—is a qualified retirement plan designed for the self-employed or a small business owner with no employees (beyond themselves and their spouse). The Economic Growth and Tax Relief Reconciliation Act of 2001 included reforms that generally provided small business owners with no employees and the self-employed the same advantages as a conventional 401(k) plan, such as employee deferrals, loan features, and Roth contributions. Solo 401(k) sponsors select plan investment options and may contribute to the plan both as plan sponsor and plan participant up to a combined $54,000 per account (or $60,000 if age 50 or older) in tax year 2017. In addition, plan sponsors may elect to maintain exclusive authority and discretion to manage and control plan assets without involving a third party.
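The combined contribution cap described in the sidebar can be expressed as a short calculation. This is an illustrative sketch using only the 2017 figures cited above; the function names are ours, and the sketch deliberately ignores the separate elective-deferral limit and the compensation-based cap on the employer share.

```python
def solo_401k_limit_2017(age: int) -> int:
    """Combined employee-plus-employer cap for a solo 401(k) in tax year 2017,
    per the figures cited in the report: $54,000, or $60,000 at age 50 or older."""
    return 60_000 if age >= 50 else 54_000

def allowed_contribution(age: int, employee_deferral: float,
                         employer_contribution: float) -> float:
    """Total contribution, truncated to the combined annual cap.
    Simplified: other statutory limits are not modeled here."""
    return min(employee_deferral + employer_contribution, solo_401k_limit_2017(age))
```

For example, a 45-year-old sponsor deferring $18,000 with a $40,000 employer contribution would be limited to the $54,000 combined cap.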
The most common IRA types are: (1) traditional IRAs (which allow eligible individuals to make tax-deductible contributions and accumulate tax-deferred investment earnings); and (2) Roth IRAs (which allow eligible individuals to make after-tax contributions and accumulate investment earnings tax-free). 401(k) plans have become the most common employer-sponsored retirement savings vehicle in the United States. Created by the Revenue Act of 1978, 401(k) plans typically allow participants to specify the size of their contributions and direct those contributions, as well as any made by their employer, to one or more investments among the options that the employer has preselected as offering effective diversification across broad asset classes. Investment options generally include mutual funds, company stock, and money market funds. DOL’s Employee Benefits Security Administration (EBSA) is responsible for, among other things, administering and enforcing the fiduciary, reporting, and disclosure provisions of Title I of ERISA. Self-employed individuals or owners of small businesses with no employees may sponsor a 401(k) plan, known as a solo 401(k) plan. To set up a solo 401(k) plan, individuals must adopt a written plan document, arrange a trust for the plan’s assets, develop a recordkeeping system, and provide plan information to employees eligible to participate. Solo 401(k) plan account owners can select the plan’s investment options as plan sponsor and invest in these options as the plan participant. (See sidebar for further description of solo 401(k) plans, the 401(k) plans we focus on in this report.) The owner of a tax-deferred account cannot keep retirement funds in the account indefinitely. When IRA owners or 401(k) plan participants reach age 70½, they generally have to start taking annual payments—known as required minimum distributions (RMD)—from their plan savings based on their account balance and life expectancy.
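The RMD computation just described divides the prior year-end account balance by a life-expectancy factor. A minimal sketch follows; the factors shown are an illustrative subset of the IRS Uniform Lifetime Table in effect for the period covered here (the full table appears in IRS Publication 590-B), and the function name is ours.

```python
# Illustrative subset of the IRS Uniform Lifetime Table
# (distribution period in years, keyed by the owner's age).
UNIFORM_LIFETIME_FACTOR = {70: 27.4, 71: 26.5, 72: 25.6}

def required_minimum_distribution(prior_year_end_balance: float, age: int) -> float:
    """RMD = prior year-end balance / life-expectancy factor for the owner's age."""
    return prior_year_end_balance / UNIFORM_LIFETIME_FACTOR[age]
```

Under these assumptions, a 70-year-old with a $100,000 prior year-end balance would owe a distribution of roughly $3,650 for the year.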
The distribution for the year in which a participant turns 70½ must be made no later than April 1 of the following calendar year and no later than December 31 for any other subsequent year. Account owners have wide latitude in the types of assets in which they can invest and custodians can choose to limit which type of assets they will allow. While some custodians generally limit investments to publicly traded assets, other custodians allow investments in a range of unconventional assets. (See table 1.) Under Title I of ERISA, fiduciaries of ERISA-covered plans, such as conventional 401(k) plans, must carry out their responsibilities prudently and solely in the interest of the account participants and beneficiaries. For ERISA-covered plans, a fiduciary includes a person who has discretionary control or authority over the management of an account, including management over the account’s assets, or anyone who, for a fee or other compensation, renders investment advice with respect to an account. Among other duties, fiduciaries have a responsibility to select and monitor investment options and service providers; report account information to the federal government and to participants; and ensure that the services provided to their account are necessary and that the cost of those services is reasonable. Many actions needed to operate a qualified retirement account involve fiduciary decisions, whether an individual or plan hires someone to manage the plan or manages the account themselves. However, unlike conventional 401(k) plans, solo 401(k) plans are generally not covered under Title I of ERISA, including the fiduciary standards in Part 4, or subject to DOL oversight because they are sponsored by employers that have no employees. The IRC generally requires plans (including certain IRAs and 401(k) plans) to hold plan assets in a trust fund maintained for the exclusive benefit of employees and their beneficiaries. 
The IRC establishes the requirements that a trust must satisfy in order to “qualify” for favorable tax treatment. In some cases, both the employer and employee may contribute to the trust. The assets are held in trust until distributed to the employees or their beneficiaries according to the plan’s provisions. Individuals who own a business or are self-employed can establish a 401(k) plan that allows unconventional assets, a solo 401(k) plan, either on their own or by consulting a professional or financial institution—such as a bank, mutual fund provider, or insurance company—to help them establish and maintain the plan. Because solo 401(k) plans generally do not have to comply with ERISA Title I, Part 4 fiduciary requirements, the individual, as plan sponsor, may also serve as the plan fiduciary and trustee. Individuals can establish an IRA with a bank or qualified firm that acts as a trustee or custodian of investments contributed by the individual or purchased with funds contributed by the individual. A traditional IRA is established after a custodial agreement (IRS Form 5305) is fully executed by both the grantor (account owner) and the trustee (custodian). IRS developed a model Form 5305 and permits custodians who use the form to incorporate additional provisions as long as the provisions are agreed to by the account owner and custodian, and comply with federal and state requirements. These additional provisions typically outline the parties’ roles and responsibilities. Custodians are not required to use the model form, and are instructed not to file it with IRS. IRA custodians have several reporting requirements with respect to IRA investments. IRA custodians are required to submit annually a Form 5498 IRA Contribution Information to IRS for each IRA account under custody.
As part of this reporting, IRA custodians must ensure that all IRA assets (including those not publicly traded) are valued annually at their fair market value (FMV), and IRS requires custodians to report the year-end aggregate FMV of all investments in an IRA. Starting with tax year 2015, IRS required custodians to report the aggregate FMV of investments falling in specified categories of unconventional assets and to identify the category in which the assets belonged. Custodians are also required to submit Form 1099-R Distributions From Pensions, Annuities, Retirement or Profit-Sharing Plans, IRAs, Insurance Contracts, etc., each year that an IRA withdrawal took place and detail the total distributions from the account during the calendar year. Form 1099-R also provides information about the IRA distributions, such as whether the distributions were taken before age 59½. A 401(k) plan’s annual reporting requirements are generally satisfied through filing a Form 5500 Annual Return/Report of Employee Benefit Plan (Form 5500) and its accompanying schedules. The Form 5500 is the primary source of information collected by the federal government regarding the operation, funding, expenses, and investments of employee benefit plans. The Form 5500 reporting requirements vary depending on the size and type of plan: plan sponsors required to submit the long form must fill out multiple schedules and attachments that collect information on particular aspects of the plan, such as the value and types of plan assets; plans allowed to complete the short form have fewer reporting requirements. IRA owners and 401(k) plans are not permitted to engage in certain prohibited transactions. Prohibited transactions generally fall into two categories: Involving disqualified persons. 
An IRA owner and 401(k) plan are prohibited from engaging in a transaction with a range of entities, including a fiduciary, a person providing services, or members of the IRA owner’s family, including a spouse, ancestor, or descendant. Involving self-dealing. An IRA owner and 401(k) plan fiduciary are prohibited from engaging in a transaction where the account owner or the fiduciary benefits from the asset prior to retirement. Unlike a 401(k) plan sponsor, who has an opportunity to voluntarily correct some prohibited transactions after they occur, IRA owners face adverse tax consequences when engaging in a prohibited transaction. Examples of a prohibited transaction in an IRA may include: (1) directing the IRA to purchase a vacation home as a rental property for personal use; (2) selling their own property to the IRA; and (3) taking a salary from an IRA-funded business. Specifically, if the IRA owner engages in a prohibited transaction, the IRA loses its tax-favored status as an IRA, and the account is treated as distributing all of its assets to the IRA owner at the FMV on the first day of the year in which the transaction occurred. The IRA owner may also be subject to a 10 percent additional tax on early distributions unless an exception applies. Earnings and profits made in tax-favored savings vehicles are generally reinvested in the account with taxes deferred until distribution. However, two circumstances can generate current tax liability for retirement account owners: Unrelated Business Taxable Income (UBTI): Unrelated business taxable income is gross income generated from an ongoing trade or business (less allowable deductions) that is not related to the exempt or tax-deferred entity, such as an IRA. An IRA or 401(k) plan that earns $1,000 or more of gross income from an unrelated business must file Form 990-T with IRS and pay related taxes. Unrelated Debt-Financed Income (UDFI): Unrelated debt-financed income is a form of UBTI. 
If an asset purchased by an IRA is debt- financed (e.g., a mortgage on a rental property), income produced by that asset could be subject to taxes. Multiple federal, state, and independent entities provide regulatory oversight for retirement accounts invested in unconventional assets, although agency jurisdiction varies depending on the type of provider, the state in which it conducts business, and type of plan offered. Table 2 provides a summary of the regulatory activities of some of the major entities involved in overseeing these accounts. To date, federal data collection efforts capture limited information on the unconventional asset holdings in IRAs and solo 401(k) plans, making their overall prevalence unknown. Historically, IRS has not collected FMV data specific to unconventional asset holdings in IRAs. For tax year 2015, IRS began requiring IRA custodians to report selected information on unconventional asset holdings in IRAs. IRS did not fund electronic compilation for the 2015 data but plans to electronically compile data for tax year 2016 that will be filed in 2017. As of November 2016, IRS has not provided a date on when the new IRA asset type data will be available for further analysis. While estimates have been reported in recent years regarding the aggregate investment in IRAs that may hold unconventional assets, we could not determine the validity of these estimates based on the sources of data used to support them. In addition, DOL collects no data on unconventional asset holdings in solo 401(k) plans because sponsors of solo 401(k) plans are generally not required to report their investment holdings. As a result, federal data collection on unconventional assets is incomplete and unspecific with respect to the types of assets held in these accounts. (See table 3.) 
Seventeen of the 26 custodians we identified who allow investment in unconventional assets reported holding an aggregate of more than 488,000 retirement accounts at the end of calendar year 2015. Custodians reported that owners of these retirement accounts invested in a range of unconventional asset types and identified real estate, private equity, and hedge funds as the most common asset types held in these accounts. In addition, they reported that account owners invested in Limited Liability Companies (LLC) and limited partnerships, precious metals, promissory notes, church bonds, and private placements. Custodians also reported that these accounts have an aggregate value of approximately $50 billion; however, we could not determine whether the accounts exclusively held unconventional assets or, if not, which portion was attributable to cash or publicly traded assets held in the accounts. IRAs made up the vast majority of accounts and assets reported, and solo 401(k) plans constitute less than 1 percent of reported accounts and assets. (See table 4.) The aggregate data that custodians provided help establish a baseline of retirement account investment in unconventional assets, but these data are not comprehensive in that they exclude several other potential sources of such investment. For example, industry representatives stated that other banks, trust companies, and financial service providers may accommodate investments in unconventional assets, such as commercial real estate, for their high net worth clients. Additionally, some individuals with 401(k) plans who invest in unconventional assets serve as plan trustee and would not need to retain the services of a custodian to process account transactions. Therefore, it is likely that the total number of accounts invested in unconventional assets would be higher. 
Transferring existing retirement savings to a new account or plan that allows investment in unconventional assets expands the roles and responsibilities of account owners for managing aspects of their accounts. Many individuals seeking to set up a new account may be accustomed to tax-favored status for their IRA or 401(k) plan, but their decision to invest in unconventional assets is accompanied by a range of responsibilities that may be new and unfamiliar. For example, all 20 custodial agreements we reviewed required individuals to agree to be responsible for directing their investments and for overseeing the selection, management, monitoring, and retention of all investments in the account. In addition, while account owners may seek the assistance of knowledgeable third parties, such as attorneys, accountants, tax advisors, and financial planners, the account owners bear the consequences of any mistakes made in managing their accounts. According to DOL, IRA and 401(k) plan investors often lack investment expertise and must rely on experts, but are unqualified to assess the quality of the expert’s advice or guard against its conflicts of interest. Moreover, DOL notes that many of these experts often receive fees (or other forms of compensation) that may introduce conflicts of interest between them and the plan officials, plan participants, and the IRA investors they advise. Selecting an Appropriate Account Type. Individuals seeking to save for retirement by investing in unconventional assets must first determine which type of retirement savings vehicle and investment aligns with their savings goals. Nine of 17 custodians reported that clients want to invest in unconventional assets for a variety of reasons, including avoiding the stock market, diversifying retirement portfolios, investing in a tangible or familiar asset, or investing in a company not yet publicly traded.
One custodian said that most individuals seeking investment in unconventional assets did not own their own businesses, and that it was more common for these individuals to establish an IRA with a custodian that allowed this kind of investment. Self-employed individuals or owners of small businesses who employ no other full-time employees may qualify to sponsor a solo 401(k) plan. Two service providers who advise their clients on creating solo 401(k) plans to invest in unconventional assets said that their clients preferred the 401(k) plan for its higher annual contribution limits and the ability to take participant loans. Establishing and Funding an Account. An individual must take several steps to establish and fund an account that allows investment in unconventional assets. First, individuals opening an IRA must find a custodian willing to administer the asset. Eighteen of the 21 service providers reported that in their experience most individuals wanting to set up an account had a specific investment or asset type in mind before making initial contact with them. Twelve of the 17 custodians reported that they placed some restriction on the types of unconventional assets that they would allow. For example, they reported that they did not allow investment in certain assets, including foreign-based assets, precious metals, person-to-person promissory notes, or single-member LLCs. One custodian noted that such restrictions were necessary because some assets were administratively infeasible for its business plan. For example, a custodian who does not specialize in foreign-based assets may not accept custody of these types of assets. Second, prospective IRA owners must sign a custodial agreement that outlines the respective roles and responsibilities of the account owner and custodian over the lifecycle of an account. 
Our analysis of agreements available on 20 custodians’ websites found that 18 custodians used IRS’s model custodial agreement to help IRAs conform to IRS requirements. The model agreement includes several articles that IRS has approved regarding contribution limits, prohibited assets, RMDs, and the treatment of beneficiaries, among other things. The agreement also provides custodians with an opportunity to amend the form with additional provisions as long as these provisions comply with applicable requirements of federal and state law. Third, the individual must authorize the custodian to fund the new account either through a new contribution, or through a transfer or rollover of funds from another qualified retirement account. Purchasing Assets. The choice to purchase unconventional assets rather than publicly traded stocks and bonds in a retirement account marks a significant shift in the balance in account management responsibilities toward the account owner. On the one hand, solo 401(k) account owners sponsor the plan and can serve as its trustee, allowing them to purchase any asset permissible in the plan documents and deposit the asset in their designated trustee account. On the other hand, IRA owners investing in unconventional assets must locate an asset, determine its suitability for their retirement goals, and conduct due diligence on the investment and the investment sponsor. In addition, to finalize the purchase, these IRA owners must collect, review, and prepare all purchase documents and provide them to the custodian to execute the purchase on behalf of the IRA. IRA owners and custodians must complete this sequence of tasks each time an asset is placed in the account. (See table 5.) The amount and type of documentation that account owners must provide to custodians before an asset can be purchased can vary considerably depending on the type of asset being purchased. 
Our analysis of custodian documentation found that assets like promissory notes or precious metals require minimal documentation from the account owner, while other assets, such as real estate or private equity, can require the account owner to provide considerable documentation before a purchase can be made. Depending on the asset type selected, an account owner may have to expend considerable time and effort to finalize an asset purchase. (See table 6.) Some IRA owners may choose to exercise greater control and limit a custodian’s direct involvement in the purchase of unconventional assets by adding a checkbook control feature to their IRA. In order to obtain checkbook control, the account owner must first establish an LLC that is owned by the IRA. Once this new LLC is established, a business checking account linked to IRA funds is set up, and account owners are named manager of the LLC with control of the checkbook. Using the checkbook owned by the IRA’s LLC, the account owner can take advantage of time-sensitive offers and purchase assets directly from investment sponsors without having to wait for a custodian to execute a purchase or sale. Account Management and Compliance. Account owners who invest in unconventional assets typically agree to become responsible for the day-to-day management of their accounts and ensuring that the account remains compliant with laws and regulations, according to our review of selected custodial agreements. As an IRA custodian can only act on the direction of the account owner, an account owner generally must inform the custodian of the many tasks needed to maintain assets in their account—such as purchases, sales, earned income, payments due, requested distributions, and changes in the account’s value—and provide sufficient documentation to the custodian to facilitate each account transaction on their behalf. (See table 7.)
The extent of custodian involvement in the ongoing management of an account can depend on the types of unconventional assets held in the account. For example, one custodian mentioned that promissory notes that promised a balloon payment of principal and interest at the end of a specified term generally required less recordkeeping from the custodian. The same custodian and another described the management of a real estate holding in an IRA as being a labor-intensive, manual process. In this case, a custodian would coordinate with the account owner to pay all expenses from IRA funds, such as maintenance, improvements, property taxes, condominium association fees, general bills, and insurance, and return any related income to the IRA. Account owners need to be mindful of fees and expenses associated with plan investments and services to determine whether they continue to be reasonable in light of the services provided. Given that retirement accounts must be held in trust, IRA custodians generally charge a range of administrative and transactional fees for the services they provide throughout the lifecycle of an IRA. While some transactions like precious metal storage lend themselves to a flat fee, other transactions like real estate purchases can involve multiple transactions requiring greater involvement by a custodian, leading to a higher incidence of fees that must be paid out of the account. In reviewing fee disclosures from custodians who made them publicly available on their websites, we found that custodians generally charged fees for similar services—account establishment, account maintenance, transactional fees, and account termination—but the type of fee structure used and amounts of fees charged varied among custodians. Custodians’ fee structures generally included a (1) flat fee; (2) a per-asset fee in an account; or (3) a fee based on a percentage of an account’s value. 
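The effect of these three fee structures on annual account cost can be compared with a small calculation. The specific dollar amounts and percentage rate below are hypothetical placeholders, not fees quoted by any custodian in our review.

```python
def annual_fee(structure: str, num_assets: int, account_value: float,
               flat: float = 300.0, per_asset: float = 95.0,
               pct_of_value: float = 0.0015) -> float:
    """Annual custodial fee under the three structures described in the report:
    a flat fee, a per-asset fee, or a percentage of account value.
    The default amounts are illustrative assumptions only."""
    if structure == "flat":
        return flat
    if structure == "per_asset":
        return per_asset * num_assets
    if structure == "percent_of_value":
        return pct_of_value * account_value
    raise ValueError(f"unknown fee structure: {structure}")
```

Note how the ranking of the structures flips with the account's profile: an account holding many small assets fares better under a flat fee, while a single large holding can be cheapest under a per-asset fee.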
Some fee structures consisted of multiple categories, such as a flat fee plus a per-asset fee. The fee structure used can affect annual account costs, as shown in table 8. Finally, one fee disclosure we reviewed reminded account owners of their responsibility to monitor their accounts, noting that unfunded accounts and accounts with zero value would continue to incur fees until the account owner provided written instruction to close the account. IRA owners with checkbook control can perform some tasks associated with maintaining the account without a custodian’s assistance and thus avoid some of the custodian’s administrative and transaction fees related to these services. For example, an IRA owner, as manager of the LLC owned by the IRA, can manage several administrative services associated with rental real estate in an IRA, such as paying property taxes, insurance premiums, and utility bills, and writing checks from the LLC bank account to cover property repairs and maintenance. However, whether using a custodian or not, the account owner, in signing a custodial agreement, agrees to be responsible for ensuring that transactions do not run afoul of the prohibited transaction rules and for determining whether transactions constitute contributions to or distributions from the IRA. Closing an Account and Distributing Assets. As with purchasing assets and maintaining an account on an ongoing basis, account owners investing in unconventional assets are responsible for overseeing the distribution of assets from their accounts, such as determining the amount of any RMD and directing custodians to release the funds. Whether seeking to liquidate the assets in an account, take physical possession of the assets, or transfer assets to another custodian, account owners must direct the custodian to facilitate the removal or transfer of assets from the IRA. Distribution of assets. Either the account owner or the custodian can initiate a distribution of account assets. 
Account owners can submit a distribution request to the custodian. The custodian follows the account owner’s directions to distribute account assets to the account owner either in cash or as an in-kind distribution. In the case of an in-kind distribution of assets, the custodian may ask the account owner to provide an updated FMV for the assets before completing the distribution. A custodian can also initiate a distribution of account assets for a variety of reasons, including engaging in a prohibited transaction or nonpayment of fees. The custodian must report distributions to IRS and the account owner on Form 1099-R. Custodian-to-custodian transfer. The account owner can establish a new account with another custodian and authorize a transfer of assets from the old account to the new account. The new custodian works with the account owner to re-title illiquid assets in the name of the new account and forwards the transfer request to the current custodian for execution. The original custodian confirms that all fees and expenses have been paid and illiquid assets are properly re-titled, and transfers the assets to the new custodian. Custodians are generally not required to report such transfers between IRAs to IRS or to account owners on Form 1099-R. Some custodial agreements contain provisions that a custodian may initiate a distribution of unconventional assets to the account owner (or transfer assets to another custodian) without an account owner’s consent. When this occurs, the custodian notifies an account owner of the custodian’s intent to resign and terminate an account, and generally gives the account owner 30 days to exercise their right either to request a lump sum distribution or to name a new custodian to initiate an in-kind transfer. If the account owner does not name a new custodian, the custodian can transfer the assets in-kind to another custodian of their choosing or distribute the account’s assets to the account owner.
If distributed to the account owner, the custodian must report the distribution to IRS and provide a Form 1099-R to the account owner. IRA owners who invest in unconventional assets take on a heightened risk of engaging in a prohibited transaction and losing tax-favored status for their retirement savings. IRS officials stated that prohibited transactions are the most prominent compliance risk associated with investing IRA savings in unconventional assets. Prohibited transactions are more likely to arise with investments in promissory notes, private equity, and real estate because—unlike publicly traded stocks, bonds, and mutual funds—these investments can involve disqualified family members or other disqualified persons. IRA investments in rental real estate, for example, with their many transactions, can leave IRA owners susceptible to a number of prohibited transactions, any one of which would result in the loss of the IRA’s tax-favored status, as shown in figure 1. Some custodians can serve as gatekeepers for obvious prohibited transactions, such as an IRA owner purchasing his own property for the IRA, though account owners must navigate the IRA tax laws and can face additional taxes for noncompliance. Account owners who invest in an LLC inside an IRA with a checkbook control feature, which limits custodial involvement in transactions, also take on a heightened risk of engaging in a prohibited transaction and losing their IRA’s tax-favored status. Checkbook control may offer IRA owners additional conveniences and reduce transaction fees charged by custodians, but these IRA owners must closely monitor each action for prohibited transactions.
For example, IRA owners with checkbook control may pay IRA expenses directly without submitting requests through a custodian; however, they must avoid paying IRA expenses with personal funds, such as writing a check from a personal account rather than from their IRA checkbook, or making payments to themselves or another disqualified person. Six of 17 custodians said that they did not offer IRAs with checkbook control due to the lack of custodian oversight for prohibited transactions, among other things. Earnings and profits made in tax-deferred savings vehicles generally get reinvested in the account without generating current federal tax liability, but investments in certain unconventional assets can generate ongoing tax liability for IRA owners. Certain investments can generate current tax liability from UBTI or UDFI earned in retirement accounts. Examples include using an IRA to invest in an active business or using debt to finance a portion of an asset’s purchase. (See fig. 2.) IRA custodians need to monitor IRA investments for UBTI and pay any applicable taxes from the IRA, but we found custodians often delegated the responsibility for monitoring for tax liability to the account owner. IRS requires custodians of IRAs subject to these requirements to file a Form 990-T for any UBTI of $1,000 or more. Most custodial agreements we reviewed required account owners to agree to monitor their IRA for business income; however, two service providers told us IRA owners familiar with investing in more conventional assets that do not generate business taxable income may not realize that the responsibility to monitor continues as long as the asset remains in the IRA. Having to pay taxes from the IRA can pose additional challenges for account owners who invest in illiquid assets that cannot easily be sold to pay applicable taxes.
First, illiquid unconventional assets, such as real estate, private equity, and promissory notes, may require account owners to find another investor to purchase their interest in the asset. Second, the account owner cannot pay the taxes with personal funds (a prohibited transaction) and must arrange to have a custodian pay the taxes from their IRA. Third, two service providers stated that IRA owners may not realize that once a retirement investment generates UBTI or UDFI, taxes must be estimated and paid quarterly if the tax is expected to be $500 or more, necessitating a certain level of liquidity to be maintained in the account. IRA owners who invest in unconventional assets may face challenges meeting their responsibilities to provide updated FMV information to their custodian to meet IRS’s annual FMV reporting requirement because some unconventional assets are inherently hard to value. Some unconventional assets, such as precious metals, have a readily available FMV; other assets, such as undeveloped land and private equity, may require IRA owners to obtain a third-party appraisal or rely on investment sponsors to provide the information. Many of the custodial agreements we reviewed made IRA owners responsible for obtaining and providing a year-end FMV of unconventional assets in their accounts to the custodian each year. However, some custodians contact the investment sponsor directly to obtain an updated FMV, and if unsuccessful, may report the last-known FMV or the original purchase price. As a result, the FMV reported to IRS may not reflect a nonpublicly traded asset’s current value. IRA owners who fail to provide updated FMV to their custodian within a specified time limit run the risk of their custodian distributing their assets from the IRA, which could lead to a loss of the account’s tax-favored status if the owner cannot identify a successor custodian willing to hold the assets. (See fig. 3.) 
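Stepping back to the UBTI payment mechanics above: the rules reduce to two dollar thresholds, a Form 990-T filing once an IRA's UBTI reaches $1,000, and quarterly estimated payments once the expected tax is $500 or more. The sketch below encodes that logic; the function name is ours, and the flat tax rate is an illustrative assumption, since the actual tax is computed under graduated rates with applicable deductions.

```python
# Two UBTI-related thresholds discussed in this report:
FORM_990T_FILING_THRESHOLD = 1_000   # UBTI at or above this requires a Form 990-T
QUARTERLY_ESTIMATE_THRESHOLD = 500   # expected tax at or above this is paid quarterly

def ubti_obligations(ubti: float, flat_rate: float = 0.35) -> dict:
    """Return the filing/payment obligations triggered by a given UBTI amount.

    The tax itself is simplified to a single flat rate for illustration; the
    real computation involves deductions and a graduated rate schedule.
    """
    must_file = ubti >= FORM_990T_FILING_THRESHOLD
    expected_tax = round(ubti * flat_rate, 2) if must_file else 0.0
    return {
        "must_file_990t": must_file,
        "expected_tax": expected_tax,
        "quarterly_estimates": expected_tax >= QUARTERLY_ESTIMATE_THRESHOLD,
    }
```

For example, under an assumed 30 percent rate, an IRA with $2,000 of UBTI would need to file Form 990-T and make quarterly estimated payments, since the $600 expected tax exceeds the $500 threshold, while $1,000 of UBTI would trigger the filing but not quarterly payments.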
Even though the last-known FMV that the custodian reports to IRS may not necessarily reflect the current estimated value of an asset, it can be treated as an asset’s true value and result in federal tax consequences for IRA owners. For example, the last-known FMV can be used to calculate federal tax liability for IRA owners: when calculating RMDs; when determining the distribution amount in the event the IRA engages in a prohibited transaction; and when reporting an in-kind distribution in the event a custodian resigns an account (e.g., IRA owner fails to pay custodial fees or to provide updated FMV). Distributions of assets at their last-known FMV can be especially damaging if the asset distributed is found to be valueless and/or an updated FMV is unattainable. Once a distribution is made from an account, the value of that distribution—even if based on a last-known FMV—becomes the value used to determine the account owner’s federal tax liability. IRS identifies noncompliance with IRA distribution rules through automated matching of custodians’ Form 5498 and Form 1099-R returns with taxpayers’ income tax returns. According to an IRS official, if a Form 1099-R incorrectly reported a distribution of a worthless asset as a distribution of a valuable asset, the individual receiving the incorrect distribution would be responsible for proving that the asset was worthless. The official further noted that, to remain compliant, the taxpayer could pay the tax due and subsequently file a claim for a refund, or report the depreciation of the distributed asset’s value on an annual income tax filing. Our review of consumer complaints found several examples of a custodian’s decision to report the last-known FMV creating tax consequences for account owners, as shown in table 9. 
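The role the last-known FMV plays in the RMD calculation can be made concrete with a short sketch. The arithmetic (prior year-end FMV divided by a life-expectancy factor from the IRS tables) is standard; the dollar amounts and the factor below are hypothetical, chosen only to show how a stale, overstated FMV inflates the amount an owner must withdraw and pay tax on.

```python
def required_minimum_distribution(year_end_fmv: float, life_expectancy_factor: float) -> float:
    """RMD = prior year-end fair market value / IRS life-expectancy factor."""
    return year_end_fmv / life_expectancy_factor

factor = 25.6  # hypothetical divisor; actual factors come from the IRS tables

# A custodian reporting a stale last-known FMV instead of the asset's current
# (lower) value overstates the RMD the owner must withdraw and pay tax on.
rmd_from_stale_fmv = required_minimum_distribution(200_000, factor)    # last-known FMV
rmd_from_current_fmv = required_minimum_distribution(120_000, factor)  # actual value
excess_taxable_withdrawal = rmd_from_stale_fmv - rmd_from_current_fmv
```

In this hypothetical, the stale $200,000 valuation produces an RMD of $7,812.50 instead of $4,687.50, so the owner withdraws and is taxed on roughly $3,125 more than the asset's true value warrants.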
Custodians provided information to account owners through custodial agreements and other forms on owner responsibilities for providing FMV information for unconventional assets and supporting documentation to the custodian. For example, 13 of the 20 custodial agreements we reviewed included custodian-specific policies informing account owners of their responsibilities to comply with IRS’s annual FMV reporting requirements. We found a range of policies, varying by asset type and custodian, for the FMV information that would be reported to the IRS and for when account owners needed to obtain independent appraisals to substantiate an asset’s FMV. For example, some of these agreements and other custodian forms: required IRA owners to obtain independent property appraisals for real estate investments at least once every 3 years; required IRA owners to obtain an annual comparative market analysis for real estate investments; collected FMV information directly from investment sponsors and other third parties for certain assets, such as limited liability companies, limited partnerships, and hedge funds; and reported the purchase price for other assets, such as promissory notes. However, some custodians also stated that requiring independent appraisals in years without a taxable event, such as a distribution, could be costly for account owners and discourage investments in certain types of assets. The difficulty with obtaining and verifying FMV of certain unconventional assets can expose account owners to fraud and allow losses in value to remain undetected in retirement accounts for some time, eroding IRA owners’ retirement savings. In 2011, SEC’s Office of Investor Education and Advocacy and the North American Securities Administrators Association (NASAA) issued a joint investor alert warning investors that money held in accounts that allow investments in unconventional assets presented attractive targets for promoters of fraudulent schemes.
The alert included multiple examples of SEC and state enforcement cases that involved fraudulent schemes associated with IRAs that allow investments in unconventional assets. SEC officials said that even though the prevalence of accounts holding unconventional assets cannot be readily determined, the potential for fraud in these accounts remained high and that concrete examples of fraud involving these accounts have been, and continue to be, well documented. Three state securities administrators said complaints from account owners were often filed years after an account had lost value because these owners believed the periodic account statements sent by the custodian were correct. For example, if custodians, and in turn account owners, rely on the purchase price of a promissory note as an annual FMV, they may not realize that a borrower has filed for bankruptcy and will be unable to pay back the note. Similarly, according to the three state securities administrators, investment sponsors may report incorrect FMV information to custodians to perpetrate fraud. Finally, some account owners appeared to mistake the custodian’s responsibility to report annual FMV for a form of verification, leaving these account owners open to holding potentially fraudulent assets in their accounts. (See table 10.) Account owners may also face challenges when trying to liquidate certain unconventional assets to distribute as retirement income. Unlike publicly traded assets, which can be purchased and sold with relative ease in retirement accounts, account owners may experience difficulty finding a secondary market in which to sell certain unconventional assets. For example, account owners trying to liquidate a private equity investment must find other investors willing to purchase the asset. (See table 11.)
Account owners who have difficulty liquidating unconventional assets may instead be required to accept in-kind distributions rather than cash to comply with the minimum distribution requirements. For example, an account owner invested in real estate may need to request that a custodian distribute a percentage of the property equivalent to their calculated annual RMD. In such a case, that individual would own an illiquid portion of the property personally while the account would own the rest. The account owner in this scenario would not be able to receive a cash distribution to use as retirement income, yet would be responsible for paying applicable income taxes on the in-kind distribution. Current IRS guidance provides little information to help IRA owners understand their expanded responsibilities and potential challenges associated with investing in unconventional assets. For example, in our review of complaint data, some account owners appeared to misunderstand their responsibilities documented in the custodial agreements, and expected their custodian to provide due diligence, monitor investments, or compensate them for investment losses. (See table 12.) Federal internal control standards require agencies to communicate effectively with external stakeholders to help achieve agency goals. IRS’s Taxpayer Bill of Rights states that taxpayers have the right to know what they need to do to comply with the tax laws, and are entitled to clear explanations of the laws and IRS procedures in all tax forms, instructions, publications, notices, and correspondence. In addition, IRS’s strategic plan states that IRS guidance should help taxpayers understand their tax responsibilities through targeted outreach, communications, and education. We previously recommended that IRS outreach and education explicitly target the risk of noncompliance for IRA owners investing in unconventional assets. 
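Returning to the in-kind RMD scenario described earlier in this section, in which a custodian distributes a fractional interest in a property because no cash is available: the outcome comes down to simple proportions. The figures below are hypothetical, and the variable names are ours.

```python
def in_kind_fraction(rmd_amount: float, property_fmv: float) -> float:
    """Fraction of a property to distribute in kind to satisfy the annual RMD."""
    return rmd_amount / property_fmv

property_fmv = 250_000.0  # hypothetical rental property held in the IRA
rmd = 10_000.0            # hypothetical annual required minimum distribution

share_distributed = in_kind_fraction(rmd, property_fmv)  # fraction of the property
owner_share_pct = share_distributed * 100                # held personally afterward
ira_share_pct = 100 - owner_share_pct                    # remains in the IRA
taxable_distribution_value = rmd                         # income tax owed on this value
```

In this hypothetical, the owner ends up personally holding an illiquid 4 percent interest in the property (the IRA keeps 96 percent), receives no cash to spend as retirement income, and still owes income tax on the $10,000 in-kind distribution.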
We found three areas, in particular, where IRS guidance lacks specific information for account owners investing in unconventional assets: Prohibited transactions. IRS officials said that engaging in prohibited transactions is the most prominent compliance risk associated with investing IRA savings in unconventional assets, but IRS has not yet compiled data to help provide targeted outreach to IRA owners who invest in unconventional assets. In 2014, we recommended that IRS: (1) add an explicit caution in Publication 590 for taxpayers about the potential risk of engaging in a prohibited transaction when investing in unconventional assets; and (2) identify options to provide targeted outreach to taxpayers with unconventional IRA assets and their custodians to help them avoid losing their IRA tax-favored status by engaging in a prohibited transaction. In 2015, IRS added an explicit caution to IRA owners in Publications 590-A and 590-B about the heightened risk of engaging in a prohibited transaction, but has not compiled the data to help provide targeted outreach. IRS said it could refine its outreach to those taxpayers with nonpublic IRA assets using the new asset type data once compiled electronically. Unless IRS augments outreach based on reliable data about unconventional IRA investments, these taxpayers at greater risk may not be able to ensure compliance with rules on prohibited transactions. Four service providers told us that additional guidance would help IRA owners investing in unconventional assets understand the risks for prohibited transactions. Unrelated business taxable income. IRA owners unfamiliar with monitoring their investments for UBTI have limited guidance to assist them in identifying and calculating their tax liability. IRS Publications 590-A and 590-B, which together serve as a general IRA handbook for IRA owners, make no mention of UBTI or its tax consequences for IRAs. 
IRS Publication 598, Tax on Unrelated Business Income of Exempt Organizations, lists IRAs as one of many exempt entities subject to UBTI, and provides detailed examples of how to calculate UBTI and file tax forms. However, the publication does not provide specific examples of how IRA investments in unconventional assets can generate UBTI. An IRS official said that the IRAs subject to UBTI represented a small percentage of 990-T filers, and that neither the frequency nor consequences of noncompliance met the threshold for including specific IRA-related content in the IRA and UBTI guidance. However, without an explicit caution or guidance from IRS, account owners may inadvertently invest in assets that carry additional monitoring and filing obligations; failure to meet these obligations may subject account owners to underpayment penalties. Fair market value. IRS currently provides no guidance or advice to custodians or IRA owners regarding how to determine the FMV for unconventional assets held in retirement plans. An IRS official stated that FMV is a commonly understood term and that many professional appraisal companies could provide an unbiased value for any IRA asset. However, 9 of the 17 custodians reported challenges obtaining FMV information, such as from nonresponsive account owners or investment sponsors, or in obtaining adequate supporting documentation. Further, securities administrators in one state said that guidance could be improved to indicate to custodians the type of information that they should rely on for substantiating FMV. In addition, three service providers said that account owners needed clearer guidance on FMV reporting, such as the types of information that would fulfill the requirements.
The inclusion of language from the IRS model form that a custodial agreement has been “pre-approved by the IRS” and the ability for custodians to amend the IRS model form may mislead account owners into thinking that IRS has reviewed and approved the entirety of the document they sign. Fifteen of the 20 custodial agreements we reviewed retained language from the IRS model form stating that the agreement had been “pre-approved by the IRS,” but an IRS official said that the agency does not receive, review, or approve custodial agreements. In fact, the model form instructs IRA owners and custodians not to file the form with IRS. In addition, 16 of the agreements resembled an IRS form, which may further lead account owners to believe that the provisions contained within the form had been verified by IRS. For example, these agreements included the IRS form number, an IRC section reference, and the IRS 2002 revision date for the model form at the top of the first page. Lastly, we found that all 20 of the custodians amended the form to include multiple provisions designed to protect the custodian. For example, some of these added provisions include: performing no due diligence on the investment or the investment sponsor; relieving the custodian of responsibility for sharing known information on troubled assets with account owners; reserving the right to liquidate IRA assets of the custodian’s choosing; and employing and paying agents, attorneys, and accountants with IRA funds for any purpose deemed necessary. The amended form may cause confusion on the part of IRA owners. For example, IRA owners may not be able to differentiate between the model IRS language and terms and conditions added by custodians. IRA owners may also read the “pre-approved by the IRS” statement and assume that the IRS has vetted the provisions added by custodians.
With few restrictions on the types of assets that can be held in an IRA or a solo 401(k) plan, individuals can invest their retirement savings in an increasingly diverse range of unconventional assets—some of which may not have been imagined as retirement plan assets when the first IRAs or solo 401(k) plans came on the market. Unconventional assets, such as virtual currency and unsecured promissory notes, when placed in retirement accounts introduce different kinds of risk to account owners, and can have potential federal tax implications. These risks are particularly high for older workers nearing retirement who transfer savings accumulated over the course of their careers to establish new retirement accounts that allow investments in certain assets that they may not fully understand and that may expand their responsibilities. IRS has published detailed guidance to assist account owners with directing their retirement investments, but this guidance has not kept pace with changes in investment options. As a result, account owners who take on responsibility for accounts invested in unconventional assets must rely on guidance better suited for investments in more conventional assets—publicly traded stocks, bonds, and mutual funds—which are often professionally managed. As account owners consider investing in unconventional assets, they should have a clear understanding of what is required to manage such investments in a retirement account, ensure proper valuation and tax reporting, and navigate a complex set of rules that govern tax-favored retirement investments. The consequences for account owners who make a mistake can be severe and jeopardize a lifetime of retirement savings.
To assist IRA owners in addressing challenges associated with investing their retirement savings in unconventional assets, we recommend that the Commissioner of Internal Revenue take the following three actions: Provide guidance to IRA owners on the potential for IRA transactions involving certain unconventional assets to generate unrelated business taxable income subject to taxation in the current tax year and subsequent years. For example, IRS could consider adding an explicit caution in Publication 590, Individual Retirement Arrangements (IRAs), and include a link in Publication 590 to Publication 598, Tax on Unrelated Business Income of Exempt Organizations, to provide examples demonstrating how certain unconventional assets in IRAs can generate unrelated business income tax for account owners. Provide guidance to IRA owners and custodians on how to determine and document fair market value (FMV) for certain categories of hard-to-value unconventional assets. For example, IRS could consider updating Form 5498 instructions to custodians on how to document FMV for hard-to-value assets (e.g., last-known FMV based on independent appraisal, acquisition price) and provide guidance directed at account owners that provides examples of how to ascertain FMV for different types of unconventional assets. Clarify the content of the model custodial agreement to distinguish what has been reviewed and approved by IRS and what has not. For example, IRS could consider: (1) restricting custodians from stating that the form has been “preapproved by the IRS” on the form; (2) adding language to specify which articles have been preapproved by the IRS and which have not; and (3) limiting custodians from adding provisions to the model form other than those preapproved by the IRS.
We provided a draft of this report to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Secretary of Labor, the Chairman of the Securities and Exchange Commission, and the Chairman of the Federal Deposit Insurance Corporation for comment. In its written comments, reproduced in appendix III, IRS generally concurred with our findings and recommendations. In addition, each agency provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Internal Revenue Service, the Department of the Treasury, the Department of Labor, the Securities and Exchange Commission, the Federal Deposit Insurance Corporation, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Charles A. Jeszeck at (202) 512-7215 or James R. McTigue, Jr. at (202) 512-9110. You may also reach us by e-mail at jeszeckc@gao.gov or mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix IV. The objectives for this study were to determine: (1) what is known about the prevalence of retirement accounts that invest in unconventional assets; (2) how these accounts are managed, and; (3) what challenges, if any, are associated with the administration of these types of retirement accounts. Each of the engagement’s three researchable objectives required us to identify various arrangements for investing individual retirement account (IRA) and 401(k) plan savings in unconventional assets. 
For all objectives, we reviewed pertinent federal laws, regulations, and guidance on the use of retirement account savings, as well as recent federal and state enforcement actions against entities involved with these types of retirement accounts. In addition, we searched for relevant scholarly and peer-reviewed materials, trade and industry articles, government reports, conference papers, research publications, and working papers. We also interviewed a range of federal and state regulators, industry stakeholders, and participant advocates. To determine the prevalence of retirement accounts that invest in unconventional assets and better understand how these accounts are managed, we conducted multiple structured data collection efforts from custodians and service providers that allow investment in unconventional assets. To identify challenges associated with administering retirement accounts that invest in unconventional assets, we analyzed investor complaint data from three federal agencies and two independent organizations. We also used the multiple structured data collection efforts from custodians and service providers to obtain their perspectives on potential challenges for account owners and providers. In addition, we reviewed publicly available custodian and service provider documentation, including application forms, fee disclosures, and custodial trust account agreements, which are used to establish a traditional IRA and specify account owners’ roles and responsibilities. All data collected through these methods are nongeneralizable and do not necessarily reflect the experience of the entire population of custodians and service providers that hold retirement accounts invested in unconventional assets, and account owners who invest in unconventional assets. However, we believe the insights gained through these methods produced valuable information to better understand the extent and form of retirement account investment in unconventional assets.
To identify custodians and service providers of retirement accounts who allow retirement account owners to invest in unconventional assets, we conducted a literature search, reviewed industry websites, interviewed industry stakeholders, and reviewed the Internal Revenue Service’s (IRS) list of nonbank custodians. Through this process we identified 26 custodians and 48 service providers with experience working with account owners in establishing and managing such accounts. We determined that these data were sufficiently reliable for the purpose of this report. Next, to learn about the approximate number and value as well as types of retirement accounts held under custody at the end of calendar year 2015, we conducted a survey data collection effort from the 26 custodians that we had identified from available sources. We obtained responses from 17 custodians who varied in terms of the number and value of accounts they held, and the types of unconventional assets they allowed. We focused our effort on custodians as they are responsible under the Internal Revenue Code (IRC) for reporting the fair market value of all IRAs under custody to IRS. This removed the potential for double counting accounts and assets reported by both a service provider and the custodian they use to house clients’ account assets. Through this effort, we also collected information on the types of unconventional assets account owners most commonly invest in. We confirmed the accuracy of account and asset data reported by 10 custodians who agreed to participate in a second structured data collection effort. To examine how these retirement accounts are managed and get a sense of the challenges they present to account owners and providers, we conducted additional structured data collection efforts. We developed a follow-up set of questions for the same 17 custodians and received responses from 10 of them. 
We also developed a similar set of questions for the 48 service providers we identified who provide administrative support for IRAs and solo 401(k) plans, collecting information from 21 of them. Through these efforts, we collected information on the steps needed to establish a retirement account for the purpose of investing in unconventional assets, as well as the roles and responsibilities of various stakeholders in managing and administering accounts. We also collected providers’ perspectives on the reasons why some account owners choose to invest IRA and 401(k) plan savings in these types of assets and the challenges that account owners and providers may face in doing so. We determined that these data were sufficiently reliable for the purpose of this report. To augment our understanding of challenges associated with managing retirement accounts invested in unconventional assets, we analyzed 334 investor complaints obtained from three federal entities—the Department of Labor (DOL), the Office of the Comptroller of the Currency (OCC), and the Securities and Exchange Commission (SEC)—and two independent organizations—the Financial Industry Regulatory Authority (FINRA) and the Better Business Bureau. Our list of custodians and service providers was used to query databases for relevant complaints. We analyzed the complaints to develop general categories of challenges. We then conducted a two-step review and arbitration process to ensure proper categorization of each of the 211 complaints that we determined were directly associated with retirement accounts invested in unconventional assets based on available information. We did not confirm the authenticity of these complaints or investigate any of the allegations made. In addition, we did not collect information about all of their resolutions. We determined that the data we collected were sufficiently reliable for the purposes of this report.
To assess the clarity of information new IRA owners receive about their roles and responsibilities from custodians who allow investment in unconventional assets, we collected and reviewed 20 publicly available custodians’ individual retirement custodial account agreements, which IRS requires custodians and account owners to complete to establish a traditional IRA. We analyzed the content of these agreements and compared their format with the format of the IRS model Form 5305-A. In addition to the contacts named above, Christopher Falcone, Emily Gruenwald, David Lehrer, Jonathan S. McMurray, Jessica Moscovitch, Thomas Moscovitch, and MaryLynn Sergent made key contributions to this report. James Bennett, Amy Bowser, Caitlin Croake, Sheila McCoy, Jean McSween, Jessica Orr, Walter Vance, Kathleen Van Gelder, and Adam Wendel also provided support.

Federal law places few restrictions on the types of investments allowable in tax-favored retirement accounts, such as IRAs or employer-sponsored 401(k) plans. Recent federal and state investigations and litigation have raised questions as to whether investing in unconventional assets may jeopardize the accounts’ tax-favored status, placing account owners’ retirement security at risk. GAO was asked to examine issues related to the potential risks and responsibilities associated with investments in unconventional assets. GAO examined: (1) what is known about the prevalence of accounts that invest in unconventional assets; (2) how these accounts are managed; and (3) what challenges are associated with administering these retirement accounts. GAO reviewed relevant federal laws, regulations, and guidance; analyzed data collected from the retirement industry; analyzed available industry documents; and reviewed 334 related consumer complaints collected from three federal agencies and two independent entities.
Federal data collection efforts to date have captured little information on retirement accounts holding unconventional assets—such as real estate, precious metals, private equity, and virtual currency—making the prevalence of such accounts unknown. In tax year 2015, the Internal Revenue Service (IRS) began requiring that custodians or trustees of individual retirement accounts (IRA)—including banks or other institutions approved to hold account assets—report selected information on unconventional assets in their clients’ accounts to IRS. As of November 2016, IRS plans to begin compiling the new IRA asset data in 2017, but has not specified when the new IRA asset data will be available for analysis. Seventeen of the 26 custodians, whom GAO identified as allowing retirement accounts with unconventional assets and who participated in GAO’s data collection effort, reported having nearly half a million of these accounts in their custody at the end of calendar year 2015. IRAs made up the vast majority of accounts and assets reported. An IRA owner’s decision to invest in unconventional assets can expand their role and responsibilities substantially. GAO’s review of industry documents found that individuals wanting to invest in unconventional assets through their IRA generally agree to be responsible for overseeing the selection, management, and monitoring of account investments and shoulder the consequences of most decisions affecting their accounts. In effect, such account owners take on a fiduciary-like role over their retirement investments. Current IRS guidance provides little information to help IRA owners understand their expanded responsibilities and potential challenges associated with investing in unconventional assets.
Targeted IRS guidance for these IRA owners may help them navigate the potential compliance challenges associated with certain types of unconventional assets. For example, GAO found that some IRA owners can experience challenges in the following areas:

Monitoring for ongoing federal tax liability: IRA owners are not always aware of the need to monitor the gross income from certain unconventional assets in their accounts for ongoing federal tax liability. For example, IRA owners who invest in active businesses or debt-financed properties need to monitor their accounts for ongoing tax liability that must be paid from the IRA. Failure to do so can result in underpayment penalties.

Obtaining annual fair market valuations for nonpublicly traded assets: IRA owners investing in hard-to-value unconventional assets can face challenges meeting their responsibilities to provide updated fair market value information to their custodian to meet IRS's annual reporting requirement. Failure to provide an updated fair market value in a timely manner can result in a custodian prematurely distributing account assets to the owner at a fair market value that is not current and potentially incorrect, which could lead to a loss of tax-favored status for their retirement savings.

GAO is making three recommendations to the Commissioner of Internal Revenue to, among other things, improve guidance for account owners with unconventional assets on monitoring for ongoing federal tax liability and to clarify how to determine the fair market value of hard-to-value unconventional assets. IRS generally agreed with these recommendations.
According to AOC, the entire base project is about 60 percent complete. Except for some punch-list items, such as fixing water leaks, construction work under the sequence 1 contract is now complete. This work includes the basic structure, the truck and Library of Congress tunnels, and the East Front interface. AOC and its contractors also completed work associated with the Inauguration. Work has started on the sequence 2 contract, including fitting out and finishing the basic structure and the Library of Congress tunnel and constructing the utility tunnel and space for the exhibits. AOC has just made contractual arrangements for fitting out and finishing the Senate and House expansion spaces and is now procuring the House Connector tunnel and the connection between the Library of Congress tunnel and the Jefferson building. AOC’s scheduled completion date for CVC is now September 2006, nearly 20 months later than originally planned. We believe, given past problems and future risks and uncertainties, that the completion date may be delayed until sometime between December 2006 and March 2007. Additionally, AOC’s scheduled completion date for the interior of the House and Senate expansion spaces is March 2007. The project’s schedule delays are due in part to scope changes, design changes, and unforeseen conditions beyond AOC’s control (e.g., adding the Senate and House expansion spaces and encountering underground obstructions). However, factors more within AOC’s control also contributed to the delays. First, the original schedule was overly optimistic. Second, AOC has had difficulty obtaining acceptable, contractually required schedules from its contractors, such as a master summary schedule from its construction management contractor. In addition, AOC and its contractors did not adhere to contract provisions designed for effective schedule management, including those calling for monthly progress review meetings and schedule updates and revisions. 
AOC and its construction management contractor also had difficulty coordinating the work of the sequence 1 and 2 contractors and did not systematically track and document delays and their causes as they occurred or apportion time and costs to the appropriate parties on a timely basis. Additionally, AOC has not yet reached full agreement with CPC on the extent to which construction must be completed before the facility can be opened to the public, and AOC has not yet developed an overall summary schedule that links the completion of construction with the steps necessary to prepare CVC for facility operations. Finally, AOC needs to fully implement our recommendation that it develop plans to mitigate the project’s remaining risks and uncertainties, such as shortages in the supply of stone or skilled stone workers, unforeseen conditions associated with the remaining underground tunnels, and commissioning the building in the allotted time. We have made numerous recommendations to improve schedule management, and AOC has taken actions to implement most of them. We believe, however, that both AOC and its construction management contractor will need to sustain their attention and apply additional effort to managing the project’s schedule, as well as fully implement our recommendations, to help keep the project on track and as close to budget as possible. 
More specifically, AOC needs to give priority attention to: obtaining and maintaining acceptable project schedules, including reassessing the times allotted for completing sequence 2 work; aggressively monitoring and managing contractors’ adherence to the schedule, including documenting and addressing the causes of delays; developing and implementing risk mitigation plans; reaching agreement on what project elements must be complete before CVC can open to the public; and preparing a summary schedule, as Congress requested, that integrates the major steps needed to complete CVC construction with the steps necessary to prepare for operations. AOC is relying on contractors to design, build, and help manage CVC’s construction and help prepare for its operation. AOC has obligated over $350 million for contracts and contract modifications for these activities. We found that AOC needed to take additional steps to ensure that it was (1) receiving reasonable prices for proposed contract modifications, (2) obtaining adequate support for contractors’ requests for reimbursement of incurred costs, (3) adequately overseeing its contractors’ performance, and (4) taking appropriate steps to see that contractual work is not done before it is appropriately authorized under contractual arrangements. Initially, AOC was not preparing independent government estimates as part of its price analyses for proposed modifications to the two major contracts. In early 2004, AOC hired an employee for the CVC staff with contract management experience, and AOC has improved its capacity to obtain reasonable prices by, among other things, preparing government estimates as part of its effort to evaluate the reasonableness of prices offered by the contractors for the proposed modifications. 
Although most CVC work is being done under fixed price contracts, for which payment is not based on incurred costs, AOC has received or is anticipating requests for reimbursement of over $30 million in costs that the contractors say they incurred because of delays. In addition, AOC has awarded some contract modifications for unpriced work that will require reliable information on incurred costs. According to the Defense Contract Audit Agency, several concerns relating to the contractors’ accounting systems need to be addressed to ensure the reliability of the contractors’ incurred cost information. AOC has continued to experience difficulty getting fully acceptable performance from contractors. For example, as of April 30, 2005, the construction management contractor had not provided an acceptable master schedule identifying appropriate links between tasks and key milestones, and it has not been providing AOC with accurate safety data for an extended period of time. Similarly, one of AOC’s major construction contractors had not corrected recurring safety concerns over an extended period. One of AOC’s CVC consultants began work several months before AOC had awarded a contract to it authorizing the work. AOC agreed to take action to prevent this type of problem from recurring. We have made several recommendations to enhance AOC’s contract management. AOC has generally agreed and taken action to implement these recommendations. For example, it has enhanced its capacity to review cost-related data submitted by contractors with requests for reimbursement based on incurred costs, and it has better evaluated its construction management contractor’s performance and taken action to obtain improvements. 
To help prevent further schedule delays and control cost growth, AOC needs to aggressively manage its contractors’ performance, particularly in the areas of managing schedules and obtaining reasonable prices on contractual actions, and continue to ensure that contractors’ requests for payment based on incurred costs are adequately evaluated. It also needs to ensure that its contractors report accurate safety data and promptly act to correct safety concerns. We currently estimate that the cost to complete the construction of the CVC project, including proposed additions to its scope, is about $522 million without any allowance for risks and uncertainties. Of this amount, $483.7 million has been provided to date. In November 2004, we estimated that the cost to complete the scope of work approved at that time was likely to be about $515 million, without an allowance for risks and uncertainties. Since November 2004, AOC and the U.S. Capitol Police have proposed about $7 million in scope changes that we included in our current estimate, bringing it to $522 million. However, the project continues to face risks and uncertainties, such as unforeseen conditions, scope gaps and changes, and possible further delays. To provide for these, we estimated in November 2004 that an additional $44 million would likely be needed, bringing our estimate of the total cost to about $559 million. We continue to believe that this estimate of the project’s total costs is appropriate. We have not increased our allowance for risks and uncertainties in response to the recent requests for $7 million in scope changes because we consider such changes among the risks and uncertainties that the project faced in November. Over the years, CVC construction costs have increased considerably. Most of these costs were outside or largely outside AOC’s control, but other costs were more within its control. 
About $147 million of the cost increase was due to changes in the project’s scope, many of which were for security enhancements following September 11 and the anthrax attacks in October 2001. Congress added the House and Senate expansion spaces and the Library of Congress tunnel to the project’s scope after the original project’s cost was estimated; similarly, the Department of Defense recommended and funded an air filtration system for the facility. Other factors also outside or largely outside AOC’s control contributed about $45 million to the increase. For example, bid prices for the sequence 1 and 2 contracts exceeded budgeted costs, and unforeseen field conditions, such as underground obstructions, necessitated additional work. Finally, factors more within AOC’s control accounted for about $58 million of the expected additional project costs. For example, the project experienced significant delays during sequence 1, and we expect AOC will incur additional costs in the future because we believe the sequence 2 work will not be done by AOC’s September 2006 completion date; slow decision-making by AOC also contributed to higher costs. In its fiscal year 2006 budget request, AOC asked Congress for an additional $36.9 million for CVC construction. AOC believes this amount will be sufficient to complete the project’s construction and, if approved, will bring the total funding provided for the project to $520.6 million. AOC’s request includes the $4.2 million for potential additions to the project’s scope (e.g., congressional seals, an orientation film, and backpack storage space), but does not include $1.7 million for the air filtration system—an amount that AOC thought it would not need and returned to DOD, but that we believe AOC will still likely need. AOC believes that it could obtain these funds from DOD if needed.
Thus, with a $1.7 million increase for the air filtration system, the total estimated cost to complete the project’s construction would be the $522.3 million cited above without provision for risks and uncertainties. To continue to move the project forward, Congress will have to consider the additional funding AOC has requested for fiscal year 2006 to complete the project, including the $4.2 million in additional scope items. Through effective risk mitigation, as we have recommended, and effective implementation of our other recommendations for enhancing schedule and contract management, AOC may be able to avoid some of the $44 million that we allowed for risks and uncertainties. However, given the project’s complexity and the additional requests for funds already made and anticipated, we believe AOC will likely need much of this $44 million even with effective implementation of our recommendations. Already, it appears that AOC may need additional funds for sequence 2 changes in fiscal year 2005. For example, as of April 30, 2005, AOC had identified proposed changes to the sequence 2 contract that it considered necessary and expected to cost about $13.8 million. This sum is about $700,000 less than the $14.5 million AOC has available during fiscal year 2005 for sequence 2 changes. Because the number of construction workers at the CVC site is soon expected to increase significantly, worker safety will continue to be an important issue during the remainder of the project. Our review of worker safety issues found that the construction management contractor’s monthly CVC progress reports contained some inaccurate data for key measures of worker safety, including injuries and illnesses and lost time. For example, the contractor reported 3 lost-time incidents for 2004, but our analysis identified 45 such incidents. These inaccuracies resulted in both overstatements and understatements of rates. 
For instance, the contractor reported a rate of 6.3 injuries and illnesses for April 2004, whereas our analysis identified 12.5. The construction management contractor attributed the inaccuracies to key data missing from its calculations, unawareness of a formula change that began in 2002, mathematical errors, and poor communication with the major construction contractors. According to our analysis, the rates for injuries and illnesses and for lost time were higher for CVC than for comparable construction sites. For 2003, the injury and illness rate was about 50 percent higher, and the lost-time rate was about 160 percent higher. Additionally, both the numbers and the rates for injuries and illnesses and for lost time worsened from 2003 to 2004. For example, the injury and illness rate increased from 9.1 in 2003 to 12.2 in 2004, and the lost-time rate increased from 8.1 to 10.4. AOC and its contractors have taken some actions to promote and manage safety on the site, such as conducting monthly safety audits and making recommendations to improve safety. However, at the time of our review, neither AOC nor its construction management contractor had analyzed the results of the monthly safety audits to identify trends or concerns, and neither had reviewed the safety audit findings in conjunction with the injury and illness data. Our analysis of key safety audit data for the first 10 months of 2004 identified about 700 safety concerns, the most frequent of which was inadequate protection against falls. Furthermore, AOC had not fully exercised its authority to have the contractors take corrective actions to address recurring safety concerns.
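The year-over-year worsening in the injury and illness and lost-time rates cited above reduces to simple percent increases. A minimal sketch of the arithmetic (the rates are taken from the figures above; the report does not state their units, which does not affect the relative change):

```python
def pct_increase(old, new):
    """Percent increase from an old rate to a new rate."""
    return 100 * (new - old) / old

# Injury and illness rate: 9.1 (2003) -> 12.2 (2004)
# Lost-time rate: 8.1 (2003) -> 10.4 (2004)
print(round(pct_increase(9.1, 12.2)))  # roughly 34 percent
print(round(pct_increase(8.1, 10.4)))  # roughly 28 percent
```

Both increases are on the order of 30 percent, consistent with the worsening described above.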
We recommended that, to improve safety and reporting, AOC ensure the collection and reporting of accurate injury and illness and lost-time data, work with its contractors to develop a mechanism for analyzing the data and identifying corrective actions, and more fully exercise its authority to take appropriate enforcement actions when warranted. AOC agreed with our recommendations and initiated corrective actions. However, follow-up work that we did in early 2005 at AOC’s request indicated the corrective actions had not yet fully eliminated errors in reporting. AOC agreed that continued action on our recommendations was essential. Both AOC and its construction management contractor prepare monthly progress reports on CVC. AOC relies heavily on its contractor for the information it puts into its own reports, which it sends to Congress. We have found that AOC’s reports have sometimes failed to identify problems, such as cost increases and schedule delays. This has resulted in certain “expectation gaps” within Congress. We have suggested to AOC that its reports could be more helpful to Congress if, for example, they discussed critical issues facing the project and important upcoming decisions. AOC has been making improvements to its monthly reports and has agreed to continue doing so. Mr. Chairman, this completes my prepared statement. We would be happy to answer questions that you and other Subcommittee Members may have. For further information about this testimony, please contact Bernard Ungar at (202) 512-4232 or Terrell Dorn at (202) 512-6923. Other key contributors to this testimony include Shirley Abel, Timothy DiNapoli, Brett Fallavollita, Jeanette Franzel, Jackie Hamilton, Bradley James, David Merrill, Scott Riback, Susan Tindall, and Kris Trueblood. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Approved in the late 1990s, the Capitol Visitor Center (CVC) is the largest project on the Capitol grounds in over 140 years. Its purposes are to provide greater security for all persons working in or visiting the U.S. Capitol and to enhance the educational experience of visitors who have come to learn about Congress and the Capitol building. When completed, this three-story, underground facility, located on the east side of the Capitol, is designed to be a seamless addition to the Capitol complex that does not detract from the appearance of the Capitol or its historic landscaping. According to current plans, it will include theaters, an auditorium, exhibit space, a service tunnel for truck loading and deliveries, storage, and additional space for use by the House and Senate. This testimony discusses the Architect of the Capitol's (AOC) management of the project's schedules and contracts; the project's estimated costs, including risks and uncertainties; worker safety issues; and AOC's monthly reporting to Congress on the project. This testimony also discusses recommendations that we have made in previous testimony and briefings and the actions AOC has taken in response. In summary, the CVC project is taking about 2 years longer than planned and is expected to cost between about $522 million and $559 million—significantly more than originally estimated. The majority of delays and cost increases were largely outside AOC's control, but weaknesses in AOC's schedule and contract management contributed to a portion of the delays and cost overruns.
Of the project's estimated cost increase, about $147 million is due to scope changes, such as the addition of the House and Senate expansion spaces; about $45 million to other factors also outside or largely outside AOC's control, such as higher than expected bid prices on the sequence 2 contract; and about $58 million to factors more within AOC's control, such as delays. Also, our analysis of CVC worker safety data showed that the injury and illness rate for 2003 was about 50 percent higher for CVC than for comparable construction sites and that the rate for 2004 was about 30 percent higher than the rate for 2003. Finally, a number of AOC's monthly reports to Congress have not accurately reflected the status of the project's construction schedules and costs and have transmitted inaccurate worker safety data. This has led to certain "expectation gaps" within Congress. AOC has taken a number of actions to improve its management of the project; however, these actions have not yet fully corrected all identified problems. To help prevent further schedule delays, control cost growth, and enhance worker safety, AOC urgently needs to give priority attention to managing the project's construction schedules and contracts, including those contract provisions that address worker safety. These actions are imperative if further cost growth, schedule delays, and worker safety problems are to be avoided. AOC also needs to see that it reports accurate information to Congress on the project. Furthermore, decisions by Congress will have to be made regarding the additional funding needed to complete construction and address any risks and uncertainties that arise.
Our objective was to assess IRS’ performance during the 1997 filing season, with particular emphasis on those areas that were identified as problematic in our reviews of past filing seasons. To achieve our objective, we interviewed IRS National Office officials and IRS officials in the Atlanta, Cincinnati, Fresno, and Kansas City service centers responsible for the various activities we assessed; analyzed filing season related data from various management information systems, including IRS’ Management Information System for Top Level Executives; analyzed IRS data relating to its telephone assistance and conducted a test of IRS’ telephone accessibility during the last 2 weeks of the filing season (see app. I for information on our test methodology); analyzed IRS data on alternative filing methods, including IRS surveys of TeleFile users and nonusers; visited two lockbox banks, one in Atlanta and one in St. Louis, to review remittance processing procedures; interviewed staff from FMS, which is responsible for negotiating and administering lockbox contracts, about the use of lockboxes to process Form 1040 tax payments and analyzed cost/benefit data related to lockbox processing; interviewed officials from IRS’ Taxpayer Advocate’s Office about the impact of various filing season activities on taxpayers; analyzed activity data for IRS’ Internet World-Wide Web site and forms distribution centers; and reviewed relevant IRS internal audit reports. We did our work from January through October 1997 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from the Deputy Commissioner of Internal Revenue. Those comments are discussed at the end of this letter and are reprinted in appendix II. IRS uses various indicators to measure its filing season performance. 
Those indicators relate to workload, like the number of returns processed; timeliness, like the number of days needed to process and issue refunds; and quality, like the accuracy of IRS’ answers to taxpayers’ questions and the accuracy with which IRS processes individual income tax returns and refunds. As shown in table 1, those indicators show that IRS met or exceeded most of its performance goals for the 1997 filing season. During each filing season, millions of taxpayers call IRS with questions about the tax law, their refunds, or their accounts. The number of callers who get through to an IRS assistor is an important indicator of filing season performance. According to IRS data, as shown in table 2, telephone accessibility, as we have defined it in the past, increased substantially during the 1997 filing season. Results of our independent test also pointed to an improvement in accessibility. Despite the improvement, however, accessibility is still low. To check whether accessibility had increased, we conducted an independent test to measure taxpayer access to IRS’ telephone system from March 31 through April 15, 1997. Our results, compared with the results of a similar test we conducted in 1995, showed that accessibility had improved. For example, during the 1997 test, we had to make 584 calls to gain access to a live assistor 211 times—a 36-percent accessibility rate. That was a significant improvement over 1995, when we had to make 1,655 calls to gain access 98 times—a 6-percent accessibility rate. Also, of the 584 calls placed during the 1997 test, 288 resulted in busy signals—a 49-percent busy rate. That compares favorably with a 92-percent busy rate during the 1995 test. Our test methodology and detailed results are described in appendix I. Despite the significant increase in 1997, telephone accessibility is still too low. The National Commission on Restructuring IRS made that point in its June 25, 1997, report. 
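The accessibility and busy rates reported above are straightforward ratios of call outcomes to total call attempts. A minimal sketch of the arithmetic, using the figures from the 1995 and 1997 tests described above:

```python
def call_rates(attempts, answered, busy):
    """Return (accessibility, busy) rates as percentages of call attempts."""
    return 100 * answered / attempts, 100 * busy / attempts

# 1997 test: 584 call attempts, 211 reached a live assistor, 288 got busy signals
access_97, busy_97 = call_rates(584, 211, 288)
# 1995 test: 1,655 call attempts, 98 reached a live assistor
access_95, _ = call_rates(1655, 98, 0)

print(round(access_97))  # about 36 percent
print(round(busy_97))    # about 49 percent
print(round(access_95))  # about 6 percent
```

The same ratios underlie the comparison with the 92-percent busy rate observed in the 1995 test.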
After noting how accessibility had improved to 51 percent, the Commission noted that “the level of access continues to be unacceptable and inferior to service performance in private sector service organizations.” As table 2 showed, the increase in IRS’ telephone accessibility between the 1996 and 1997 filing seasons was due to a combination of more calls being answered and fewer calls coming in (i.e., “call attempts”). Two primary reasons for the increase in the number of calls answered were (1) a revision to IRS’ procedures for handling calls involving complex tax issues and (2) more staff assigned to answer the telephone, some of whom were detailed from other IRS functions. IRS’ decision to detail staff from other functions resulted in some opportunity costs because these staff were not available to perform their normal duties, such as auditing tax returns. In an effort to increase the number of calls answered, IRS conducted a study to analyze the subject and length of taxpayer telephone calls. According to IRS, the study showed that several areas of complicated tax law involved 20- to 30-minute telephone conversations and that an assistor could answer about 5 simpler calls within the same amount of time. Thus, for the 1997 filing season, IRS revised its procedures so that callers with questions in certain complex tax areas were automatically connected to a voice messaging system. Those callers were asked to leave their name, telephone number, and the best time for IRS to call back, and they were told that someone would be calling back within 2 working days. According to IRS, it received 619,310 calls to the voice messaging system during the filing season and contacted 451,051 taxpayers in response to those calls. IRS said that there are several reasons why it may not have responded to a message. 
For example, the message may have been garbled, thus preventing IRS from deciphering the caller’s telephone number; callers may have failed to include an area code; or IRS attempts to contact the caller may have gone unanswered. To help return calls to the messaging system, IRS detailed staff from the Examination function, which is the IRS organization primarily responsible for auditing tax returns. IRS data show that staff who were detailed from the Examination function spent about 125 full-time-equivalent staff years returning calls received by the messaging system. A cognizant official in the Examination function estimated that the use of Examination staff to answer taxpayer questions resulted in about $55 million in foregone revenue because those staff were not available to audit returns. We did not assess the validity of that estimate. Another factor that contributed to the increase in the number of calls answered was IRS’ decision to assign more staff to answer the telephone. Nationwide, according to data provided by IRS’ Customer Service function, IRS dedicated 2,546 full-time-equivalent staff years to answer taxpayers’ telephone calls between January 1 and April 30, 1997. This was an increase of 605 staff years over the 1,941 staff years dedicated during the same period in 1996. In addition, some field offices, including the three service centers we visited, temporarily detailed staff to help answer the telephone, some of whom came from functions other than Customer Service. According to IRS, some of these staff were used only as needed, while others were detailed for a few months. The increase in the number of calls answered contributed to the decrease in the number of call attempts. As IRS improves its ability to answer the telephone, taxpayers should encounter fewer busy signals. Fewer busy signals reduce the need for taxpayers to redial, which reduces the number of call attempts. 
In that regard, IRS’ telephone data showed that the number of busy signals dropped from 86.0 million during the 1996 filing season to 22.7 million during the 1997 filing season and that the average number of call attempts per taxpayer dropped from 2.5 during the 1996 filing season to 1.4 during the 1997 filing season. IRS cited two other contributors to the decrease in call attempts—the elimination of certain notices and the availability of information through other IRS sources, such as the Internet. Before the 1997 filing season began, IRS eliminated 23 notices that it deemed unnecessary, which, in turn, reduced the need for persons to call IRS with questions about these notices. IRS estimated that its action eliminated the issuance of about 7.5 million notices, but IRS could not estimate how many calls might have been eliminated because every notice does not necessarily generate a telephone call to IRS. IRS has a World-Wide Web site on the Internet that was first available during the 1996 filing season. The Web site provides, among other things, some interactive applications that answer tax questions, IRS regulations with “plain English” summaries, answers to the most frequently asked tax questions, and tax forms. IRS data showed a significant growth in the use of IRS’ Web site in 1997. For example, taxpayers accessed the Web site about 117 million times between January 1 and April 20, 1997, compared with about 102 million accesses throughout 1996, and taxpayers downloaded about 6.3 million files during the 1997 filing season compared with about 2.4 million files for the same period in 1996. As noted earlier, the data in table 2 reflect our traditional way of measuring telephone accessibility. Over the last few years, IRS has used another indicator, which it calls “level of access,” to measure its performance in providing telephone assistance to taxpayers. 
IRS defines level of access as the number of calls answered divided by the number of callers (i.e., the number of taxpayers seeking assistance). Because IRS’ indicator is based on the number of callers, it shows a higher level of performance than does our indicator, which is based on the number of call attempts. Nonetheless, IRS’ indicator, like ours, showed a significant improvement in performance during the 1997 filing season. IRS reported its level of access as 71 percent through April 19, 1997, compared with 51 percent during a comparable period in 1996. We have been working with IRS to establish one mutually agreeable measure of telephone accessibility. As a result, we have reached agreement on a measure to be used in future filing seasons. That measure defines accessibility as the number of calls that get into IRS’ automatic call distribution system, including those that are answered and those that are abandoned by the caller before getting assistance, divided by the total number of call attempts, which would consist of calls answered, calls that are abandoned, and calls that receive a busy signal. As part of that measure, IRS agreed to show, for the calls that got into the automatic call distribution system, how many were answered and how many were abandoned by the caller before receiving assistance. Using IRS data as of April 19, 1997, the new measure shows that taxpayers calling IRS were able to gain access 64 percent of the time (39.8 million calls that got into IRS’ automatic call distribution system divided by 62.4 million call attempts). Of the 39.8 million calls that got into IRS’ system, 31.8 million (80 percent) were answered and 8 million (20 percent) were abandoned by the caller before getting assistance. As of October 31, 1997, IRS had received 120.9 million individual income tax returns, an increase of 1.8 percent compared to the 118.8 million received at the same time last year. 
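The mutually agreed telephone accessibility measure described above likewise reduces to simple ratios. A minimal sketch using the IRS figures as of April 19, 1997 (39.8 million calls into the automatic call distribution system out of 62.4 million attempts, of which 31.8 million were answered and 8 million abandoned):

```python
# Figures in millions, from IRS data as of April 19, 1997
calls_in_system = 39.8   # answered + abandoned before assistance
call_attempts = 62.4     # answered + abandoned + busy signals
answered = 31.8
abandoned = 8.0

accessibility = 100 * calls_in_system / call_attempts   # the new agreed measure
answered_share = 100 * answered / calls_in_system
abandoned_share = 100 * abandoned / calls_in_system

print(round(accessibility))    # about 64 percent
print(round(answered_share))   # about 80 percent
print(round(abandoned_share))  # about 20 percent
```

Because the denominator counts every attempt, including those that met busy signals, this measure falls between our traditional call-attempt measure and IRS's caller-based level-of-access measure.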
Although the increase in the overall number of returns filed was small, the increase in the number filed through alternative methods was significant—about 25 percent higher than last year. IRS offers three alternatives—electronic filing, TeleFile, and 1040PC—to the filing of traditional paper returns. Among other benefits, returns filed through these alternatives involve fewer errors and are presumed to be less costly for IRS to process. As shown in table 3, of the three alternatives, TeleFile had the largest percentage change, by far, in 1997. Three changes to TeleFile in 1997 most likely contributed to the large increase in filings: (1) the eligibility criteria were expanded to include certain married persons filing joint returns, (2) persons using TeleFile could request that any refund be directly deposited to their bank account, and (3) IRS changed the tax package sent to eligible TeleFile users in an attempt to encourage their use of the system. IRS data show that about 191,000 TeleFile returns were filed jointly by married couples, thus accounting for about 10 percent of the growth in 1997. The amount of growth due to the other two changes could not be quantified. IRS’ decision to change the TeleFile tax package was the subject of some disagreement within IRS. In past years, IRS sent taxpayers who appeared eligible to use TeleFile a package that included not only TeleFile materials but also a Form 1040EZ and related instructions. Thus, taxpayers who could not or did not want to use TeleFile had the materials they needed to file on paper, assuming they were still eligible to file a Form 1040EZ. For the 1997 filing season, IRS eliminated the Form 1040EZ and related instructions from the package sent to taxpayers who were apparently eligible to use TeleFile—hoping that more taxpayers would be inclined to use TeleFile if they received only the TeleFile materials. Officials from the Taxpayer Advocate’s Office said that they did not agree with IRS’ decision.
They said that they were originally led to believe that IRS would be sending the revised TeleFile package only to persons who had used TeleFile in 1996 and to a sample of other taxpayers. By the time they learned that IRS was going to send the package to all apparently eligible TeleFile users, it was too late to effect a change. According to the officials, their concern centered on the extra burden the revised package would impose on taxpayers who wanted a Form 1040EZ, as well as the extra costs IRS might incur in filling additional mail and telephone orders for Form 1040EZ from those taxpayers. Internal Audit expressed similar reservations in communications with IRS management before the start of the filing season. Management responded by saying that (1) their intent was to increase the use of TeleFile, which would actually reduce taxpayer burden for those who used it and (2) they expected few of the affected taxpayers to contact IRS’ form distribution centers for copies of Form 1040EZ. Officials from the Taxpayer Advocate’s Office told us that they did not receive many complaints from taxpayers and found no evidence that the number of taxpayer orders for Form 1040EZ was significantly higher than in past years. Nonetheless, they said that they continue to be concerned about this procedure, which IRS has indicated will remain unchanged for the 1998 filing season. For the 1997 filing season, IRS sent about 26 million TeleFile tax packages to taxpayers who, based on the tax returns they filed in 1996, would be eligible to use TeleFile in 1997. After allowing for the fact that some of those taxpayers might no longer be eligible to use TeleFile because they no longer met the qualifying criteria, IRS estimated that about 15.6 million of the taxpayers would be eligible to use TeleFile in 1997. As of October 31, 1997, about 4.7 million taxpayers had filed their returns using TeleFile (about 30 percent of the number IRS estimated to be eligible). 
Assuming the validity of IRS’ estimate of eligible users, about 10.9 million of those taxpayers chose not to use TeleFile in 1997. IRS conducted three TeleFile surveys in 1997—one electronic and one written survey of users and one written survey of nonusers—that shed some light on taxpayers’ reactions to the revised tax package and the reasons why more people did not use TeleFile. Results of the electronic user survey showed that 30.3 percent of the TeleFile users in 1997 were repeat customers, while the rest were using it for the first time. The results also show that 84.5 percent of the users were able to complete their filing with one call to IRS, and 98.8 percent would use TeleFile again. When questioned about the new tax package, 85.2 percent of the respondents said that the package “encouraged” them to use TeleFile, 2.9 percent said that it “frustrated” them, and 2.5 percent said that it forced them to use TeleFile. Results of the written user survey showed that 88 percent of the users were very satisfied with TeleFile and another 10 percent were somewhat satisfied. Also, 97 percent of the users said they would use TeleFile again if they could, and about 96 percent said that they were very satisfied or somewhat satisfied with the new TeleFile tax booklet. Results from the nonuser survey are critical, in our opinion, if IRS is to identify and effectively deal with barriers that are preventing eligible taxpayers from using TeleFile. In past surveys, IRS learned that most nonusers preferred filing on paper. But IRS did not solicit more specific information on the reasons for that preference. In our report on the 1996 filing season, we recommended that IRS conduct a survey of nonusers during the 1997 filing season that included some specific questions on why they prefer to file on paper. 
The questionnaire IRS used for the nonuser survey in 1997 solicited more specific data on why taxpayers did not use TeleFile, which, we believe, make the results more useful than earlier surveys. From a list of several potential reasons provided on the questionnaire, respondents were asked to identify the main reason they did not use TeleFile. The main reasons they cited were as follows:
- they filed a Form 1040 or 1040A, which made them ineligible to use TeleFile (25 percent);
- they did not receive the TeleFile tax package (17 percent);
- they used a tax preparer or accountant (15 percent);
- they got help from a friend or family member in filing their return (15 percent); and
- they preferred a paper copy of the return for their records (12 percent).
In response to questions about the revised TeleFile package and its impact, 12 percent of the nonusers said that the package caused a great deal of inconvenience. Mindful of the Taxpayer Advocate’s concern that the absence of a Form 1040EZ in the revised package might significantly increase the number of mail and telephone orders for that form, IRS also asked nonusers who prepared their own returns where or how they got the tax form. Only 2 percent said that they called or wrote IRS. The vast majority (about 82 percent) said that they got the form from a post office, library, or bank. The remaining respondents (about 16 percent) mentioned other methods, such as visiting an IRS walk-in site or downloading a form from IRS’ Internet World-Wide Web site. IRS plans few changes to TeleFile for the 1998 filing season. For example, the TeleFile package for 1998 will again not include a Form 1040EZ. However, one change that might eventually make TeleFile more attractive to taxpayers is a pilot program with Indiana and Kentucky that will allow TeleFile users to submit their state returns at the same time they file their federal return.
In that regard, responses to the TeleFile nonuser survey showed that about 44 percent of the nonusers might be encouraged to use TeleFile if they could also use it to file their state tax returns. Electronic filing began as a pilot test in 1986, and the number of individual income tax returns filed electronically continued to grow each year until a drop in 1995. IRS attributed that drop to the various steps it took to deal with refund fraud. As shown in figure 1, electronic filing recovered somewhat in 1996 and continued to grow in 1997, establishing a new high of about 14.5 million returns as of October 31, 1997. One impediment to even more growth in electronic filing is the fact that the method is not completely paperless. Taxpayers must still send IRS their W-2s and a signature document (Form 8453) after their returns have been electronically transmitted. IRS must then manually input these data and match them to the electronic returns. In an attempt to eliminate the paper associated with electronic returns, IRS began testing the use of digitized signatures at three locations during the 1996 filing season. IRS planned to expand the test to seven locations in 1997. The seven locations included three private tax return preparation offices and four sites that were part of IRS’ Volunteer Income Tax Assistance Program. Because of some technical problems with the software, however, IRS delayed its distribution to the seven test sites until April 1, 1997. Because one of the four volunteer sites prepared very few returns after April 1, it did not participate in the test. The test consisted of preparers offering eligible taxpayers the option of signing with a stylus “pen” on an electronic signature pad in place of signing a Form 8453. The electronic signature would then be attached to the taxpayer’s electronic return and both would be transmitted to IRS. From April 1 through April 17, 1997, the test generated 435 returns that were submitted with electronic signatures. 
IRS did not collect information on the number of taxpayers who were offered the chance to participate in the test but declined. According to IRS, the six participating locations provided feedback that was overwhelmingly positive, including the reduced cost or burden from not having to store the Forms 8453 and not having to pay someone to batch, mail, and track the forms. IRS plans to conduct the test again in 1998 at the same seven locations. An important change for the 1997 filing season involved IRS’ implementation of new procedures for handling returns filed with missing or incorrect SSNs. The amount of revenue protected as a result of these new procedures greatly exceeded the amount protected under the previous procedures. Correct SSNs help ensure that taxpayers are entitled to the credits and dependency exemptions they claim. While missing or incorrect SSNs are often the result of honest taxpayer errors, they have also been linked to fraudulent attempts to reduce tax liabilities and obtain refunds and/or Earned Income Credits. Accordingly, over the last few years, IRS has become more vigilant in checking SSNs. During the last few filing seasons before 1997, when IRS identified a missing or incorrect SSN, it was to delay the taxpayer’s refund and correspond with the taxpayer to resolve the issue. This procedure often required multiple rounds of correspondence and took months to resolve. As we reported in 1996, IRS did not have enough resources to pursue all of the cases involving missing or incorrect SSNs and ended up releasing many of the refunds associated with those cases. IRS’ SSN error procedures changed in 1997 as a result of a provision in the Welfare Reform Act of 1996. That provision authorized IRS to treat missing or incorrect SSNs as math errors, similar to the way it has historically handled computational mistakes. Under the new procedures, if IRS identifies a missing or incorrect SSN while processing a return, it can immediately adjust the return.
For example, if a taxpayer claims one dependent and the child care credit, but lists an incorrect SSN for the dependent, IRS is to increase the taxable income by the personal exemption amount claimed for the dependent and not allow the child care credit. IRS then is to adjust the taxpayer’s tax liability and reduce the taxpayer’s refund, if any. The taxpayer is to receive a notice explaining the change to his or her tax liability and/or refund. The standard notice IRS used in 1997 provided a special toll-free telephone number that taxpayers could call if they wanted to discuss IRS’ changes and/or provide corrected information to support their claims. Taxpayers could also write to IRS to resolve the issue. If taxpayers do not respond to IRS’ notice, there is to be no further correspondence unless they fail to pay any additional tax that was assessed as a result of IRS’ change. In planning for this new procedure, IRS estimated that it would send about 2.4 million notices to affected taxpayers in 1997 and that those notices would generate about 1.68 million responses (telephone calls or letters) from taxpayers. As of September 1, 1997, IRS had sent about 2.2 million notices, which generated about 876,000 calls and letters. IRS said that based on those responses, it subsequently allowed some of the claims it had originally disallowed. As of September 1, after netting out adjustments made in response to taxpayers’ calls and letters, IRS reported that it had protected about $1.46 billion in revenue (i.e., claimed refunds or credits not paid and additional taxes assessed). That is about 150 percent more than the amount of revenue IRS reported as having been protected as a result of the procedures used in 1996. That year, according to IRS, it sent out about 629,000 notices that resulted in the protection of about $590 million. We asked officials in the Taxpayer Advocate’s Office whether the new SSN error procedures posed any problems for IRS and/or taxpayers. 
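The "about 150 percent more" comparison can be checked directly from the two reported totals (a quick sketch of our own, not an IRS calculation):

```python
# Revenue reported as protected: $1.46 billion under the new math-error
# procedures (as of September 1, 1997) versus $590 million in 1996.
protected_1997 = 1.46e9
protected_1996 = 590e6

increase = (protected_1997 - protected_1996) / protected_1996
print(f"{increase:.0%} more than 1996")  # about 147%, i.e., roughly 150 percent
```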
They said that they did have concerns about the procedures, which they voiced to IRS management before the start of the filing season. They were concerned, for example, that (1) the procedures may lead to an unmanageable workload for IRS and (2) the notices were not clear. According to the officials, as a result of their input, some changes were made before the filing season began. They also told us that they had not received a significant number of complaints from taxpayers nor had there been an increase in the number of problem resolution program cases or hardship requests for refunds. Even though significant problems did not arise, officials of the Taxpayer Advocate’s Office believed that some additional changes are needed. For example, they said that some individuals have problems obtaining an SSN or some other taxpayer identification number either because of religious affiliations or questionable alien status. The officials also believed the notices should be revised to provide the taxpayer with specific information about the error. In an effort to improve remittance processing and deposit tax receipts more promptly, IRS has been using lockboxes to process tax payments, including the payments associated with individual income tax returns (Forms 1040). IRS and FMS assume that the use of lockboxes is beneficial to the government because, in general, banks can get the payment processed and the money deposited to a Treasury account quicker than service centers can. This means that Treasury would not have to borrow as much to pay government obligations, thereby avoiding interest charges. In our report on the 1996 tax filing season, we expressed our concern about the way IRS was using lockboxes for Form 1040 payments.
Our concern then was not with the processing of the payments but with IRS’ decision to have taxpayers send their tax returns along with their tax payments to the lockboxes and to have the banks sort those returns before shipping them to IRS service centers for processing. Information we received from FMS, which has been paying the lockbox fees, and IRS indicated that having banks sort and ship tax returns increased the cost of the lockbox service by about $4.7 million during the first 8 months of 1996. For example, FMS said that it paid the banks an average of 92 cents per return to sort the 7 million returns received during those 8 months—a function that, according to IRS, service centers performed at an average cost of 37 cents per return. FMS also paid the banks 13 cents per return to ship the tax returns to IRS for processing. Our concern about the Form 1040 lockbox program has intensified since last year. We are no longer concerned only about having lockboxes receive and sort tax returns but about the use of lockboxes to process the Form 1040 payments themselves. Information we obtained this year called into question a key assumption used to calculate the interest cost avoidance figures that IRS and FMS have cited to support the use of lockboxes to process those payments. For the last several years, IRS has been testing the use of lockboxes to process Form 1040 remittances. Those test results and various studies done for IRS led to the decision to have certain taxpayers send their returns and tax payments to lockboxes. Under the current procedure, many taxpayers receive a tax package with one envelope and two differently colored mailing labels. If their return involves a payment, they are to use one label that directs their return and payment to a lockbox. If their return does not involve a payment, they are to use the other label that directs their return to an IRS service center. 
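The roughly $4.7 million figure follows from the per-return fees cited above. A back-of-the-envelope reconstruction (ours, using only the numbers in the text):

```python
# Per-return costs for the first 8 months of 1996, as cited by FMS and IRS.
bank_sort_fee = 0.92        # paid to banks to sort each return
bank_ship_fee = 0.13        # paid to banks to ship each return to IRS
service_center_cost = 0.37  # IRS's own average cost to sort a return
returns_sorted = 7_000_000

extra_cost = (bank_sort_fee + bank_ship_fee - service_center_cost) * returns_sorted
print(f"${extra_cost / 1e6:.1f} million")  # close to the reported $4.7 million
```

This reconstruction yields about $4.8 million; the small difference from the reported $4.7 million presumably reflects rounding in the per-return averages.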
In explaining the decision to have persons who were making payments send their returns along with their payments to a lockbox, IRS officials responsible for the lockbox program said that they believed an increase in taxpayer burden would result if taxpayers were required to separate their payments from their returns and mail each to a different address. They cited the results of taxpayer surveys done in 1993 and 1994, which, they said, showed that taxpayers preferred to keep their payment and return together. IRS interpreted this preference as an indicator that asking taxpayers to separate their return from their payment would impose a burden. We reviewed the taxpayer surveys and considered the results to be inconclusive as they relate to burden—45.9 percent of the taxpayers surveyed said that they felt uneasy about mailing their checks and returns in separate envelopes while 41.2 percent said that they did not feel uneasy (the other 12.9 percent did not know). Even for those respondents who said they felt uneasy, it was unclear whether they considered the use of separate envelopes an unreasonable burden when weighed against the extra cost to the government associated with sending returns to lockboxes. We realized, in preparing our report on the 1996 filing season, that it was too late to do anything to change IRS’ lockbox plans for the 1997 filing season. Thus, we recommended that IRS take action that would be effective for filing seasons after 1997. Specifically, we recommended that if the government was unable to negotiate lockbox fees that were more comparable to service center costs and in the absence of more compelling data on taxpayer burden, IRS should either discontinue having returns sorted by the banks or reconsider the decision to have taxpayers send their tax returns to the lockboxes along with their tax payments. As noted earlier, the combined fee paid banks for sorting and shipping tax returns dropped only slightly from 1996. 
And, as discussed below, IRS still does not have conclusive data on taxpayer burden. In May 1997, an IRS/FMS task force that had been formed to identify a solution to this issue for 1998 and beyond recommended that IRS have taxpayers separate their returns from their payments—mailing the former to an IRS service center and the latter to a lockbox. While recognizing the extra burden on taxpayers (e.g., the extra postage associated with mailing two envelopes and possible confusion over which envelope to use), the task force said that such a procedure would minimize lockbox costs and would enable the banks to deposit remittances faster because they would no longer have to handle tax returns. Despite the task force’s recommendation, IRS decided that lockboxes would continue to receive and sort tax returns in 1998. The IRS official responsible for the lockbox program told us that IRS continues to believe that an increase in taxpayer burden would result if taxpayers were required to separate their payments from their returns and mail each to a different address, a view shared by representatives from the Taxpayer Advocate’s Office. To support its position, IRS cited the results of several focus groups that became available after the task force had completed its work. IRS held 8 focus groups in 4 cities involving a total of 29 taxpayers who prepared their own federal income tax returns and 31 tax practitioners. According to IRS, “even though there was not a dominant trend from the [focus groups], taxpayers noted the cost of two stamps and the confusion of two envelopes as burden issues.” Although focus groups are useful in providing insight on a particular issue, they are not statistically representative of the population and should not, in and of themselves, provide the basis for far-reaching conclusions.
Given that and after reviewing transcripts of the focus groups and a July 7, 1997, summary report on the focus group results, we believe that IRS still does not have conclusive evidence that the additional taxpayer burden that may be caused by requiring the use of two envelopes would outweigh the millions of dollars in additional costs the government is incurring to have banks sort and ship tax returns. For example, although the report noted that participants were concerned about the extra postage associated with using two envelopes, it went on to say that taxpayers participating in the focus groups viewed the extra cost “as something that would be accepted” and that “some taxpayers were willing to accept additional burden so that IRS could operate more effectively.” In that regard, focus group participants were not told about the amount of additional cost being incurred by the government to have banks sort and ship the tax returns. The report also said that “several participants voiced concern about the check being separated from the return prior to receipt by the IRS.” However, there is no evidence that participants were told how the two-envelope procedure compares to the current procedure and that, even under the current procedure, the tax return and check get separated. Under the current procedure, even though returns and payments are mailed in one envelope to one location (the lockbox), they are separated at the bank. The return is shipped to IRS for processing while the bank processes the payment. In a July 15, 1997, memorandum to the then Acting Commissioner of Internal Revenue, Treasury’s Fiscal Assistant Secretary provided his views on the processing of Form 1040 tax payments. He noted that he had sought IRS’ support for having taxpayers mail their returns and payments separately but that IRS had rejected that option because of the perceived taxpayer burden. 
That left only two viable options in the Assistant Secretary’s opinion—continue the current arrangement or return the processing of Form 1040 tax payments to IRS’ service centers. The Assistant Secretary noted, however, that it was his understanding that equipment, personnel, and space issues and the lack of sufficient planning time made it infeasible to move processing back to the service centers for fiscal year 1998. Thus, he concluded that it would be in the best interest of the government to continue the current lockbox arrangement for at least 1 more year. He said that this issue should be reviewed in March/April 1998 to make decisions about fiscal year 1999 and that IRS should continue to seek ways to reduce the cost of this program, by either changing processing procedures or by continuing its search for a resolution on how to direct the Form 1040 returns to the service centers without causing significant taxpayer burden. Our concerns about the Form 1040 lockbox program were heightened this year by new information relating to the interest cost avoidance figures that IRS and FMS have used to show the program’s cost effectiveness. This new information calls into question not just the decision to have tax returns sent to the banks but the more basic decision to use lockboxes to process Form 1040 payments. As part of its review, the lockbox task force compared how much various procedural options for processing Form 1040 remittances would cost IRS and FMS in 1998. Assuming a volume of 11,373,133 items, the task force estimated that the current lockbox procedure would cost about $23.3 million, compared with about $14.5 million if two envelopes were used and about $12.8 million if IRS decided to stop using lockboxes to process Form 1040 remittances and return that function to the service centers. 
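Dividing the task force's cost estimates by its assumed volume puts the three options on a per-item basis (our arithmetic, using only the figures above):

```python
volume = 11_373_133  # Form 1040 remittances assumed by the task force for 1998

option_costs = {
    "current lockbox procedure": 23.3e6,
    "two-envelope procedure": 14.5e6,
    "service center processing": 12.8e6,
}
for option, cost in option_costs.items():
    print(f"{option}: ${cost / volume:.2f} per item")
```

On a per-item basis, the current procedure works out to roughly $2.05, against about $1.27 for the two-envelope procedure and about $1.13 for service center processing.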
Although this comparison would seem to argue against the use of lockboxes, IRS and FMS assume that having lockboxes process Form 1040 remittances generates savings, in the form of interest cost avoidance, that more than offset the increased IRS and FMS costs. According to FMS, for example, lockboxes processed about 9.7 million Form 1040 tax payments from October 1996 through July 1997, which resulted in an interest cost avoidance of about $23.8 million. As in past years, the interest cost avoidance was calculated on the basis of a general assumption that lockboxes can process and deposit tax payments an average of 3 days faster than IRS service centers during peak workload periods. The validity of that assumption is critical because, according to the lockbox task force, if lockboxes are not processing payments at least 2 days faster than service centers, the amount of interest cost avoidance would be insufficient to offset the additional costs associated with having lockboxes handle tax returns. In that regard, the results of an IRS-commissioned study, issued in March 1997, show that, on average, lockboxes processed payments only about 1 day faster than service centers. However, that comparison covered peak and nonpeak workload periods; there was not a similar comparison just for the peak workload period (April 13 to May 1, 1995), when most of the Form 1040 payments are received and when differences in processing times might be more pronounced. Although the reported study results are insufficient to make an informed judgment, they do raise questions about the assumption that lockboxes process payments 3 days faster than service centers. FMS had planned to commission another study to assess the comparative processing times for lockboxes and service centers. However, those plans have been deferred, and FMS could give us no assurance when such a study would be done. 
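Because interest cost avoidance scales roughly linearly with how many days faster the money is deposited, the sensitivity of the reported figure to the days-faster assumption is easy to sketch (our illustration, not an FMS calculation):

```python
# FMS reported about $23.8M in interest cost avoidance for roughly 9.7M
# payments, calculated on the assumption that lockboxes deposit 3 days
# faster than service centers.
reported_avoidance = 23.8e6
assumed_days_faster = 3
per_day = reported_avoidance / assumed_days_faster  # avoidance per day of speedup

# The IRS-commissioned study measured only about a 1-day advantage on
# average; the task force put the breakeven point at roughly 2 days.
for days in (1, 2, 3):
    print(f"{days} day(s) faster: about ${per_day * days / 1e6:.1f} million avoided")
```

At the study's measured 1-day advantage, the avoidance would shrink to roughly $7.9 million, well below the reported $23.8 million and, by the task force's 2-day breakeven standard, insufficient to offset the added lockbox costs.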
One of IRS’ major business objectives is to move away from a labor-intensive tax return processing system that relies on thousands of employees transcribing data from paper tax returns and move to an electronic system that reduces processing costs and eliminates transcription errors. One strategy for achieving that objective is to reduce the number of paper returns by increasing the number of returns filed electronically. We discussed IRS’ progress in that area earlier in this report. For returns that will continue to be filed on paper, IRS planned to achieve its objective through document imaging and optical character recognition systems. SCRIPS is one such system. IRS uses SCRIPS, which was implemented in 1994, to process tax returns filed on Form 1040EZ, Federal Tax Deposit (FTD) coupons, and information returns (e.g., Forms 1099). In January 1997, we reported that one of the major problems with SCRIPS was slow processing rates (i.e., the number of documents processed per hour). As the data in table 4 show, this problem intensified with respect to Forms 1040EZ and FTD coupons in 1997. An IRS official in the SCRIPS project office attributed the decline in the processing rate for Forms 1040EZ, at least in part, to IRS’ decision, as discussed earlier, to issue TeleFile packages that did not have a Form 1040EZ. Because the package contained no form that the taxpayer could use to file on paper, it also contained no preprinted label for the taxpayer to affix to a paper form. Successful optical character recognition operations depend, in part, on the kind of clearly printed data provided by a label. In that regard, IRS estimated that about 95 percent of the Forms 1040EZ processed through SCRIPS in 1997 did not have the scannable preprinted address label, compared with about 50 percent in 1996, causing a significant increase in the amount of data IRS had to manually transcribe. 
IRS attributed the decrease in the processing rate for FTD coupons to a problem associated with the way in which blocks of data are transferred throughout the system. In those instances where characters on the document cannot be identified correctly by the scanner, the electronic block of work that contains those documents is sent to a workstation operator. That operator retrieves the block of work, reviews the image of the document to determine what corrections are needed, then updates the file, which is sent back to the file server. Each time a block of work is moved from one component of SCRIPS to another, a time delay results. According to IRS, the contractor provided a software solution to this problem and, since then, processing times have improved, except on those days when the service centers have to process the largest volumes of FTD coupons. IRS made noteworthy progress in several critical areas during the 1997 filing season. It achieved significant increases in telephone accessibility and alternative filings, and it implemented a major change in dealing with missing or incorrect SSNs, all without any noticeable major problems. A couple of these successes involved trade-offs. By detailing staff to answer telephone calls, IRS improved accessibility but, according to IRS, the detailing of that staff caused it to forgo some enforcement revenue. By not including a Form 1040EZ in the tax package sent potential TeleFile users, IRS apparently encouraged some taxpayers to file their returns by telephone. But, in doing so, IRS imposed some burden on recipients of the TeleFile package who needed or wanted a Form 1040EZ and caused a reduction in SCRIPS processing rates. Such trade-offs are inevitable, given the fact that IRS does not have unlimited resources, and we saw nothing to indicate that either trade-off was inappropriate. 
We are concerned, however, about the cost effectiveness of IRS’ use of lockboxes rather than service centers to receive and process Form 1040 tax payments. On the basis of the data currently available, we do not believe IRS is in a position to make an informed decision on whether to continue to use lockboxes for that purpose. The results of an IRS-commissioned study suggest that the interest cost avoidance figures IRS and FMS have cited to support the use of lockboxes to process Form 1040 payments may not be valid. Although FMS had planned another study to further assess the comparative processing times and costs for lockboxes and service centers, those plans have been deferred, and it is unclear when such a study will be done. As a result, it is unclear whether the government actually realizes savings, and if so how much, through the use of lockboxes. Because IRS has decided to continue the use of lockboxes for the 1998 filing season and it is too late to alter that decision, it seems that IRS and FMS have an opportunity during that filing season to develop the definitive data needed to make more informed decisions on the future use of lockboxes. Such data would include the average amount of time needed by both banks and IRS to process Form 1040 tax payments during both peak and off-peak periods and the average interest costs to the government of borrowing during those periods. It seems that such data would be readily available and there is still time to make any arrangements that might be needed to capture these data during the 1998 filing season. We are equally concerned that IRS’ decision to continue having taxpayers send both tax returns and payments to lockboxes is also based on inconclusive evidence. Another option would be to have taxpayers mail their payments to a lockbox and their tax returns to IRS. 
An IRS/FMS task force studied these two options and recommended the latter because it would save the government millions of dollars in payments to banks for processing the tax returns. Although Treasury’s Fiscal Assistant Secretary also supported this recommendation, IRS decided against this change because of the added taxpayer burden, including mailing cost. However, our review indicated that IRS did not have persuasive evidence on the amount of taxpayer burden, how this burden would compare with the government’s savings, or whether taxpayers considered the burden to be unreasonable. The new evidence available to IRS in making its decision for the 1998 filing season was from taxpayer focus groups. The number of taxpayers participating in the focus groups was small, and the information they provided on burden was inconclusive. In that regard, the participants were not provided all of the information needed to make an informed response about the burden associated with mailing tax returns in one envelope and tax payments in another. Specifically, they were not told that mailing both tax returns and tax payments to lockboxes is costing the government, and thus taxpayers, millions of extra dollars. Had the focus group participants been informed about all the relevant factors—the additional postage and burden involved in separating returns from payments and mailing them in two envelopes versus the savings that would accrue to taxpayers overall in the form of savings to the government if this were done—we believe that they would have been in a better position to assess the trade-offs involved in deciding on the reasonableness of the burden involved. IRS has three basic options concerning the use of lockboxes for Form 1040 tax payments: (1) continue the existing practice, (2) discontinue the use of lockboxes altogether, or (3) revise the existing practice to have taxpayers send their returns to service centers and their payments to lockboxes. 
In deciding which option to select, IRS faces the following two basic issues. First, in deciding between options 1 and 2, IRS needs to know whether using lockboxes to process Form 1040 tax payments generates a net savings to the government. Second, if the use of lockboxes generates significant savings, which would remove option 2 from the equation, IRS needs to know, in choosing between options 1 and 3, whether the additional savings to the government of having taxpayers send their tax returns to IRS service centers outweigh the taxpayer burden associated with taxpayers sending two different envelopes to two locations. Although these issues would involve different analyses, the results and related decisions are intertwined. That is, if the analysis for the first issue shows that using lockboxes for Form 1040 tax payments does not result in a significant net savings to the government and IRS decides to stop using lockboxes, the second issue would become moot. We recommend that the Commissioner of Internal Revenue require the appropriate IRS officials to conduct, during the 1998 tax filing season, the analyses necessary to determine (1) whether there are net savings to the government attributable to the use of lockboxes to process Form 1040 tax payments and (2) whether the potential savings of requiring affected taxpayers to mail their tax returns to IRS and their tax payments to lockboxes in separate envelopes outweigh the estimated additional cost and other burden that this could be expected to cause taxpayers. In doing these analyses, the officials should collect definitive data on (1) the actual time and interest cost differences between sending tax payments to lockboxes and sending them to IRS during peak and off-peak periods and (2) whether taxpayers believe, given the processing cost savings to the government, that it would cause them an unreasonable burden to mail tax returns and tax payments to different locations. 
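The two-stage framework just described can be sketched as a simple decision routine. The `choose_lockbox_option` function and the dollar figures below are purely illustrative assumptions, not IRS data or policy; the report recommends collecting the actual cost and burden figures during the 1998 filing season.

```python
# Illustrative sketch (not IRS policy) of the two-stage lockbox decision
# described above. All inputs are hypothetical placeholders.

def choose_lockbox_option(net_savings_from_lockboxes: float,
                          extra_savings_returns_to_irs: float,
                          taxpayer_burden_cost: float) -> int:
    """Return option 1, 2, or 3 per the framework in the report."""
    # Issue 1: if lockboxes yield no significant net savings, drop them
    # (option 2), and the burden question (issue 2) becomes moot.
    if net_savings_from_lockboxes <= 0:
        return 2
    # Issue 2: weigh the extra savings of sending returns to IRS service
    # centers against the burden of taxpayers mailing two envelopes.
    if extra_savings_returns_to_irs > taxpayer_burden_cost:
        return 3  # payments to lockboxes, returns to service centers
    return 1      # keep the existing single-envelope lockbox practice

# Example with hypothetical dollar figures:
print(choose_lockbox_option(5_000_000, 3_000_000, 1_000_000))  # -> 3
```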
If the analyses indicate that using lockboxes does not produce a net savings to the government, we recommend that the Commissioner take steps to have IRS service centers process all Form 1040 payments starting with the 1999 tax filing season. If the analyses indicate that the use of lockboxes produces savings and that taxpayers would support the practice of mailing returns and payments to different locations, we recommend that the Commissioner change the current lockbox procedures as soon as possible and instruct taxpayers to send their returns to IRS and their payments to lockboxes. We obtained written comments on a draft of this report from the Deputy Commissioner of Internal Revenue (see app. II). He said that IRS generally agreed with our findings and recommendations and that the ultimate determination of the benefit of using lockboxes to the taxpaying public requires information and analysis by both IRS and FMS. He said that IRS would pursue a study with FMS during fiscal year 1998 to reach a long-term decision about lockbox processing and that IRS would welcome the opportunity to assist FMS in analyzing the savings to the government attributable to the use of lockboxes to process Form 1040 tax payments. The Deputy Commissioner also said that (1) it would be impossible to determine conclusively whether any savings from having taxpayers mail their returns and payments in separate envelopes outweighed the additional cost and other burden to taxpayers because “the perceived burden of this new way of paying and filing may involve intangibles which cannot be measured (e.g., a negative reaction to changes in procedures)” and (2) a broad-based study to determine taxpayers’ perception of burden and their willingness to accept that burden would be necessary to satisfy our recommendation. In addition, the Acting Chief of Customer Service, at a meeting to discuss IRS’ comments, told us that IRS would always come down on the side of reduced burden. 
We agree that burden cannot be measured conclusively. However, it is not necessary to conclusively measure burden before deciding whether to use two envelopes. In fact, IRS has made other decisions involving new ways of paying and filing without measurable data on burden. For example, the decision to not include a Form 1040EZ in the TeleFile tax package involved a change in procedures that caused additional taxpayer burden by requiring taxpayers who could not or did not want to use TeleFile to find a Form 1040EZ elsewhere. IRS made that decision although it could not measure the impact of the additional burden on taxpayers. In the TeleFile case, IRS did not come down on the side of reduced burden but decided, instead, that the additional burden, though unmeasured, was acceptable given the expected benefits. In fact, IRS plans to send out the same kind of abbreviated TeleFile tax package for the 1998 filing season even though 12 percent of the TeleFile nonusers indicated that the package IRS sent out for the 1997 filing season caused a great deal of inconvenience. We believe that asking taxpayers to use two envelopes could be another instance in which IRS might accept a small amount of burden, if the potential savings to the government are significant. It is unclear what, if anything, IRS intends to do to get better information on burden. However, if it decides to obtain more definitive data, it is important that it structure its data collection to get specific input from taxpayers on their willingness to assume the additional burden of dealing with two envelopes in light of the savings to the government. Among other things, that would mean asking them to compare the two-envelope approach with the current approach and making sure, for both approaches, that they know the government’s costs and benefits.
With respect to our second recommendation, the Deputy Commissioner indicated that necessary studies could not be done and analyzed in time to make changes for the 1999 filing season, given the lead time needed in awarding contracts for the printing of tax packages. We do not understand why this would be the case. It seems to us that at least the first stage of the analysis sought by our recommendation—whether the government saves money by using lockboxes—can be made in time. Our prior analyses of IRS data indicate that the key data needed to make that determination—the comparative time it takes lockboxes and service centers to process and deposit Form 1040 payments—should be readily available and can be compiled and analyzed relatively quickly. There is no reason we are aware of that such analysis could not be done using data from the 1997 filing season, rather than waiting for new data from the 1998 filing season. Thus, if the analysis shows that the government is not benefiting from the use of lockboxes, IRS should have time to make the necessary changes to the tax packages for 1999. The second part of the analysis, which considers burden, would only be needed if the first part shows that using lockboxes to process Form 1040 payments saves the government money. If the analysis shows that it does not save the government money, IRS could change the procedure for handling Form 1040 tax payments without further analyzing burden. The revised procedure would still involve one envelope, but the envelope would be mailed to a service center rather than a lockbox. Since that change would result in all returns (remittance and nonremittance) being mailed to IRS, it would not only avoid the additional burden of having taxpayers deal with two envelopes but also reduce existing burden by negating the need for taxpayers to deal with two mailing labels. 
We revised our second recommendation to recognize the different decisions that will confront IRS depending on the results of the analyses called for in our first recommendation. As part of that revision, the reference to the 1999 filing season now applies only if the first part of the analysis shows that the government is not saving money by using lockboxes to process Form 1040 payments. In commenting on this report, the Deputy Commissioner identified a number of actions IRS took during the 1997 filing season and asked that we include the information in our report (see app. II). Some of the actions identified by the Deputy Commissioner, such as those relating to electronic tax administration, level of access, and complex tax law questions, are discussed in our report. However, we did no audit work on several other actions mentioned by the Deputy Commissioner and thus cannot comment on their effectiveness. We are sending copies of this report to the Subcommittee’s Ranking Minority Member, the Chairmen and Ranking Minority Members of the House Committee on Ways and Means and the Senate Committee on Finance, various other congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, the Director of the Office of Management and Budget, and other interested parties. Major contributors to this report are listed in appendix III. Please contact me on (202) 512-9110 if you have any questions.

To assess the ability of taxpayers to reach IRS by telephone to ask a question about the tax law or their accounts, we conducted a nonstatistical test of IRS’ toll-free telephone assistance system. Our results relate just to the test calls; they cannot be projected. To conduct the test, we placed telephone calls at various times during each workday from March 31 through April 15, 1997. We made our calls from five metropolitan areas—Atlanta, Chicago, Kansas City, San Francisco, and Washington, D.C.
Each attempt to contact IRS consisted of up to five calls spaced 1 minute apart. If we reached IRS during any of the five calls and made contact with an assistor, we considered the attempt successful. If we reached IRS during any of the five calls but were put on hold for more than 7 minutes without talking to an assistor, we abandoned the call, did not dial again, and considered the attempt unsuccessful (abandoned). If we received a busy signal, we hung up, waited 1 minute, and then redialed. If after four redials (five calls in total) we had not reached IRS, we considered the attempt unsuccessful. In conducting our test, we did not ask questions of the assistors because it was not our intent to assess the accuracy of their assistance. We attempted to contact IRS 330 times. Of 330 attempts to contact an assistor, 211 (64 percent) were successful—162 on the first call, 22 on the second call, and 27 after 3 to 5 calls. When the 16 calls that resulted in access to IRS’ voice messaging system were added, accessibility to IRS assistance increased to 69 percent. In another 69 cases (21 percent), we accessed IRS’ system but were put on hold more than 7 minutes and thus hung up before making contact with an assistor. The remaining 34 attempts (10 percent) were aborted after we received busy signals on each of our 5 dialing attempts. Our 330 attempts to contact an assistor required a total of 584 calls to IRS’ toll-free telephone number. Of those 584 calls, we succeeded in contacting an IRS assistor 211 times—a 36-percent accessibility rate. We followed the above methodology to conduct our 1995 test, but we placed telephone calls from two additional metropolitan areas (Cincinnati and New York) and for two separate 2-week periods (January 30 through February 11, 1995, and April 3 through April 15, 1995). Results of the 1995 test cited in the body of this report are only for the 2-week period from April 3 through April 15, 1995. 
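As a check on the figures above, the rates can be recomputed directly from the reported counts. All numbers below are taken from the test results described above; the arithmetic simply reproduces the percentages cited.

```python
# Recomputing the accessibility rates from GAO's 1997 test calls.
# All counts come directly from the report itself.

attempts = 330           # attempts to contact an assistor
assistor_contacts = 211  # attempts that reached an IRS assistor
voice_messaging = 16     # calls that reached IRS' voice messaging system
total_calls = 584        # individual dials across all attempts (up to 5 per attempt)

attempt_rate = assistor_contacts / attempts                       # share of attempts that succeeded
combined_rate = (assistor_contacts + voice_messaging) / attempts  # including voice messaging
per_call_rate = assistor_contacts / total_calls                   # accessibility per individual dial

print(f"{attempt_rate:.0%} {combined_rate:.0%} {per_call_rate:.0%}")  # -> 64% 69% 36%
```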
The following are GAO’s comments on IRS’ letter dated November 26, 1997.

1. IRS says that lockbox payments of $213 billion were deposited during fiscal year 1997. That figure covers all tax payments processed by the lockbox banks. The lockbox discussion in our report focuses only on the processing of Form 1040 tax payments.

2. IRS says that the SCRIPS sites succeeded in processing over 90 percent of the Forms 1040EZ through SCRIPS. However, only 5 of IRS’ 10 service centers have SCRIPS. The 90-percent figure cited by IRS means that SCRIPS was used to process 90 percent of the Forms 1040EZ filed at those 5 centers. The other five centers used the traditional keypunching system to process the Forms 1040EZ they received. Also, as noted in our report, while the 5 SCRIPS centers may have processed 90 percent of the Forms 1040EZ they received, they did so at a slower rate than in 1996.

3. IRS says that the number of telephone calls answered increased from 99.2 million in fiscal year 1996 to 103.9 million in fiscal year 1997. These numbers differ from the numbers cited in table 2 of our report because (1) our numbers are for the filing season (January 1 through mid-April) while IRS’ numbers are for the fiscal year (October 1 through September 30) and (2) our numbers include just those calls answered by IRS assistors, while IRS’ numbers include calls answered by assistors and by automated systems, such as TeleTax (a system that has prerecorded information on about 150 topics).

4. IRS cites an initial contact resolution rate of above 95 percent. However, as IRS says, that rate only covers walk-in contacts and correspondence. It does not reflect the extent to which telephone inquiries are resolved with one contact.

Major contributors to this report: Sharon K. Caporale, Evaluator; Suzy Foster, Senior Evaluator.
Pursuant to a congressional request, GAO assessed the Internal Revenue Service's (IRS) performance during the 1997 tax filing season, focusing on: (1) the ability of taxpayers seeking answers to questions to reach IRS via the telephone; (2) the number of returns filed by means other than the traditional paper method; (3) IRS' efforts to deal with returns that have missing or incorrect social security numbers (SSN); (4) the use of banks, known as lockboxes, to process certain tax payments; and (5) performance of the imaging system IRS uses to process certain tax returns.
GAO noted that: (1) the IRS met or exceeded most of its 1997 filing season related performance goals; (2) of particular note is the substantial improvement in two important areas where GAO has criticized IRS' performance in past filing seasons--telephone accessibility and the use of alternative filing methods; (3) according to IRS data, telephone accessibility increased from 20 percent during the 1996 filing season to 51 percent during the 1997 filing season; (4) the number of tax returns filed by means other than the traditional paper method increased by 25 percent over the last year, with the number of returns filed by telephone (TeleFile) showing the most significant increase--65 percent; (5) although the revised tax package apparently contributed to an increase in the use of TeleFile, it also apparently contributed to a decrease in the performance of the Service Center Recognition/Image Processing System (SCRIPS)--a document imaging and optical character recognition system that IRS implemented in 1994 to process Forms 1040EZ and certain other tax documents; (6) another major change during the 1997 filing season involved the procedures IRS used to process returns that were filed with missing or incorrect SSNs; (7) in 1997, as authorized by the Welfare Reform Act of 1996, IRS began treating missing or incorrect SSNs as math errors rather than as issues that, in the past, had to be resolved through a lengthy notice process; (8) as of September 1, 1997, according to IRS, it had protected about $1.46 billion in revenue through the disallowance of claimed credits or dependent exemptions in 1997, more than doubling the amount disallowed using the procedures IRS followed in 1996; (9) one issue that GAO discussed in a previous report that continues to be of concern is the cost-effectiveness of IRS' use of lockboxes to process 1040 tax payments; (10) additional information GAO obtained this year heightened its concern by calling into question a key assumption IRS and the 
Department of the Treasury's Financial Management Service (FMS) have used to calculate the interest cost savings associated with this use of lockboxes; and (11) although FMS had planned a study to further assess interest cost savings, those plans have been deferred, and there is no assurance when such a study will be done.
DHS satisfied or partially satisfied each of the applicable legislative conditions specified in the appropriations act. In particular, the plan, including related program documentation and program officials’ statements, satisfied or provided for satisfying all key aspects of federal acquisition rules, requirements, guidelines, and systems acquisition management practices. Additionally, the plan partially satisfied the conditions that specified (1) compliance with the capital planning and investment review requirements of the Office of Management and Budget (OMB), (2) compliance with DHS’s enterprise architecture, and (3) the plan’s review and approval by DHS’s Investment Review Board, the Secretary of Homeland Security, and OMB. DHS has completely implemented, has partially implemented, is in the process of implementing, or plans to implement all the remaining recommendations contained in our reports on the fiscal years 2002, 2003, and 2004 expenditure plans. Each recommendation, along with its current status, is summarized below: Develop a system security plan and privacy impact assessment. The department has partially implemented this recommendation. First, the US-VISIT program has developed a security plan that provides an overview of system security requirements, describes the controls in place or planned for meeting those requirements, and refers to the applicable documents that prescribe the roles and responsibilities for managing the US-VISIT component systems. However, a security risk assessment of the program has not been completed, and the plan does not include a date for the assessment’s completion. Second, the US-VISIT program has completed a privacy impact assessment for Increment 2. However, the assessment does not satisfy all aspects of OMB guidance for such an assessment, such as fully addressing privacy issues in relevant system documentation. 
Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with the Software Engineering Institute’s (SEI) guidance. The department is in the process of implementing this recommendation. The US-VISIT Acquisition and Program Management Office has initiated a process improvement program and drafted a process improvement plan. The office has also developed processes or plans, some of which are approved and some of which are in draft, for all except one of SEI’s Software Acquisition Capability Maturity Model (SA-CMM) Level 2 key process areas. Ensure that future expenditure plans are provided to the department’s House and Senate Appropriations Subcommittees in advance of US-VISIT funds being obligated. With respect to the fiscal year 2005 expenditure plan, DHS implemented this recommendation by providing the plan to the Senate and House Subcommittees on October 19, 2004. Ensure that future expenditure plans fully disclose US-VISIT system capabilities, schedule, cost, and benefits to be delivered. The department has partially implemented this recommendation. The expenditure plan identifies high-level capabilities and high-level schedule estimates. It also identifies the amounts budgeted for each increment for fiscal years 2003 through 2005, but it does not associate this funding with specific capabilities and benefits. Further, while the plan identifies several benefits and associates these benefits with increments, it does not include any information on related metrics or on progress against achieving any of the benefits. Ensure that future expenditure plans fully disclose how the US-VISIT acquisition is being managed. The department is in the process of implementing this recommendation.
The fiscal year 2005 plan describes some activities being employed to manage the US-VISIT acquisition, such as the governance structure, program office organizational structure, and staffing levels. However, the department does not describe how other important aspects of the program are being managed, such as testing, system capacity, and system configuration. Ensure that human capital and financial resources are provided to establish a fully functional and effective program office. The department has partially implemented this recommendation. As of October 2004, US-VISIT had filled 59 of its 115 government positions, with plans to fill about half the vacant positions once security clearances have been completed. As of November 2004, the program office had filled 88 of a planned 117 contractor positions. The expenditure plan indicates that DHS has budgeted $83 million to maintain the US-VISIT program management structure and baseline operations. Clarify the operational context in which US-VISIT is to operate. The department is in the process of implementing this recommendation. In September 2003, DHS released version 1.0 of its enterprise architecture. We reviewed version 1.0 and found that it is missing, either partially or completely, all the key elements expected in a well-defined architecture, such as descriptions of business processes, information flows among these processes, and security rules associated with these information flows. Since we reviewed version 1.0 of the architecture, DHS has drafted version 2.0. We have not reviewed version 2.0. Determine whether proposed US-VISIT increments will produce mission value commensurate with cost and risks. The department is in the process of implementing this recommendation. US-VISIT developed a cost-benefit analysis for Increment 2B, but it is unclear whether this increment will produce mission value commensurate with cost and risk. 
For example, the analysis addresses only government costs and does not address potential nongovernmental costs. Further, the analysis identifies three alternatives and identifies the third alternative as the preferred choice. However, US-VISIT is pursuing an alternative more closely aligned with alternative 2, because alternative 3 was considered too ambitious to meet statutorily required time lines. Define US-VISIT program office positions, roles, and responsibilities. The department has partially implemented this recommendation. US-VISIT has developed descriptions for positions within each office, and working with the Office of Personnel Management (OPM), it has drafted a set of core competencies that define the knowledge, skills, abilities, and other competencies needed for successful employee performance. Develop and implement a human capital strategy for the US-VISIT program office that provides for staffing positions with individuals who have the appropriate knowledge, skills, and abilities. The department has partially implemented this recommendation. The US-VISIT program office, in conjunction with OPM, has drafted a Human Capital Plan. The plan includes an action plan that identifies activities, proposed completion dates, and the organization responsible for completing these activities. The program office has completed some of the activities called for in the plan, including the designation of a liaison responsible for ensuring alignment between DHS and US-VISIT human capital policies. Develop a risk management plan and report all high risks and their status to the executive body on a regular basis. The department has partially implemented this recommendation. The US-VISIT program office has developed a risk management plan and process and has established a governance structure involving three primary groups—the Risk Review Board, Risk Review Council, and Risk Management Team.
The Risk Review Board represents the highest level of risk management within the program and is composed of senior level staff, such as the program director and functional area directors. However, US-VISIT has not reported high risks beyond this board. Define performance standards for each US-VISIT program increment that are measurable and reflect the limitations imposed by relying on existing systems. The department is in the process of implementing this recommendation. The US-VISIT program office has defined some technical performance measures—such as availability, timeliness, and output quantity—for Increments 1 and 2B, but it has not defined others, such as reliability, resource utilization, and scalability. Additionally, US-VISIT systems documentation does not contain sufficient information to determine the limitations imposed by US-VISIT’s reliance on existing systems that have less demanding performance requirements, such as the 98.0 percent availability of the Treasury Enforcement Communications Systems. Develop and approve test plans before testing begins. These test plans should (1) specify the test environment; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between test cases and the requirements to be verified by the testing. The department is in the process of implementing this recommendation. According to the US-VISIT Systems Assurance Director, the Increment 2B system acceptance test plan was approved on October 15, 2004. However, no documentation was provided that explicitly indicated the approval of the plan. Further, the test plan did not fully address the test environment, include descriptions of tests to be performed, or provide test procedures to be followed in conducting the tests. The plan also did not provide traceability between test cases and the requirements to be verified by the testing.
For example, 15 of the 116 requirements did not have test cases, and 2 requirements were labeled “not testable.” Ensure the independence of the Independent Verification and Validation (IV&V) Contractor. The department is in the process of implementing this recommendation. The US-VISIT Information Technology (IT) Management Office is developing high-level requirements for IV&V, including a strategy and statement of work for acquiring an IV&V contractor. Implement effective configuration management practices, including establishing a US-VISIT change control board to manage and oversee system changes. The department plans to implement this recommendation. The US-VISIT program office has not yet developed or implemented US-VISIT-level configuration management practices or a change control board. The office has developed a draft configuration management plan that describes key configuration management activities that are to be defined and implemented, such as defining and identifying processes and products to be controlled and recording and monitoring changes to the controlled items. The draft plan also proposes a governance structure, including change control boards. Identify and disclose management reserve funding embedded in the fiscal year 2004 expenditure plan to the Appropriations Subcommittees. The department has implemented this recommendation. The US-VISIT program office reported management reserve funding of $33 million for fiscal year 2004 in a briefing to the Subcommittees on Homeland Security, Senate and House Committees on Appropriations. Ensure that all future US-VISIT expenditure plans identify and disclose management reserve funding. With respect to the fiscal year 2005 expenditure plan, DHS implemented this recommendation. The fiscal year 2005 plan specified management reserve funding of $23 million. Assess the full impact of Increment 2B on land ports of entry workforce levels and facilities, including performing appropriate modeling exercises. 
The department has partially implemented this recommendation. The US-VISIT program office conducted an analysis to help determine the impact of Increment 2B on workforce and travelers. According to program officials, additional staff will not be needed to implement this increment at the land borders. In addition, the US-VISIT program office has conducted space utilization surveys at all 166 land ports of entry and has completed survey reports at 16 of the 50 busiest land ports of entry, with the remaining 34 reports scheduled to be completed in the fall of 2004. Although the survey reports indicated that most of the ports reviewed were at or near capacity and that facilities had no room for expansion, the program office maintains that Increment 2B will not require expansion of any facilities and will require only minor modifications. Develop a plan, including explicit tasks and milestones, for implementing all our open recommendations and periodically report to the DHS Secretary and Under Secretary on progress in implementing this plan; also report this progress, including reasons for delays, in all future US-VISIT expenditure plans. The department is in the process of implementing this recommendation. The US-VISIT program office has developed a report for tracking the status of our open recommendations. This report is shared with the program office director but is not shared with the Secretary and Under Secretary. Our observations recognize accomplishments to date and address the need for rigorous and disciplined program management practices relating to describing progress against commitments, managing the exit alternatives pilot, managing system capacity, and estimating cost, as well as collaborating with DHS’s Automated Commercial Environment (ACE) program. An overview of specific observations follows: The program office has acquired the services of a prime integration contractor to augment its ability to complete US-VISIT.
On May 28, 2004, and on schedule, DHS awarded a contract for integrating existing and new business processes and technologies to a prime contractor and its related partners.

The fiscal year 2005 expenditure plan does not describe progress against commitments made in previous plans. Although this is the fourth US-VISIT expenditure plan, it does not describe progress against commitments made in the previous three plans. For example, the fiscal year 2004 plan committed to analyzing, field testing, and initiating deployment of alternative approaches for capturing biometrics during the exit process at air and sea ports of entry. However, while the fiscal year 2005 plan states that US-VISIT was to expand its exit pilot sites during the summer and fall of 2004 and deploy the exit solution during fiscal year 2005, it does not explain the reason for the change or its potential impact. Additionally, the fiscal year 2004 plan stated that $45 million in fiscal year 2004 was to be used for exit activities. However, the fiscal year 2005 plan states that $73 million in fiscal year 2004 funds were to be used for exit activities, but it does not highlight this difference or address the reason for the change in amounts.

The exit capability alternatives are faced with a compressed time line, missed milestones, and potentially reduced scope. In January 2004, US-VISIT deployed an initial exit capability as a pilot to two ports of entry, while simultaneously developing other exit alternatives. The May 2004 Exit Pilot Evaluation Plan stated that all exit pilot evaluation tasks were to be completed by September 2004. The plan allotted about 3 months to conduct the evaluation and report the results. However, an October 2004 schedule indicated that all exit pilot evaluation tasks were to be completed between late October 2004 and December 2004, which is about a 2-month evaluation and reporting period.
As of early November 2004, exit alternatives were deployed and operating in only 5 of the 15 ports of entry that were scheduled to be operational by November 1, 2004. According to program implementation officials, this was because of delays in DHS granting security clearances to the civilian employees who would operate the equipment at the ports of entry. Additionally, the Evaluation Execution Plan describes the sample size of outbound passengers required to be evaluated at each port. This sample size will produce a specified confidence level in the evaluation results. Because of the reduced evaluation time frame, the program still plans to collect the desired sample size at each port by adding more personnel to the evaluation teams if needed. These changing facts and circumstances surrounding the exit pilot introduce additional risk concerning US-VISIT’s delivery of promised capabilities and benefits on time and within budget. US-VISIT and ACE collaboration is moving slowly. In February 2003, we recognized the relationship between US-VISIT and ACE and recommended steps to promote close collaboration between these two programs. Since then, US-VISIT and ACE managers have met to identify potential areas for collaboration between the two programs and to clarify how the programs can best support the DHS mission and provide officers with the information and tools they need. However, explicit plans have not been developed nor actions taken to understand US- VISIT/ACE dependencies and relationships. Because both programs are making decisions on how to further define, design, develop, and implement these systems, it is important that they exploit their relationships to reduce rework that might be needed to integrate the programs. US-VISIT system capacity is being managed in a compartmentalized manner. Currently, DHS does not have a capacity management program. 
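The trade-off described above, where a shorter evaluation window must still yield the sample size that produces the specified confidence level, can be illustrated with the standard formula for estimating a proportion. This is a generic statistical sketch, not the Evaluation Execution Plan's actual methodology; all figures are illustrative.

```python
import math

# Common two-sided z-scores for standard confidence levels.
Z_SCORES = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def required_sample_size(confidence, margin_of_error, proportion=0.5):
    """Sample size needed to estimate a proportion at the given
    confidence level and margin of error (worst case p = 0.5)."""
    z = Z_SCORES[confidence]
    n = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

# Shrinking the evaluation window does not shrink the required sample:
# the same confidence level demands the same number of travelers.
print(required_sample_size(0.95, 0.05))  # 385 travelers per port
print(required_sample_size(0.99, 0.05))  # 664 travelers per port
```

At a fixed margin of error, a higher confidence level raises the per-port sample size, which is why a compressed schedule forces adding evaluation personnel rather than evaluating fewer travelers.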
Instead, the US-VISIT IT Management Office relies on the respective performance management activities of the pre-existing systems, such as those managed by U.S. Customs and Border Protection and U.S. Immigration and Customs Enforcement. Until US-VISIT has developed a comprehensive performance management and capacity planning program, the program will continue to be reactive in its efforts to ensure that US-VISIT system resources are sufficient to meet current workloads, increasing the risk that it may not be able to adequately support mission needs. The cost estimating process used for Increment 2B did not follow some key best practices. The US-VISIT cost estimate did not fully satisfy most of the criteria called for in SEI guidance. For example, costs related to development and integration tasks for US-VISIT component systems are specified, but information about estimated software lines of code is not. Additionally, no one outside the US-VISIT program office reviewed and concurred with the cost estimating categories and methodology. Without reliable cost estimates, the ability to make informed investment decisions and effectively manage progress and performance is reduced. The fiscal year 2005 expenditure plan (with related program office documentation and representations) either partially satisfies or satisfies the legislative conditions imposed by Congress. Further, steps are planned, initiated, under way, or completed to address all of our open recommendations. However, overall progress in addressing the recommendations has been slow, leaving considerable work to be done. 
Given that most of these open recommendations are aimed at correcting fundamental limitations in DHS’s ability to manage the program in a way that ensures the delivery of (1) mission value commensurate with costs and (2) promised capabilities on time and within budget, it is important that DHS implement the recommendations quickly and completely through effective planning and continuous monitoring and reporting. Until this occurs, the program will be at high risk of not meeting its stated goals on time and within budget.

To its credit, the program office now has its prime contractor on board to support both near-term increments and to plan for and deliver the yet-to-be-defined US-VISIT strategic solution. However, it is important to recognize that this accomplishment is a beginning and not an end. The challenge for DHS is now to effectively and efficiently work with the prime contractor in achieving desired mission outcomes. To accomplish this, it is important that DHS move swiftly in building its program management capacity, which is not yet in place, as shown by the status of our open recommendations and our recent observations about (1) economic justification of US-VISIT Increment 2B, (2) completion of the exit pilot evaluation, (3) collaboration with a closely related import/export processing and border security program, (4) system capacity management activities, and (5) cost estimating practices.

Moreover, it is important that DHS improve its measurement and disclosure to its Appropriations Subcommittees of its progress against commitments made in prior expenditure plans, so that the Subcommittees’ ability to effectively oversee US-VISIT’s plans and progress is not unnecessarily constrained. Nevertheless, the fact remains that the program continues to invest hundreds of millions of dollars for a mission-critical capability under circumstances that introduce considerable risk that cost-effective mission outcomes will not be realized.
At a minimum, it is incumbent upon DHS to fully disclose these risks, along with associated mitigation steps, to executive and congressional leaders so that timely and informed decisions about the program can be made. To better ensure that the US-VISIT program is worthy of investment and is managed effectively, we reiterate our prior recommendations and further recommend that the Secretary of Homeland Security direct the Under Secretary for Border and Transportation Security to ensure that the US-VISIT program director takes the following five actions:

Fully and explicitly disclose in all future expenditure plans how well DHS is progressing against the commitments that it made in prior expenditure plans.

Reassess its plans for deploying an exit capability to ensure that the scope of the exit pilot provides for adequate evaluation of alternative solutions and better ensures that the exit solution selected is in the best interest of the program.

Develop and implement processes for managing the capacity of the US-VISIT system.

Follow effective practices for estimating the costs of future increments.

Make understanding the relationships and dependencies between the US-VISIT and ACE programs a priority matter, and report periodically to the Under Secretary on progress in doing so.

In written comments on a draft of this report, signed by the Acting Director, Departmental GAO/IG Liaison Office (reprinted in app. II), DHS concurred with our findings and recommendations. DHS also stated that it appreciated the guidance that the report provides for future efforts and described actions taken and progress made in implementing the US-VISIT program.

We are sending copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. We are also sending copies to the Secretary of Homeland Security, the Secretary of State, and the Director of OMB.
Copies of this report will also be available at no charge on our Web site at www.gao.gov. Should you or your offices have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at hiter@gao.gov. Another contact and key contributors to this report are listed in appendix III.

facilitate legitimate travel and trade, ensure the integrity of the U.S. immigration system, and protect the privacy of our visitors.

meets the capital planning and investment control review requirements established by the Office of Management and Budget (OMB), including OMB Circular A-11, part 7;
complies with DHS’s enterprise architecture;
complies with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government;
is reviewed and approved by the DHS Investment Review Board, the Secretary of Homeland Security, and OMB; and
is reviewed by GAO.

Pub. L. 108-334 (Oct. 18, 2004). OMB Circular A-11 establishes policy for planning, budgeting, acquisition, and management of federal capital assets.

1. determine whether the US-VISIT fiscal year 2005 expenditure plan satisfies the legislative conditions,
2. determine the status of our US-VISIT open recommendations, and
3. provide any other observations about the expenditure plan and DHS’s management of US-VISIT.

We conducted our work at US-VISIT offices in Rosslyn, Virginia, from June 2004 through November 2004, in accordance with generally accepted government auditing standards. Details of our scope and methodology are described in attachment 1 of this briefing.

Partially satisfies: satisfies or provides for satisfying many, but not all, key aspects of the condition that we reviewed. Satisfies: satisfies or provides for satisfying every aspect of the condition that we reviewed.

Planned: actions are planned to implement the recommendation. In process: actions are under way to implement the recommendation. Implemented: actions have been taken that fully implement the recommendation.
The Software Acquisition Capability Maturity Model (SA-CMM®) developed by Carnegie Mellon University’s Software Engineering Institute (SEI) defines acquisition process management controls for planning, managing, and controlling software-intensive system acquisitions. With respect to the fiscal year 2005 expenditure plan. The purpose of independent verification and validation is to provide an independent review of processes and products throughout the acquisition and deployment phase.

Results in Brief: Objective 3 Observations

The program office has acquired the services of a prime integration contractor to augment its ability to complete US-VISIT. The fiscal year 2005 expenditure plan does not describe progress against commitments (e.g., capabilities, schedule, cost, and benefits) made in previous plans. The exit capability alternatives evaluation is faced with a compressed time line, missed milestones, and potentially reduced scope. US-VISIT and Automated Commercial Environment (ACE) collaboration is moving slowly. US-VISIT system capacity is being managed in a compartmentalized manner. The cost estimating process used for Increment 2B did not follow some key best practices. ACE is a new trade processing system planned to support the movement of legitimate imports and exports and strengthen border security.

To assist DHS in managing US-VISIT, we are making five recommendations to the Secretary of DHS. In their comments on a draft of this briefing, US-VISIT program officials stated that they generally agreed with our findings, conclusions, and recommendations.
collecting, maintaining, and sharing information on certain foreign nationals who enter and exit the United States; identifying foreign nationals who (1) have overstayed or violated the terms of their visit; (2) can receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials; detecting fraudulent travel documents, verifying traveler identity, and determining traveler admissibility through the use of biometrics; and facilitating information sharing and coordination within the border management community.

An indefinite-delivery/indefinite-quantity contract provides for an indefinite quantity, within stated limits, of supplies or services during a fixed period of time. The government schedules deliveries or performance by placing orders with the contractor. Accenture’s partners include, among others, Raytheon Company, the Titan Corporation, and SRA International, Inc.

8 C.F.R. 235.1(d)(1)(iv) and 215.8(a)(2) state that classes of travelers that are not subject to US-VISIT are foreign nationals admitted on A-1, A-2, C-3 (except for attendants, servants, or personal employees of accredited officials), G-1, G-2, G-3, G-4, NATO-1, NATO-2, NATO-3, NATO-4, NATO-5, or NATO-6 visas; certain Taiwan officials who hold E-1 visas and members of their immediate families who hold E-1 visas, unless the Secretary of State and the Secretary of Homeland Security jointly determine that a class of such aliens should be subject to the rule; children under the age of 14; persons over the age of 79; classes of aliens to whom the Secretary of Homeland Security and the Secretary of State jointly determine it shall not apply; and an individual alien to whom the Secretary of Homeland Security, the Secretary of State, or the Director of Central Intelligence determines shall not be subject to the rule.

At that time, the pilot employed a self-serve kiosk to capture biographic information and biometric data (two index fingerprints).
The pilots are deployed to the Miami Royal Caribbean seaport and the Baltimore/Washington International Airport, as well as Chicago O’Hare International Airport, Denver International Airport, and Dallas/Ft. Worth International Airport. The mobile device captures electronic fingerprints and photographs. The hybrid combines the enhanced kiosk, which is used to generate a receipt, with the mobile device, which scans the receipt and the electronic fingerprint of the traveler at the gate to verify exit.

As of November 18, 2004, US-VISIT had processed about 13 million foreign nationals, including about 2 million from visa waiver countries. According to US-VISIT, it had positively matched over 1,500 persons against watch list databases.

Pub. L. 108-299 (Aug. 9, 2004) extended the deadline from October 26, 2004, to October 26, 2005. Secondary inspection is used for more detailed inspections that may include checking more databases, conducting more intensive interviews, or both. As required by the Immigration and Naturalization Service Data Management Improvement Act of 2000, 8 U.S.C. 1365a(d)(2). The three sites are Laredo, Texas; Port Huron, Michigan; and Douglas, Arizona. Radio frequency (RF) technology relies on proximity cards and card readers. RF devices read the information contained on the card when the card is passed near the device and can also be used to verify the identity of the cardholder. As required by the Immigration and Naturalization Service Data Management Improvement Act of 2000, 8 U.S.C. 1365a(d)(3).

TECS maintains lookout (i.e., watch list) data, interfaces with other agencies’ databases, and is currently used by inspectors at POEs to verify traveler information and update traveler data. Within TECS are several databases, including the following: Advance Passenger Information System (APIS) includes arrival and departure manifest information provided by air and sea carriers. Crossing History includes information about individuals’ crossing histories.
Lookout data sources include DHS’s Customs and Border Protection and Immigration and Customs Enforcement; the Federal Bureau of Investigation (FBI); legacy DHS systems; the U.S. Secret Service; the U.S. Coast Guard; the Internal Revenue Service; the Drug Enforcement Agency; the Bureau of Alcohol, Tobacco, & Firearms; the U.S. Marshals Service; the U.S. Office of Foreign Asset Control; the National Guard; the Treasury Inspector General; the U.S. Department of Agriculture; the Department of Defense Inspector General; the Royal Canadian Mounted Police; the U.S. State Department; Interpol; the Food and Drug Administration; the Financial Crimes Enforcement Network; the Bureau of Engraving and Printing; and the Department of Justice Office of Special Investigations.

Lookout includes records on individuals of interest. Secondary includes the results of prior secondary inspections performed on an individual, including if the person was admitted or denied entry. US-VISIT Biometric Information File (BIF) includes keys or links to other databases in TECS, IDENT, and ADIS and includes such information as fingerprint identification numbers, name, and date of birth. Addresses includes addresses of individuals. I-94/Non-Immigrant Information System (NIIS) includes information from I-94 forms. US-Visa (Datashare) includes Department of State records of visa applications, such as photographs, biographic information, and fingerprint identification number.

The Arrival and Departure Information System (ADIS) is a system that stores arrival and departure data and that provides query and reporting functions. Automated Biometric Identification System (IDENT) is a system that collects and stores biometric data about foreign visitors. Student Exchange Visitor Information System (SEVIS) is a system that contains information on foreign students. Computer Linked Application Information Management System (CLAIMS 3) is a system that contains information on foreign nationals who request benefits, such as change of status or extension of stay.
Consular Consolidated Database (CCD) is a system that includes information on whether a visa applicant has previously applied for a visa or currently has a valid U.S. visa. Includes data such as FBI information on all known and suspected terrorists, selected wanted persons (foreign-born, unknown place of birth, previously arrested by DHS), and previous criminal histories for high-risk countries; DHS Immigration and Customs Enforcement information on deported felons and sexual registrants; and DHS information on previous criminal histories and previous IDENT enrollments. Information from the FBI includes fingerprints from the Integrated Automated Fingerprint Identification System. The I-94 form is used to track the arrival and departure of nonimmigrants. It is divided into two parts. The first part is an arrival portion, which includes, for example, the nonimmigrant’s name, date of birth, and passport number. The second part is a departure portion, which includes the name, date of birth, and country of citizenship. At the kiosk, the traveler, guided by a WSA if needed, scans the machine- readable travel documents, provides electronic fingerprints, and has a digital photograph taken. A receipt is printed to provide documentation of compliance with the exit process and to assist in compliance on the traveler’s next attempted entry to the country. After the receipt prints, the traveler proceeds to his/her departure gate. At the conclusion of the transaction, the collected information is transmitted to IDENT. If the device is being operated by a WSA, the WSA provides a printed receipt to the traveler, and the traveler then boards the departure craft. If the mobile device is being operated by a law enforcement officer, the captured biographic and biometric information is checked in near real time against watch lists. Any potential match is returned to the device and displayed visually for the officer. If no match is found, the traveler boards the departure craft. 
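The two mobile-device operating modes just described differ mainly in when, and whether, a watch-list check happens before the traveler boards. A minimal sketch of that branching logic follows; the operator labels, record fields, and watch-list entries are hypothetical placeholders, not the actual system's interfaces.

```python
# Hypothetical watch list of fingerprint identification numbers.
WATCH_LIST = {"FIN-2002"}

def mobile_exit_check(fingerprint_id, operator):
    """Model the mobile exit device's two modes: a workstation attendant
    (WSA) prints a receipt for the traveler; a law enforcement officer
    instead gets a near-real-time watch-list check before boarding."""
    result = {"receipt_issued": operator == "wsa",
              "cleared_to_board": True, "alert": False}
    if operator == "law_enforcement":
        # Captured biometrics are checked against watch lists in near
        # real time; any potential match is displayed for the officer.
        result["alert"] = fingerprint_id in WATCH_LIST
        result["cleared_to_board"] = not result["alert"]
    return result

print(mobile_exit_check("FIN-1001", "wsa"))
print(mobile_exit_check("FIN-2002", "law_enforcement"))
```

The sketch makes the operational difference concrete: in WSA mode the device only documents compliance, while in law enforcement mode it can stop a watch-list match from boarding.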
Under the hybrid alternative, the traveler scans the machine-readable travel documents, provides electronic fingerprints, and has a digital photograph taken. As with the enhanced kiosk alternative, a receipt is printed to provide documentation of compliance with the exit process and to assist in compliance on the traveler’s next attempted entry to the country. However, this receipt has biometrics (i.e., the traveler’s fingerprints and photograph) embedded on the receipt. At the conclusion of the transaction, the collected information is transmitted to IDENT. The traveler presents his or her receipt to the WSA or law enforcement officer at the gate or departure area, who scans the receipt using a mobile device. The traveler’s identity is verified against the biometric data embedded on the receipt. Once the traveler’s identity is verified, he or she is allowed to board the departure craft. The captured information is not transmitted in real time back to IDENT. Data collected on the mobile device are periodically uploaded through the kiosk to IDENT.

When a traveler arrives for inspection, travelers subject to US-VISIT are to be processed at secondary inspection, rather than at primary inspection. Inspectors’ workstations are to use a single screen, which eliminates the need to switch between the TECS and IDENT screens. Datashare includes a data extract from State’s CCD system and includes the visa photograph, biographical data, and the fingerprint identification number assigned when a nonimmigrant applies for a visa.

Chronology of US-VISIT Expenditure Plans

Since November 2002, four US-VISIT expenditure plans have been submitted.
On November 15, 2002, the Immigration and Naturalization Service (INS) submitted to its appropriations subcommittees its first expenditure plan, which outlined $13.3 million in expenditures for contract activities; design, development, and deployment of the Visa Waiver Support System; facilities assessments; biometric standards development; prototyping; IBIS support activities; travel; program office operations; and fingerprint scanner procurements.

On June 5, 2003, the second expenditure plan outlined $375 million in expenditures for system enhancements and infrastructure upgrades, POE information technology (IT) and communication upgrades, facilities planning analysis and design, program management support, proof of concept demonstrations, operations and system sustainment, and training. Effective March 1, 2003, INS became part of DHS.

On January 27, 2004, the third expenditure plan outlined $330 million in expenditures for exit pilots; capability to read biometrically enabled travel documents; land infrastructure upgrades; system development and testing; radio frequency technology deployment to the 50 busiest land POEs; technical infrastructure planning and development; program management; and operations and maintenance.

The current and fourth expenditure plan, submitted on October 19, 2004, outlines $340 million in expenditures (see table, next slide).

Background: Review of Current Expenditure Plan

Fiscal Year 2005 Expenditure Plan Summary (see next slides for descriptions)

Objective 1: Legislative Conditions, Condition 1

The US-VISIT expenditure plan satisfies or partially satisfies each of the legislative conditions. Condition 1. The plan, including related program documentation and program officials’ statements, partially satisfies the capital planning and investment control review requirements established by OMB, including OMB Circular A-11, part 7, which establishes policy for planning, budgeting, acquisition, and management of federal capital assets.
The table that follows provides examples of the results of our analysis, pairing examples of A-11 conditions with the results of our analysis:

Provide justification and describe acquisition strategy: US-VISIT has completed an Acquisition Plan, dated November 2003. The plan provides a high-level justification and description of the acquisition strategy for the system.

Summarize life-cycle costs and cost/benefit analysis, including the return on investment: US-VISIT completed a cost/benefit analysis for Increment 2B on June 11, 2004.

Provide performance goals and measures: The plan includes benefits, but does not identify corresponding metrics. The plan states that performance measures are under development.

Address security and privacy: US-VISIT has developed a security plan that partially satisfies OMB and the National Institute of Standards and Technology security guidance. US-VISIT has not yet conducted a security risk assessment on the overall US-VISIT program. While the plan states the intention to do the assessment, it does not specify when it will be completed. The US-VISIT program published a privacy policy and privacy impact assessment for Increment 2.

Provide risk inventory and assessment: US-VISIT has developed a risk management plan and process for developing, implementing, and institutionalizing a risk management program. Risks are currently tracked using a risk-tracking database.

Objective 1: Legislative Conditions, Condition 2

Condition 2. The plan, including related program documentation and program officials’ statements, partially satisfies the condition that it provide for compliance with DHS’s enterprise architecture (EA).
DHS released version 1.0 of the architecture in September 2003. We reviewed the initial version of the architecture and found that it was missing, either partially or completely, all the key elements expected in a well-defined architecture, such as a description of business processes, information flows among these processes, and security rules associated with these information flows. Since we reviewed version 1.0, DHS has drafted version 2.0 of its EA. We have not reviewed this draft.

Department of Homeland Security Enterprise Architecture Compendium Version 1.0 and Transitional Strategy. GAO, Homeland Security: Efforts Under Way to Develop Enterprise Architecture, but Much Work Remains, GAO-04-777 (Washington, D.C.: Aug. 6, 2004).

According to officials from the Office of the Chief Strategist, concurrent with the development of the strategic vision, the US-VISIT program office has been working with the DHS EA program office in developing version 2.0 to ensure that US-VISIT is aligned with DHS’s evolving EA. According to these officials, US-VISIT representatives participate in both the DHS EA Center of Excellence and the DHS Enterprise Architecture Board. In July 2004, the Center of Excellence reviewed US-VISIT’s submission for architectural alignment with some EA components, but not all. Specifically, the submission included information intended to show compliance with business and data components, but not, for example, the application and technology components. According to the head of DHS’s EA Center of Excellence, the application and technical components were addressed by this center, which found that US-VISIT was in compliance. The Center of Excellence supports the Enterprise Architecture Board in reviewing component documentation. The purpose of the Board is to ensure that investments are aligned with the DHS EA.
Based on its review, the DHS Enterprise Architecture Board recommended that the US-VISIT program be given conditional approval to proceed for investment, provided that the program resubmit its documentation upon completion of its strategic plan, which is anticipated in January 2005. DHS has not yet provided us with sufficient documentation to allow us to understand DHS architecture compliance methodology and criteria, or verifiable analysis justifying the conditional approval.

Objective 1: Legislative Conditions, Condition 3

Condition 3. The plan, including related program documentation and program officials’ statements, satisfies the condition that it comply with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government. The plan provides for satisfying this condition, in part, by describing efforts to develop Software Engineering Institute (SEI) Software Acquisition Capability Maturity Model (SA-CMM) key process areas, such as requirements development and management and contract tracking and oversight. The plan also states that the program intends to achieve SA-CMM Level 2 by establishing a process improvement program based on SEI-identified industry best practices. As part of establishing this program, US-VISIT has developed a draft process improvement plan that specifies process improvement goals, objectives, assumptions, and risks, and which describes a process improvement time line and phase methodology. If these processes are implemented effectively, they will help US-VISIT meet federal acquisition rules, requirements, and guidelines and comply with systems acquisition management practices.

The SA-CMM ranks organizational maturity according to five levels. Maturity levels 2 through 5 require verifiable existence and use of certain key process areas.

Objective 1: Legislative Conditions, Condition 4

Condition 4.
The plan, including related program documentation and program officials’ statements, partially satisfies the requirement that it be reviewed and approved by the DHS Investment Review Board (IRB), the Secretary of Homeland Security, and OMB. The DHS Under Secretary for Management reviewed and approved the fiscal year 2005 expenditure plan on October 14, 2004, and OMB approved the plan on October 15, 2004. According to the US-VISIT Budget and Finance Director, the IRB reviewed the fiscal year 2005 expenditure plan but did not approve it because DHS management determined that review of the expenditure plan was not in the scope of the IRB review process. According to DHS Delegation Number 0201.1, the Secretary of Homeland Security delegated authority to the Under Secretary for Management for, among other things, the budget, appropriations, and expenditure of funds.

Objective 1: Legislative Conditions, Condition 5

Condition 5. The plan satisfies the requirement that it be reviewed by GAO. Our review was completed on November 23, 2004.

Objective 2: Open Recommendations, Recommendation 1

Open Recommendation 1: Develop a system security plan and privacy impact assessment.

Security Plan. US-VISIT has developed a security plan. OMB and the National Institute of Standards and Technology (NIST) have issued security planning guidance requiring, in part, the completion of system security plans that (1) provide an overview of the system security requirements, (2) include a description of the controls in place or planned for meeting the security requirements, and (3) delineate roles and responsibilities of all individuals who access the system. US-VISIT Program, Security Plan for US-VISIT Program Version 1.1 (Sept. 13, 2004). OMB Circular A-130, Revised (Transmittal Memorandum No. 4), Appendix III, Security of Federal Automated Information Resources (Nov.
28, 2000) and NIST, Guide for Developing Security Plans for Information Technology Systems, NIST Special Publication 800-18 (December 1998).

According to the guidance, the plan should also describe the methodology used to identify system threats and vulnerabilities and to assess risks, and it should include the date the assessment was conducted. If no system risk assessment has been completed, the plan is to include a milestone date for completion. The US-VISIT security plan provides an overview of the system security requirements, describes the controls in place or planned for meeting those requirements, and references the applicable documents that contain roles and responsibilities for the US-VISIT component systems. However, the plan states that although a security risk assessment on the US-VISIT program will be completed in accordance with NIST guidelines, it has not yet been completed, and the plan does not indicate a date for doing so.

Privacy Impact Assessment. The US-VISIT program has conducted a privacy impact assessment for Increment 2, and according to the US-VISIT Privacy Officer, a privacy impact assessment will be completed for the exit portion of Increment 1 in early 2005. According to OMB guidance, the depth and content of such an assessment should be appropriate for the nature of the information to be collected and the size and complexity of the system involved. The assessment should also, among other things, (1) be updated when a system change creates new privacy risk, (2) ensure that privacy is addressed in the documentation related to system development, (3) address the impact the system will have on an individual’s privacy, (4) analyze the consequences of collection and flow of information, and (5) analyze alternatives to collection and handling as designed.
OMB, Guidance for Implementing the Privacy Provisions of the E-Government Act of 2002, OMB M-03-22 (Sept. 26, 2003). Objective 2: Open Recommendations Recommendation 1 The Increment 2 assessment satisfies some, but not all, of the above OMB guidance areas. To DHS’s credit, the assessment, which was completed in September 2004, states that the DHS Chief Privacy Officer directed that the assessment be updated as necessary to reflect future changes to Increment 2. The assessment also discusses the impact that Increment 2 will have on an individual’s privacy and analyzes the consequences of collection and flow of information. However, privacy is only partially addressed in the Increment 2 system documentation. For example, privacy is used in the Increment 2B cost-benefit analysis to evaluate the weighted risk of Increment 2B alternative solutions. Additionally, the ADIS functional requirements specify that access to information contained in the system, which is protected by the Privacy Act,30 must be limited to authorized users. However, the IDENT Server 2.0 requirements do not consider privacy at all. Additionally, the assessment’s only discussion of design is a statement that a major choice for US-VISIT was whether to develop an entirely new system, develop a largely new system, or build upon existing systems. The assessment does not analyze these options. The timing of the planned privacy impact assessment for the exit portion of Increment 1 is consistent with plans for completing the exit pilots. Objective 2: Open Recommendations Recommendation 2 Open Recommendation 2: Develop and implement a plan for satisfying key acquisition management controls—including acquisition planning, solicitation, requirements development and management, project management, contract tracking and oversight, evaluation, and transition to support—and implement the controls in accordance with SEI guidance. The US-VISIT program plans to achieve SEI SA-CMM Level 2 status in October 2006. 
According to SEI, a process improvement effort should involve building a process infrastructure, establishing current levels of process maturity, and completing an action plan. The plan should include, among other things, process improvement assumptions and risks, goals, objectives, and criteria for success. The US-VISIT Acquisition and Program Management Office (APMO) has initiated a process improvement program and drafted a process improvement plan. Objective 2: Open Recommendations Recommendation 2 The draft US-VISIT plan discusses assumptions, such as the improvement program being sponsored and supported by senior US-VISIT management, and risks, such as not meeting the process improvement time line if the process improvement effort is not fully staffed. The plan also lists both process improvement goals and short- and long-term objectives. However, the goals and objectives are generally not defined in measurable terms. For example, the plan identifies the following goal and objective: Goal: ensure that US-VISIT is in compliance with federal mandates, making future funding more likely. Objective: define a strategy for attaining SEI SA-CMM Level 2 as soon as possible within the existing constraints—limited contractor and government staff resources and centralized facility. The plan also does not address criteria for success. Objective 2: Open Recommendations Recommendation 2 APMO has developed processes or plans, some of which are approved and some of which are in draft, for all key process areas except “transition to support.”31 The Director of APMO could not say when APMO plans to develop the documentation for this key process area, but noted that US-VISIT is considering a transition from the SA-CMM to SEI’s Capability Maturity Model Integration (CMMI) model.32 No time line was provided as to when this decision might be made. 
The Director of APMO acknowledges that a transition to the CMMI will likely change the previously mentioned time line for CMM certification. The purpose of transition to support is to provide for the effective and efficient "handing off" of the acquired software products to the support organization responsible for software maintenance. CMU/SEI-2004-TR-001 (February 2004). Objective 2: Open Recommendations Recommendation 3 Open Recommendation 3: Ensure that future expenditure plans are provided to DHS’s House and Senate Appropriations Subcommittees on Homeland Security in advance of US-VISIT funds being obligated. On October 18, 2004, the President signed the Department of Homeland Security Appropriations Act, 2005, which included $340 million in fiscal year 2005 funds for the US-VISIT program.33 The act states that $254 million of the $340 million is subject to the expenditure plan requirement. On October 19, 2004, DHS provided its fiscal year 2005 expenditure plan to the Senate and House Appropriations Subcommittees on Homeland Security. Department of Homeland Security Appropriations Act, 2005, Pub. L. 108-334 (Oct. 18, 2004). Objective 2: Open Recommendations Recommendation 4 Open Recommendation 4: Ensure that future expenditure plans fully disclose US-VISIT system capabilities, schedule, cost, and benefits to be delivered. The expenditure plan identifies high-level capabilities by increments. However, the capabilities are not consistently presented. For example, in one section of the plan, Increment 2B capabilities are identified as "collect biometric data and verify identity at the 50 busiest land POEs," "develop global enrollment system capability," and "support facilities delivery." However, later in the plan, Increment 2B capabilities are identified as "Increment 1 functionality at the top 50 land POEs," "biometric data collection," and "infrastructure upgrades."
Objective 2: Open Recommendations Recommendation 4 Further, some of the capabilities are described in vague and ambiguous terms. For example, the plan describes such Increment 2C capabilities as "integration of Border Crossing Cards with US-VISIT"; "test, model, and deploy technology to preposition biographic and biometric data of enrolled travelers"; and "desktop upgrades." The plan identifies specific milestones for some increments, but not for others. For example, it states that Increment 2B is to be implemented by December 31, 2004, and Increment 3 by December 31, 2005. However, it states that Increment 1 exit and Increment 2C are to be implemented in fiscal year 2005. Objective 2: Open Recommendations Recommendation 4 The plan identifies the amounts budgeted for each increment for fiscal years 2003 through 2005. For example, the plan states that US-VISIT plans to obligate $55 million in fiscal year 2005 funds for Increment 2C. However, the plan does not associate the $55 million with specific Increment 2C capabilities and benefits. Rather, it states that this amount will be used to support Increment 2C by funding the installation of technology in entry and exit lanes at land borders and supporting facility delivery. Further, the plan does not identify any estimated nongovernmental costs, such as the social costs associated with any potential economic impact at the border. Objective 2: Open Recommendations Recommendation 4 The plan identifies several benefits and associates these benefits with increments.
For example, for Increment 1, the plan identifies such benefits as prevention of entry of high-threat or inadmissible individuals through improved and/or advanced access to data before the foreign national’s arrival, improved enforcement of immigration laws through improved data accuracy and completeness, reduction in foreign nationals remaining in the country under unauthorized circumstances, and reduced threat of terrorist attack and illegal immigration through improved identification of national security threats and inadmissible individuals. As we previously reported,34 these benefits were identified in the fiscal year 2004 expenditure plan, although they were not associated with Increment 1. GAO, Homeland Security: First Phase of Visitor and Immigration Status Program Operating, but Improvements Needed, GAO-04-586 (Washington, D.C.: May 11, 2004). Objective 2: Open Recommendations Recommendation 4 Further, the fiscal year 2004 plan included planned metrics for the first two benefits identified above and stated that US-VISIT was developing metrics for measuring the projected benefits, including baselines by which progress can be assessed. However, the fiscal year 2005 plan does not include any information on these metrics or on progress against any of the benefits. The fiscal year 2005 plan again states that performance measures are still under development. While the plan does not associate any measures with the defined benefits, it does identify several measures and links them to the US-VISIT processes—pre-entry, entry, status management, exit, and analysis. The plan also identifies examples of how US-VISIT is addressing its four stated goals. The examples, however, largely describe US-VISIT functions rather than measures of goal achievement. 
For example, in support of the stated goal of ensuring the integrity of our immigration system, the plan states that through US-VISIT, officers at primary inspection are able to instantly search databases of known criminals and known and suspected terrorists. It does not, however, identify how this ensures immigration system integrity. Objective 2: Open Recommendations Recommendation 5 Open Recommendation 5: Ensure that future expenditure plans fully disclose how the US-VISIT acquisition is being managed. The expenditure plan describes some activities being employed to manage the US-VISIT acquisition. For example, the plan describes the US-VISIT governance structure, as well as the program office organizational structure and staffing levels. The plan also describes certain management processes currently being used. For example, the plan states that US-VISIT program officials hold formal weekly meetings to discuss program risks/issues, schedule items, and critical path items. In addition, it states that formal points of contact for risk issues have been designated across the Increment Integrated Project teams and the US-VISIT program organization, and that US-VISIT is establishing a formal risk review board to review and manage risk. However, the plan does not describe how other important aspects of the program are being managed, several of which are discussed in this briefing. For example, it does not describe how testing, system capacity, and systems configuration are being managed. Objective 2: Open Recommendations Recommendation 6 Open Recommendation 6: Ensure that human capital and financial resources are provided to establish a fully functional and effective program office. DHS established the US-VISIT program office in July 2003 and determined the office’s staffing needs to be 115 government and 117 contractor personnel. As of October 2004, DHS had filled 59 of the 115 government positions.
Of those positions that have not been filled, 5 have reassignments in progress and 51 have competitive announcements pending. According to US-VISIT, about half of these positions are to be filled when security clearances are completed. In addition, US-VISIT has changed its organizational structure, and some positions were moved to other offices within US-VISIT. For example, the number of positions in the Office of Mission Operations Management decreased from 23 to 18, and the number of positions in the Office of Chief Strategist increased from 10 to 14. Also, the number of positions in the Office of Administration and Management—now called the Office of Administration and Training—increased from 10 to 11. Objective 2: Open Recommendations Recommendation 6 The graphic on the next page shows the US-VISIT program office organization structure and functions, the number of positions needed by each office, and the number of positions filled. This graphic reflects the recent changes to the US-VISIT organizational structure. Objective 2: Open Recommendations Recommendation 6 In addition to the 115 government staff that were anticipated, the program anticipated 117 contractor support staff. As of November 2004, program officials told us they had filled 88 of these 117 positions. The expenditure plan also states that DHS has budgeted $83 million to maintain the program management structure and baseline operations, including, among other things, salaries and benefits for government full-time equivalents, personnel relocation costs, rent, and supplies. Objective 2: Open Recommendations Recommendation 7 Open Recommendation 7: Clarify the operational context in which US-VISIT is to operate. DHS is in the process of defining the operational context in which US-VISIT is to operate. 
In September 2003, DHS released version 1.0 of its enterprise architecture.35 We reviewed the initial version of the architecture and found that this architecture was missing, either partially or completely, all the key elements expected in a well-defined architecture, such as descriptions of business processes, information flows among these processes, and security rules associated with these information flows.36 Since we reviewed version 1.0, DHS has drafted version 2.0 of its architecture. We have not reviewed the draft, but DHS EA program officials told us this version focuses on departmental operations, and that later versions will incrementally focus on the national homeland security picture. This is important to the US-VISIT operational context because US-VISIT is a governmentwide program, including entities outside DHS, such as the Departments of State and Justice. Department of Homeland Security Enterprise Architecture Compendium Version 1.0 and Transitional Strategy. GAO, Homeland Security: Efforts Under Way to Develop Enterprise Architecture, but Much Work Remains, GAO-04-777 (Washington, D.C.: Aug. 6, 2004). Objective 2: Open Recommendations Recommendation 8 Open Recommendation 8: Determine whether proposed US-VISIT increments will produce mission value commensurate with cost and risks. US-VISIT developed a cost-benefit analysis (CBA) for Increment 2B, dated June 11, 2004. However, the CBA’s treatment of both benefits and costs raises several issues, making it unclear whether Increment 2B will produce mission value commensurate with cost and risks. First, the CBA primarily addresses government costs and is silent on some potential nongovernmental costs. For example, the CBA does not consider potential social costs like the economic impact on border communities. Objective 2: Open Recommendations Recommendation 8 Second, the CBA identifies, but does not quantify, operational performance benefits, such as improvement of traveler identification and validation of traveler documentation.
Moreover, the CBA does not explain why these benefits cannot be quantified. Also, the CBA states that none of the proposed alternatives result in a positive net present value or return on investment, which it attributes to the limited scope of Increment 2B. Third, the CBA includes three alternatives and identifies alternative 3 as the preferred alternative. However, US-VISIT is not pursuing alternative 3, but rather is pursuing an alternative more aligned with alternative 2. According to the Program Director, this is because alternative 3 was considered too ambitious to meet the statutory requirement that US-VISIT be implemented at the 50 busiest land POEs by December 31, 2004. Objective 2: Open Recommendations Recommendation 9 Open Recommendation 9: Define US-VISIT program office positions, roles, and responsibilities. US-VISIT has developed descriptions for positions within each office. In addition, US-VISIT has worked with the Office of Personnel Management (OPM) to draft a set of core competencies that define the knowledge, skills, abilities, and other characteristics (competencies) needed for successful employee performance. According to US-VISIT’s draft Human Capital Plan, these core competencies will form the foundation for recruitment and selection, training and development, and employee performance evaluations. Currently, US-VISIT is using some of these draft core competencies in its employee performance appraisal process. Objective 2: Open Recommendations Recommendation 10 Open Recommendation 10: Develop and implement a human capital strategy for the US-VISIT program office that provides for staffing positions with individuals who have the appropriate knowledge, skills, and abilities. The US-VISIT program office awarded a contract to OPM to develop a draft Human Capital Plan. Our review of the draft plan showed that OPM developed a plan for US-VISIT that employed widely accepted human capital planning tools and principles. 
OPM’s recommendations to US-VISIT include the following: Develop and adopt a competency-based system and a corresponding human capital planning model that illustrate the alignment of US-VISIT’s mission with individual and organizational performance. Conduct a comprehensive workforce analysis to determine diversity trends, retirement and attrition rates, and mission-critical and leadership competency gaps. Objective 2: Open Recommendations Recommendation 10 Develop a leadership competency model and establish a formal leadership development program to ensure continuity of leadership. Link the competency-based human capital management system to all aspects of human resources, including recruitment, assessment, training and development, and performance. The draft human capital plan includes an action plan that identifies activities, proposed completion dates, and the office (OPM or US-VISIT) responsible for completing these activities. According to OPM, it has completed its work under the draft plan. As of October 2004, US-VISIT had completed some of the activities called for in the draft plan. For example, US-VISIT’s Office of Administration and Training has designated a liaison responsible for ensuring alignment between DHS and US-VISIT human capital policies. However, it remains to be seen how full implementation of the plan will impact the US-VISIT program office. For example, the workforce analysis called for in the draft plan could result in a change in the number and competencies of the staff needed to implement US-VISIT. Objective 2: Open Recommendations Recommendation 11 Open Recommendation 11: Develop a risk management plan and report all high risks and their status to the executive body on a regular basis. The US-VISIT program office has developed a risk management plan (dated June 2, 2004) and process (dated June 9, 2004). The plan addresses, among other things, the process for identifying, analyzing, mitigating, tracking, and controlling risks. 
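A risk management plan of the kind described here implies a tracking record for each risk and a filter for the high-priority items that merit escalation. A minimal sketch (the Python class, field names, and sample entries are our illustration, not the US-VISIT database schema):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    # Attributes mirror what a risk-tracking database typically captures:
    # a description, a priority level, and a mitigation strategy.
    description: str
    priority: str          # "high", "medium", or "low"
    mitigation: str

def high_risks(risks: list[Risk]) -> list[Risk]:
    """Return the risks a review board would expect to see escalated."""
    return [r for r in risks if r.priority == "high"]

# Hypothetical register entries for illustration:
register = [
    Risk("Process improvement effort understaffed", "high", "Add contractor staff"),
    Risk("Minor slip in survey report schedule", "low", "Track weekly"),
]
print(len(high_risks(register)))  # 1
```

The point of the filter is that escalation criteria become mechanical once priority is recorded consistently.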
As part of its process, US-VISIT has developed a risk management database. The database includes, among other things, a description of the risk, its priority (e.g., high, medium, low), and mitigation strategy. US-VISIT has also established the governance structure for managing risks. The governance structure includes three primary groups—the Risk Review Board, Risk Review Council, and Risk Management Team. Objective 2: Open Recommendations Recommendation 11 The Risk Review Board provides overall decision making, communication, and coordination in regard to risk activities. The board is composed of senior-level staff, such as the program director and functional area directors. The Risk Review Council reviews initially reported risks, validates their categorizations, and ensures that a mitigation approach has been developed. It also serves as a filter for the Board by deciding which risks can be mitigated without being elevated to the Board. The Risk Management Team provides risk management expertise and institutional knowledge. This group is staffed by APMO. According to the Director, APMO, US-VISIT has not reported high risks beyond the Review Board. Objective 2: Open Recommendations Recommendation 12 Open Recommendation 12: Define performance standards for each US-VISIT increment that are measurable and reflect the limitations imposed by relying on existing systems. Available documentation shows that some technical performance measures for Increments 1 and 2B have been defined. For example: Availability.37 The system will be available 99.5 percent of the time. Timeliness.38 Login, visa query, and TECS/NCIC default query will be less than 5 seconds; TECS optional queries will be less than 60 seconds; and IDENT watch list queries will be less than 10 seconds (matcher time only). Output quantity.39 70,000 primary inspection transactions per user, per day, with a maximum of 105,000 transactions during peak times. 
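Measurable thresholds like those just listed lend themselves to automated monitoring. A minimal sketch, in which the metric names and sample observations are our own; only the numeric limits come from the documentation cited above:

```python
# Floors and ceilings drawn from the Increment 1/2B performance measures;
# the dictionary keys are hypothetical labels, not US-VISIT identifiers.
THRESHOLDS = {
    "availability_pct": 99.5,            # system available 99.5% of the time
    "default_query_secs": 5.0,           # login, visa query, TECS/NCIC default query
    "tecs_optional_query_secs": 60.0,    # TECS optional queries
    "ident_watchlist_query_secs": 10.0,  # IDENT watch list queries (matcher time)
}

def meets_threshold(metric: str, observed: float) -> bool:
    """Availability must meet or exceed its floor; response times
    must stay at or under their ceilings."""
    limit = THRESHOLDS[metric]
    if metric == "availability_pct":
        return observed >= limit
    return observed <= limit

# Hypothetical observations:
print(meets_threshold("availability_pct", 99.7))            # True
print(meets_threshold("ident_watchlist_query_secs", 12.4))  # False
```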
37 The time the system is operating satisfactorily, expressed as a percentage of time that the system is required to be operational. 38 The time needed to perform a unit of work correctly and on time. 39 The number of transactions processed. Objective 2: Open Recommendations Recommendation 12 However, other measures, such as reliability,40 resource utilization,41 and scalability,42 are not defined in the documentation. Further, the documentation does not contain sufficient information to determine the limitations imposed by US-VISIT’s reliance on existing systems that have less demanding performance requirements, such as TECS availability of 98.0 percent. Such information would include, for example, the processing sequencing and dependencies among the existing systems. 40 The probability that a system, including all hardware, firmware, and software, will satisfactorily perform the task for which it was designed. 41 A ratio representing the amount of time a system or component is busy divided by the time it is available. 42 Ability of a system to function well when it is changed in size or volume. Objective 2: Open Recommendations Recommendation 13 Open Recommendation 13: Develop and approve test plans before testing begins. These test plans should (1) specify the test environment; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between test cases and the requirements to be verified by the testing. According to the US-VISIT Systems Assurance Director, the Increment 2B system acceptance test (SAT) plan was approved during an October 15, 2004, test readiness review (TRR). However, no documentation was provided that explicitly indicated the approval of the plan, and the results of the TRR were not approved until October 28, 2004, which is 11 days after the date we were told that acceptance testing began.
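Traceability between test cases and requirements, the fourth element a test plan should provide, can be checked mechanically once the mapping is recorded. A sketch, with invented requirement and test-case identifiers:

```python
# A minimal requirements-to-test-case traceability matrix. The IDs and
# mappings below are illustrative; only the idea comes from the briefing.
trace_matrix = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": [],            # no test case maps to this requirement
    "REQ-003": ["TBD"],       # placeholder rather than an actual test case
}

def untraced(matrix: dict[str, list[str]]) -> list[str]:
    """Return requirements with no concrete test case covering them."""
    return [req for req, cases in matrix.items()
            if not cases or all(tc == "TBD" for tc in cases)]

print(untraced(trace_matrix))  # ['REQ-002', 'REQ-003']
```

A check like this surfaces gaps of the kind the analysis below describes (requirements without test cases, or mapped only to placeholders) before testing begins.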
Objective 2: Open Recommendations Recommendation 13 The test plan does not fully address the test environment. For example, the plan does not describe the scope, complexity, and completeness of the test environment or identify necessary training. The plan does include generic descriptions of testing hardware, such as printers and card readers. The plan does not include descriptions of tests to be performed. However, officials from the IT Management Office provided us with other documentation describing the tests to be performed that included expected outputs, but it did not include inputs or controls. The plan does not provide test procedures to be followed in conducting the tests. Objective 2: Open Recommendations Recommendation 13 The plan does not provide traceability between test cases and the requirements to be verified by the testing. Our analysis of the 116 requirements identified in the consolidated requirements document showed that 39 requirements mapped to test cases that lacked sufficient detail to determine whether the test cases are testable, 15 requirements did not have test cases, 2 requirements were labeled “not testable,” and 1 requirement was identified as “TBD,” but was mapped to an actual test case. Open Recommendation 14: Ensure the independence of the Independent Verification and Validation (IV&V) contractor. According to the US-VISIT Program Director, the US-VISIT IT Management Office is developing high-level requirements for IV&V. In particular, it is developing a strategy and statement of work for acquiring an IV&V contractor. Objective 2: Open Recommendations Recommendation 15 Open Recommendation 15: Implement effective configuration management practices, including establishing a US-VISIT change control board to manage and oversee system changes. 
According to US-VISIT’s draft configuration management (CM) plan, dated July 2004, and US-VISIT officials, US-VISIT has not yet developed or implemented US- VISIT-level configuration management practices or a change control board. In the interim, for Increments 1, 2A and 2B, US-VISIT continues to follow relevant IDENT, ADIS, and TECS configuration management procedures, including applicable change control boards and system change databases. According to the US-VISIT System Assurance Director, for Increment 2B, US-VISIT is using the TECS change requests database for US-VISIT change requests, including those for IDENT and ADIS. Objective 2: Open Recommendations Recommendation 15 The draft configuration management plan describes key configuration activities that are to be defined and implemented, including (1) defining and identifying processes and products to be controlled; (2) evaluating, coordinating, and approving/rejecting changes to controlled items; (3) recording and monitoring changes to the controlled items; and (4) verifying that the controlled items meet their requirements and are accurately documented. The draft plan also proposes a governance structure, including change control boards. The proposed governance structure includes the following: A US-VISIT CM team is responsible for implementing, controlling, operating, and maintaining all aspects of configuration management and administration for US-VISIT. The team is to be composed of a CM manager, CM team staff, DHS system CM liaisons, prime integrator CM liaison, and testers and users. A change control board is to serve as the ultimate authority on changes to any US-VISIT system baseline, decide the content of system releases, and approve the schedule of releases. Objective 2: Open Recommendations Recommendation 16 Open Recommendation 16: Identify and disclose management reserve funding embedded in the fiscal year 2004 expenditure plan to the Appropriations Subcommittees. 
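A change control board's authority over baseline changes can be thought of as a simple state machine over change-request statuses. The statuses and transitions below are our illustration; the draft CM plan does not define them:

```python
# Allowed status transitions for a change request under CCB control.
# Names are hypothetical; the draft CM plan does not specify a workflow.
TRANSITIONS = {
    "submitted": {"under_review"},
    "under_review": {"approved", "rejected"},
    "approved": {"released"},
}

def advance(status: str, new_status: str) -> str:
    """Move a change request to a new status, enforcing board ordering."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot move from {status!r} to {new_status!r}")
    return new_status

s = advance("submitted", "under_review")
s = advance(s, "approved")
print(s)  # approved
```

Encoding the workflow this way prevents a change from reaching a release without passing through review and approval.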
The US-VISIT program office reported the management reserve funding of $33 million for fiscal year 2004 to the Appropriations Subcommittees. According to the Deputy Program Manager, US-VISIT provided this information in a briefing to the Subcommittee staff. Open Recommendation 17: Ensure that all future US-VISIT expenditure plans identify and disclose management reserve funding. The fiscal year 2005 expenditure plan specified management reserve funding of $23 million. Objective 2: Open Recommendations Recommendation 18 Open Recommendation 18: Assess the full impact of Increment 2B on land POE workforce levels and facilities, including performing appropriate modeling exercises. US-VISIT conducted an Increment 2B baseline analysis to help determine the impact of Increment 2B on workforce and travelers. The analyses included three sites and addressed the Form I-94 issuance process and the Form I-94W43 process in secondary inspection. According to program officials, additional staff will not be needed to implement 2B at the border. Instead, US-VISIT has developed a plan to train existing Customs and Border Protection officers on the collection of traveler entry data, has completed the “train the trainer” classes at the training academy, and has begun training at three land POEs. I-94W is used for foreign nationals from visa waiver countries. Objective 2: Open Recommendations Recommendation 18 In addition, US-VISIT has conducted space utilization surveys at all of the 166 land POEs and completed survey reports at 16 of the 50 busiest land POEs. US-VISIT expects to have completed survey reports for the remaining 34 busiest land POEs during the fall of 2004. According to the 16 completed survey reports, existing traffic at most of these facilities was at or near capacity and the facilities had no room for expansion. 
However, US-VISIT officials said that Increment 2B will not require expansion at any facilities; rather, it will require mostly minor modifications, such as the installation of new or updated countertops and electrical power outlets to accommodate new equipment. Objective 2: Open Recommendations Recommendation 19 Open Recommendation 19: Develop a plan, including explicit tasks and milestones, for implementing all our open recommendations and periodically report to the DHS Secretary and Under Secretary on progress in implementing this plan; also report this progress, including reasons for delays, in all future US-VISIT expenditure plans. The US-VISIT program office has developed a report for tracking the status of our open recommendations. This report is shared with the program office director, but according to the Deputy Program Director, it is not shared with the Secretary and Under Secretary. In addition, he stated that the program office meets weekly with the Under Secretary, but the status of our recommendations is not discussed. The fiscal year 2005 expenditure plan summarizes our recommendations, but it does not identify tasks and milestones for implementing them or discuss progress in implementing them. Observation 1: The program office has acquired the services of a prime integration contractor to augment its ability to complete US-VISIT. DHS reported in its fiscal year 2004 US-VISIT expenditure plan that it intended to award a contract by the end of May 2004 to a prime contractor for integrating existing and new business processes and technologies. US-VISIT awarded the contract on time. Specifically, on May 28, 2004, DHS awarded its prime contract to Accenture LLP and its related partners. Objective 3: Observations Progress Observation 2: The fiscal year 2005 Expenditure Plan does not describe progress against commitments (e.g., capabilities, schedule, cost, and benefits) made in previous plans.
Given the immense importance of the US-VISIT program to the security of our nation’s borders and the need to acquire and implement it efficiently and effectively, the Congress has placed limitations on the use of appropriations for the US-VISIT program until DHS submits periodic expenditure plans. As we had previously reported,44 to permit meaningful congressional oversight, it is important that expenditure plans describe how well DHS is progressing against the commitments made in prior expenditure plans. GAO, Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning, GAO-03-563 (Washington, D.C.: June 9, 2003). Objective 3: Observations Progress The fiscal year 2005 expenditure plan does not describe progress against commitments made in prior expenditure plans. For example, in its fiscal year 2004 expenditure plan, US-VISIT committed to, among other things, analyzing, field testing, and initiating deployment of alternative approaches for capturing biometrics during the exit process at air and sea POEs and implementing entry and exit capabilities at the 50 busiest land POEs by December 31, 2004, including delivering the capability to read radio frequency enabled documents at the 50 busiest land POEs for both entry and exit processes. The fiscal year 2005 plan does not address progress against these commitments. For example, the plan does not describe the status of the exit pilot testing or deployment, such as whether it has met its target schedule or whether the schedule has slipped. While the plan does state that US-VISIT will expand its pilot sites during the summer and fall of 2004 and deploy the exit solution during fiscal year 2005, it does not explain the reason for the change or its potential impact. The following graphic provides our analysis of the commitments made in the fiscal year 2003 and 2004 plans, compared with currently reported and planned progress. 
Further, the fiscal year 2004 plan states that $45 million in fiscal year 2004 funds were to be used for exit activities. However, the fiscal year 2005 plan states that $73 million in fiscal year 2004 funds were to be used for exit activities; it does not highlight this difference or address the reason for the change in budget amounts. Also, the fiscal year 2005 expenditure plan includes benefits stated in the fiscal year 2004 plan, but it does not report progress in achieving those benefits, even though, in the fiscal year 2004 plan, US-VISIT stated that it was developing metrics for measuring the projected benefits, including baselines by which progress could be assessed. The fiscal year 2005 plan again states that performance measures are under development. This information is needed to allow meaningful congressional oversight of plans and progress.

Observation 3: The exit capability alternatives are faced with a compressed time line, missed milestones, and potentially reduced scope.

On January 5, 2004, US-VISIT deployed an initial exit capability in pilot status to two POEs. At that time, the Program Director stated that US-VISIT was developing other exit alternatives, along with criteria for evaluating and selecting one or more of the alternatives by December 31, 2004.

Planned evaluation time line compressed. In May 2004, US-VISIT issued an Exit Pilot Evaluation Execution Plan. This plan states that three alternative exit solutions are to be evaluated while deployed to a total of 15 air and sea POEs. The plan allotted about 3 months to conduct the evaluation and report the results. Specifically, the deployment was to be completed by August 1, 2004, and all exit pilot evaluation tasks were to be completed by September 30, 2004, with an evaluation report finished by October 28, 2004.
However, according to the exit master schedule provided to us on October 26, 2004, the three alternatives were scheduled to be fully deployed by October 29, 2004, and all evaluation tasks are to be completed on December 6, 2004, with delivery of the evaluation report on December 30, 2004, which is about a 2-month evaluation and reporting period. The following graphic illustrates how the exit pilot schedule has been shortened from the originally planned 3 months to the currently planned 2 months and compares the original plan with the current plan.

As of November 8, 2004, the three alternatives were deployed and operational in only 5 of the 15 POEs that were to be operational by November 1. According to the Exit Implementation Manager, all ports had received and installed the exit equipment. However, the requisite number of contract employees (WSAs) is not yet available to make all 15 POEs operational because of delays in DHS granting security clearances to the attendants. The manager stated that a recent meeting with DHS security officials has helped to improve the pace of finalized security clearances, but the manager did not know when the remaining 10 ports would become operational.

The Evaluation Execution Plan describes the evaluation methodology that is to be employed for the three alternatives. An important element of that methodology is the targeted sample size per port. For each port, a targeted number of outbound passengers will be processed by the three alternatives, and data will be gathered on these encounters. The plan’s specified sample sizes are described as sufficient to achieve a 95 percent confidence level with a margin of error of 5 percent.
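As a rough illustration of what such a target implies, the standard sample-size formula for estimating a proportion can be sketched as follows. This is a hedged sketch of the general statistical formula only, not the plan's actual methodology; the worst-case proportion p = 0.5 and the absence of a finite-population correction are our assumptions, not details taken from the Evaluation Execution Plan.

```python
import math
import statistics

def required_sample_size(confidence=0.95, margin_of_error=0.05, p=0.5):
    """Minimum simple-random-sample size for estimating a proportion.

    Assumes the worst-case proportion p = 0.5 (maximum variance) and a
    large outbound-traveler population (no finite-population correction);
    both are illustrative assumptions.
    """
    # Two-sided z-score for the stated confidence level (about 1.96 for 95%).
    z = statistics.NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# 95 percent confidence with a 5 percent margin of error.
print(required_sample_size())  # 385
```

Under these assumptions, each port would need on the order of a few hundred processed travelers, which is consistent with the program's stated need to add evaluation personnel to hit the targets within the compressed time frame.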
According to the Exit Implementation Manager, the desired sample size will be collected at each port, despite the compressed time frame for conducting the evaluations, by adding additional personnel to the evaluation teams if needed. These changing facts and circumstances surrounding the exit pilot introduce additional risk concerning US-VISIT’s delivery of promised capabilities and benefits on time and within budget.

On November 12, 2004, US-VISIT issued a revised draft Exit Pilot Evaluation Plan. However, the plan does not address any of the concerns cited, in part because it does not include a planned completion date. Instead, the plan states that the evaluation period is planned for October 31, 2004, until completion. Without a planned completion date, it is not possible to determine the length of the evaluation period or any impact that the length of the evaluation may have on the evaluation’s scope.

Observation 4: US-VISIT and Automated Commercial Environment (ACE) collaboration is moving slowly.

The US-VISIT EA alignment analysis document describes a port of entry/exit management conceptual project that is to establish uniform processes at POEs and the capability to inspect and categorize people and goods and act upon the information collected. The document recognizes that both US-VISIT and ACE support this project because they have related missions and a planned presence at the borders, including the development and deployment of infrastructure and technology. We recognized the relationships between these two programs in February 2003, when we recommended that future ACE expenditure plans specifically address any proposals or plans, whether tentative or approved, for extending and using ACE infrastructure to support other homeland security applications. (ACE is a new trade processing system planned to support the movement of legitimate imports and exports and strengthen border security.)
GAO, Customs Service Modernization: Automated Commercial Environment Progressing, but Further Acquisition Management Improvements Needed, GAO-03-406 (Washington, D.C.: Feb. 28, 2003).

The two programs have identified three areas of collaboration: people, processes, and technology. This includes establishing a team to review deployment schedules and establishing a team and process to review and normalize business requirements. In August 2004, the US-VISIT and ACE programs tasked their respective contractors to form collaboration teams to address the three areas. Nine teams have been formed: business; organizational change management; facilities; information and data; technology; privacy and security; deployment, operations, and maintenance; and program management.

The teams met in September 2004 to develop team charters, identify specific collaboration opportunities, and develop time lines and next steps. In October 2004, US-VISIT and ACE contractors met with US-VISIT and ACE management to present their preliminary results. According to a US-VISIT official, the team charters have not yet been formally approved.

In the approximately 20 months since we recommended steps to promote close collaboration between these two programs, explicit plans have not been developed, nor actions taken, to understand US-VISIT/ACE dependencies and relationships so that these can be exploited to optimize border operations. During this time and in the near future, the management of both programs has been and will be making and acting on decisions to further define, design, develop, and implement their respective programs. The longer it takes for the programs to exploit their relationships, the more rework will be needed at a later date to integrate the two programs. According to the US-VISIT Program Director, the pace of collaboration activities has been affected by scheduling and priority conflicts, as well as staff availability.
Observation 5: US-VISIT system capacity is being managed in a compartmentalized manner. Capacity management is intended to ensure that systems are properly designed and configured for efficient performance and have sufficient processing and storage capacity for current, future, and unpredictable workload requirements. Capacity management includes (1) demand forecasting, (2) capacity planning, and (3) performance management. Demand forecasting ensures that the future business requirement workloads are considered and planned. Capacity planning involves determining current and future resource requirements and ensuring that they are acquired and implemented in a timely and cost-effective manner. Performance management involves monitoring the performance of system resources to ensure required service levels are met. The US-VISIT system, as noted earlier, is actually a system made up of various pre-existing (or legacy) systems that are operated by different DHS organizational components and that have been enhanced and interfaced. Objective 3: Observations Capacity Management Currently, DHS does not have a capacity management program. Instead, the US- VISIT IT Management Office relies on the performance management activities of the respective pre-existing DHS systems. For example: A quarterly report provided by the Customs and Border Protection Systems Engineering Branch Performance Engineering Team tracks such system measures as transaction volume, central processing unit utilization, and workload growth. Immigration and Customs Enforcement tracks such system measures as hourly and daily transaction rates and response times. According to the program office, the system-of-systems nature of US-VISIT does not lend itself to easily tracking systemwide performance. Nevertheless, program officials told us that the US-VISIT program has tasked two of its contractors with developing a comprehensive performance management and capacity planning effort. 
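The compartmentalized, component-by-component tracking described above can be contrasted with a systemwide rollup in a minimal sketch. The component names, thresholds, and measurements below are illustrative assumptions, not actual US-VISIT systems or service levels; the average response-time measure simply mirrors the kinds of metrics the report says the component organizations track.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ComponentMonitor:
    """Performance tracking for one pre-existing (legacy) component system.

    Names and thresholds here are invented for illustration; they are not
    US-VISIT's actual components or required service levels.
    """
    name: str
    max_avg_response_secs: float       # required service level for this system
    observations: list = field(default_factory=list)

    def record(self, response_secs: float) -> None:
        self.observations.append(response_secs)

    def meets_service_level(self) -> bool:
        # Compare the average observed response time against the threshold.
        return bool(self.observations) and \
            mean(self.observations) <= self.max_avg_response_secs

# Each component organization monitors its own system in isolation ...
components = [ComponentMonitor("entry-lookup", 2.0),
              ComponentMonitor("biometric-match", 10.0)]
components[0].record(1.5)
components[0].record(1.9)
components[1].record(12.4)

# ... whereas a systemwide capacity management program would also roll the
# component results up into a single end-to-end view.
systemwide_ok = all(c.meets_service_level() for c in components)
print(systemwide_ok)  # False: one component exceeded its threshold
```

The point of the rollup line is that a system of systems can fail its end-to-end service level even when most individual components pass theirs, which is the risk of relying only on component-level performance management.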
Until this is developed, the program will continue to rely on component system performance management activities to ensure that US-VISIT system resources are sufficient to meet current workloads, which increases the risk that those resources may not adequately support US-VISIT mission needs.

Observation 6: The cost-estimating process used for Increment 2B did not follow some key best practices.

SEI recognizes the need for reliable cost-estimating processes in managing software-intensive system acquisitions. To this end, SEI has issued a checklist (Carnegie Mellon University Software Engineering Institute, A Manager’s Checklist for Validating Software Cost and Schedule Estimates, CMU/SEI-95-SR-004, January 1995) to help determine the reliability of cost estimates. Our analysis found that US-VISIT did not fully satisfy most of the criteria on SEI’s checklist. The US-VISIT Increment 2B estimate met two of the 13 checklist items that we evaluated, partially met six, and did not meet five. For example, US-VISIT provided no evidence that Increment 2B was appropriately sized. Specifically, costs related to development and integration tasks for the TECS, IDENT, and ADIS systems are specified, but estimated software lines of code to be reused, modified, added, or deleted are not. As another example, no one outside the US-VISIT program office reviewed and concurred with the cost-estimating categories and methodology. The following list summarizes the SEI criteria against which we assessed US-VISIT’s cost-estimating process for Increment 2B.

1. The objectives of the estimate are stated in writing.
2. The life cycle to which the estimate applies is clearly defined.
3. The task has been appropriately sized (e.g., software lines of code).
4. The estimated cost and schedule are consistent with demonstrated accomplishments on other projects.
5. A written summary of parameter values and their rationales accompanies the estimate.
6. Assumptions have been identified and explained.
7. A structured process, such as a template or format, has been used to ensure that key factors have not been overlooked.
8. Uncertainties in parameter values have been identified and quantified.
9. If a dictated schedule has been imposed, an estimate of the normal schedule has been compared to the additional expenditures required to meet the dictated schedule.
10. If more than one cost model or estimating approach has been used, any differences in results have been analyzed and explained.
11. Estimators independent of the performing organization concurred with the reasonableness of the parameter values and estimating methodology.
12. Estimates are current.
13. The results of the estimate have been integrated with project planning and tracking.

Without reliable cost estimates, the ability to make informed investment decisions and effectively measure progress and performance is reduced.

We recommend that DHS: describe in future expenditure plans how the program is progressing against the commitments that it made in prior expenditure plans; reassess its plans for deploying an exit capability to ensure that the scope of the exit pilot provides for adequate evaluation of alternative solutions and better ensures that the exit solution selected is in the best interest of the program; develop and implement processes for managing the capacity of the US-VISIT system; follow effective practices for estimating the costs of future increments; and make collaboration between the US-VISIT and ACE programs a priority matter, and report periodically to the Under Secretary on progress in doing so.

To accomplish our objectives, we performed the following tasks: We analyzed the expenditure plan against legislative conditions and other relevant federal requirements, guidance, and best practices to determine the extent to which the conditions were met.
We analyzed key acquisition management controls documentation and interviewed program officials to determine the status of our open recommendations. We analyzed supporting documentation and interviewed DHS and US-VISIT program officials to determine capabilities in key program management areas, such as enterprise architecture and capacity management. We analyzed Increment 2B systems and software testing documentation and compared them with relevant guidance to determine completeness. We attended program working group meetings. We assessed the reliability of US-VISIT’s Increment 2B cost estimate by selecting 13 criteria from the SEI checklist (Carnegie Mellon University Software Engineering Institute, A Manager’s Checklist for Validating Software Cost and Schedule Estimates, CMU/SEI-95-SR-004, January 1995) that, in our professional judgment, represent the minimum set of criteria necessary to develop a reliable cost estimate. We analyzed the Increment 2B cost-benefit analysis and supporting documentation and interviewed program officials to determine how the estimate was derived. We then assessed each of the criteria as satisfied (US-VISIT provided substantiating evidence for the criterion), partially satisfied (US-VISIT provided partial evidence, including testimonial evidence, for the criterion), or not satisfied (no evidence was found for the criterion). We did not review the State Department’s implementation of machine-readable, tamper-resistant visas that use biometrics. For DHS-provided data that our reporting commitments did not permit us to substantiate, we have made appropriate attribution indicating the data’s source. We conducted our work at US-VISIT program offices in Rosslyn, Virginia, from June 2004 through November 2004, in accordance with generally accepted government auditing standards.

Attachment 2: Recent US-VISIT Studies

Border Security: State Department Rollout of Biometric Visas on Schedule, but Guidance Is Lagging. GAO-04-1001.
Washington, D.C.: September 9, 2004.

Border Security: Joint, Coordinated Actions by State and DHS Needed to Guide Biometric Visas and Related Programs. GAO-04-1080T. Washington, D.C.: September 9, 2004.

Homeland Security: First Phase of Visitor and Immigration Status Program Operating, but Improvements Needed. GAO-04-586. Washington, D.C.: May 11, 2004.

DHS Office of Inspector General. An Evaluation of the Security Implications of the Visa Waiver Program. OIG-04-26. Washington, D.C.: April 2004.

Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-04-569T. Washington, D.C.: March 18, 2004.

Homeland Security: Risks Facing Key Border and Transportation Security Program Need to Be Addressed. GAO-03-1083. Washington, D.C.: September 19, 2003.

Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning. GAO-03-563. Washington, D.C.: June 9, 2003.

In addition to the individual named above, Barbara Collier, Neil Doherty, David Hinchman, James Houtz, Carolyn Ikeda, Anh Le, John Mortin, David Noone, Karen Richey, Karl Seifert, and Randolph Tekeley made key contributions to this report.

The Department of Homeland Security (DHS) has established a program, the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT), to collect, maintain, and share information, including biometric identifiers, on selected foreign nationals who travel to the United States. By congressional mandate, DHS is to develop and submit for approval an expenditure plan for US-VISIT that satisfies certain conditions, including being reviewed by GAO. Among other things, GAO was asked to determine whether the plan satisfied these conditions and to provide observations on the plan and DHS's program management.
DHS's fiscal year 2005 expenditure plan and related documentation at least partially satisfied all conditions established by the Congress, including meeting the capital planning and investment control requirements of the Office of Management and Budget (OMB). For example, DHS has developed a plan and a process for developing, implementing, and institutionalizing a program to manage risk. In its observations about the expenditure plan and DHS's management of the program, GAO recognizes accomplishments to date and addresses the need for rigorous and disciplined program practices. For example, US-VISIT has acquired the services of a prime integration contractor to augment its ability to complete US-VISIT. However, DHS has not employed rigorous, disciplined processes typically associated with successful programs, such as tracking progress against commitments. More specifically, the fiscal year 2005 plan does not describe progress against commitments made in previous plans (e.g., capabilities, schedule, cost, and benefits). According to GAO's analysis, delays have occurred in delivering capability to track the entry and exit of persons entering the United States at air, land, and sea ports of entry. Such information is essential for oversight. Additionally, the effort to pilot alternatives for delivering the capability to track the departure of persons exiting the United States is faced with a compressed time line, missed milestones, and potentially reduced scope. In particular, the pilot evaluation period has been reduced from 3 to 2 months, and as of early November 2004, the alternatives were deployed and operating in only 5 of the 15 ports of entry scheduled to be operational by November 1, 2004. According to US-VISIT officials, this is largely due to delays in DHS granting security clearances to the civilian employees who would operate the equipment at the ports of entry. 
These changing facts and circumstances surrounding the pilot introduce additional risk concerning US-VISIT's delivery of promised capabilities and benefits on time and within budget.
In October 1992, the Congress established SAMHSA to strengthen the nation’s health care delivery system for the prevention and treatment of substance abuse and mental illnesses. SAMHSA has three centers that carry out its programmatic activities: the Center for Mental Health Services, the Center for Substance Abuse Prevention, and the Center for Substance Abuse Treatment. (See table 1 for a description of each center’s purpose.) The centers receive support from SAMHSA’s Office of the Administrator; Office of Program Services; Office of Policy, Planning, and Budget; and Office of Applied Studies. The Office of Program Services oversees the grant review process and provides centralized administrative services for the agency; the Office of Policy, Planning, and Budget develops the agency’s policies, manages the agency’s budget formulation and execution, and manages agencywide strategic and program planning activities; and the Office of Applied Studies gathers, analyzes, and disseminates data on substance abuse practices in the United States, which includes administering the annual National Survey on Drug Use and Health—a primary source of information on the prevalence, patterns, and consequences of drug and alcohol use and abuse in the country. In fiscal year 2003, SAMHSA’s staff totaled 504 full-time-equivalent employees, a decrease from 563 in fiscal year 1999. Thirteen of the employees were in the Senior Executive Service, and the average grade of SAMHSA’s general schedule workforce was 12.5—up from 11.7 in fiscal year 1999. In addition, 25 of the employees were members of the U.S. Public Health Service Commissioned Corps. SAMHSA’s program staff are almost evenly divided among its three centers (see fig. 1), and all are located in the Washington, D.C., metropolitan area. SAMHSA’s budget increased from about $2 billion in fiscal year 1992 to about $3.1 billion in fiscal year 2003. 
SAMHSA uses most of its budget to fund grant programs that are managed by its three centers. (See fig. 2.) In fiscal year 2003, 68 percent of SAMHSA’s budget funded the Substance Abuse Prevention and Treatment Block Grant ($1.7 billion) and the Community Mental Health Services Block Grant ($437 million). The remaining portion of SAMHSA’s budget primarily funded other grants; $74 million (2.4 percent) of its fiscal year 2003 budget supported program management. SAMHSA’s major activity is to use its grant programs to help states and other public and private organizations provide substance abuse and mental health services. For example, the substance abuse block grant program gives all states a funding source for planning, carrying out, and evaluating substance abuse services. States use their substance abuse block grants to fund more than 10,500 community-based organizations. Similarly, the mental health block grant program supports a broad spectrum of community mental health services for adults with serious mental illness and children with serious emotional disorders. In December 2002, SAMHSA released for public comment its initial proposal for how it will transform the substance abuse and mental health block grants into performance partnership grants. In administering the block grants, the agency currently holds states accountable for complying with administrative and financial requirements, such as spending a specified percentage of funds on particular services or populations. According to SAMHSA’s proposal, the new grants will give states more flexibility to meet the needs of their population by removing certain spending requirements. At the same time, the grants will hold states accountable for achieving specific goals related to the availability and effectiveness of mental health and substance abuse services. 
For example, SAMHSA has proposed that it would waive the current requirement that a state use a certain percentage of its substance abuse block grant funds for HIV services if that state can show a reduction of HIV transmissions among the population with a substance abuse problem. The Children’s Health Act of 2000 required SAMHSA to submit a plan to the Congress by October 2002 describing the flexibility the performance partnership grants would give the states, the performance measures that SAMHSA would use to hold states accountable, the data that SAMHSA would collect from states, definitions of the data elements, obstacles to implementing the grants and ways to resolve them, the resources needed to implement the grants, and any federal legislative changes that would be necessary. In addition to the block grants that SAMHSA awards to all states, the agency awards grants on a competitive basis to a limited number of eligible applicants. These discretionary grants help public and private organizations develop, implement, and evaluate substance abuse and mental health services. In fiscal year 2003, the agency funded 73 discretionary grant programs, the largest of which was the $98.1 million Children’s Mental Health Services Program. This program helps grantees integrate and manage various social and medical services needed by children and adolescents with serious emotional disorders. Discretionary grant applications submitted to SAMHSA go through several stages of review. When SAMHSA initially receives grant applications, it screens them for adherence to specific formatting and other administrative requirements. Applications that are rejected—or screened out—at this stage receive no further review. Applications that move on are reviewed on the basis of their scientific and technical merit by an initial review group and then by one of SAMHSA’s national advisory councils. 
The councils, which ensure that the applications support the mission and priorities defined by SAMHSA or the specific center, must concur with the scores given to the applications by the initial review group. On the basis of the ranking of these scores given by the peer reviewers and on other criteria posted in the grant announcement, such as geographic location, SAMHSA program staff decide which grant applications receive funding. Center directors and grants management officers must approve award decisions that differ from the ranking of priority scores, and SAMHSA’s administrator approves all final award decisions. SAMHSA’s oversight of its block and discretionary grants consists primarily of reviews of independent audit reports, on-site reviews, and reviews of grant applications. SAMHSA’s Division of Grants Management provides grant oversight, which includes reviewing the results of grantees’ annual financial audits that are required by the Single Audit Act. In general, these audits are designed to determine whether a grantee’s financial statements are fairly presented and grant funds are managed in accordance with applicable laws and program requirements. Furthermore, SAMHSA is statutorily required to conduct on-site reviews to monitor block grant expenditures in at least 10 states each fiscal year. The reviews examine states’ fiscal monitoring of service providers and compliance with block grant requirements, such as requirements to maintain a certain level of state expenditures for drug abuse treatment and community mental health services—referred to as maintenance of effort. In addition, SAMHSA project officers—grantees’ main point of contact with SAMHSA—monitor states’ compliance with block grant requirements through their review of annual block grant applications. 
For example, in the substance abuse block grant application, states report how they spent funds made available during a previous fiscal year and how they intend to obligate funds being made available in the current fiscal year; project officers review this information to determine if states have complied with statutory requirements. For discretionary grants, project officers monitor grantees’ use of funds through several mechanisms, including quarterly reports, site visits, conference calls, and regular meetings. The purpose of monitoring both block and discretionary grants is to ensure that grantees achieve program goals and receive any technical assistance needed to improve their delivery of substance abuse and mental health services. SAMHSA has partnerships with every HHS agency and 12 federal departments and independent agencies that fund substance abuse and mental health programs and activities. For example, within HHS, the Centers for Disease Control and Prevention and the Health Resources and Services Administration have responsibility for improving the accessibility and delivery of mental health and substance abuse services, and the National Institutes of Health funds research on numerous topics related to substance abuse and mental health. The Departments of Education, Housing and Urban Development, Justice, and Veterans Affairs fund substance abuse and mental health initiatives to help specific populations, such as children and homeless people. In addition, the White House Office of National Drug Control Policy is responsible for overseeing and coordinating federal, state, and local drug control activities. Specifically, the office gives federal agencies guidance for preparing their annual budgets for activities related to reducing illicit drug use. 
It also develops substance abuse profiles of states and large cities, which contain statistics related to drug use and information on federal substance abuse prevention and treatment grants awarded to that state or city. SAMHSA has operated without a strategic plan since October 2002. Although agency officials are in the process of drafting a plan that covers fiscal years 2004 through 2009 and expect to have it ready for public comment in the fall of 2004, they do not know when they will issue a final strategic plan. As part of its strategic planning process, which began in fiscal year 2002, SAMHSA developed three long-term goals for the agency—promoting accountability, enhancing service capacity, and improving the effectiveness of substance abuse and mental health services. SAMHSA’s management has also identified 11 priority issues to guide the agency’s activities and resource allocation and 10 priority principles that agency officials are to consider when they develop policies and programs related to these issues. (See table 2 for a list of SAMHSA’s priority issues and priority principles.) For example, when SAMHSA develops grant programs to increase substance abuse treatment capacity—a priority issue—staff are to consider the priority principle of how the programs can be implemented in rural settings. To ensure that the priority issues play a central role in the work of its three centers, SAMHSA established work groups for all the priority issues that include representation from at least two centers. The work groups are to make recommendations to SAMHSA’s leadership about funding for specific programs and to develop cross- center initiatives. Although SAMHSA officials consider the agency’s set of priority issues and priority principles a valuable planning and management tool, it lacks important elements that a strategic plan would provide. 
For example, SAMHSA’s priorities do not identify the approaches and resources needed to achieve the long-term goals; the results expected from the agency’s grant programs and a timetable for achieving those results; and an assessment of key external factors, such as the actions of other federal agencies, that could affect SAMHSA’s ability to achieve its goals. Without a strategic plan that includes the expected results against which the agency’s efforts can be measured, it is unclear how the agency or the Congress will be able to assess the agency’s progress toward achieving its long-term goals or the adequacy and appropriateness of SAMHSA’s grant programs. Such assessments would help SAMHSA determine whether it needs to eliminate, create, or restructure any grant programs or activities. The priority issue work groups are developing multiyear action plans that could support SAMHSA’s strategic planning efforts, because the plans are expected to include measurable performance goals, action steps to meet those goals, and a description of external factors that could affect program results. SAMHSA officials expect to approve the action plans by June 30, 2004, and include them as a component of the draft strategic plan. SAMHSA’s strategic workforce planning efforts lack key strategies to ensure appropriate staff will be available to manage the agency’s programs. Specifically, SAMHSA has not developed a detailed succession strategy to prepare for the loss of essential expertise and to ensure that the agency can continue to fill key positions. In addition, the agency has not fully developed hiring and training strategies to ensure that its project officers can administer the proposed performance partnership grants. SAMHSA has, however, taken steps to improve project officers’ expertise for managing the current block grants and to increase staff effectiveness by improving the efficiency of its work processes. 
While SAMHSA recently implemented a performance management system that links staff expectations with the agency’s long-term goals, other aspects of the system do not reinforce individual accountability. SAMHSA’s strategic workforce planning lacks key elements to ensure that the agency has staff with the appropriate expertise to manage its programs. The goal of strategic workforce planning is to develop long-term strategies for acquiring, developing, and retaining staff needed to achieve an organization’s mission and programmatic goals. SAMHSA is implementing a strategic workforce plan—developed for fiscal years 2001 through 2005—that identifies the need to strategically and systematically recruit, hire, develop, and retain a workforce with the capacity and knowledge to achieve the agency’s mission. SAMHSA developed the plan to improve organizational effectiveness and make the agency an “employer of choice,” and the plan calls for development of an adequately skilled workforce and efficient work processes. (See app. II for additional information on SAMHSA’s strategic workforce plan.) The plan specifically outlines the need to engage in succession planning to prepare for the loss of essential expertise and to implement strategies to obtain and develop the competencies that the agency needs. SAMHSA did not include a succession strategy in its strategic workforce plan, and the agency has not yet developed such a strategy. As we have previously reported, succession planning is important for strengthening an agency’s workforce by ensuring an ongoing supply of successors for leadership and other key positions. SAMHSA officials told us the agency has begun to engage in succession planning. They also noted that recent retirement and attrition rates have been moderate—about 5 percent and 10 percent, respectively, in fiscal year 2003—and that the agency’s small size allows them to identify those likely to retire and to fill key vacancies as they occur. 
However, the proportion of SAMHSA’s workforce eligible to retire is expected to rise from 19 percent in fiscal year 2003 to 25 percent in fiscal year 2005, and careful planning could help SAMHSA prepare for the loss of essential expertise. Another shortcoming in SAMHSA’s strategic workforce planning is that the agency has not fully developed hiring and training strategies to ensure that its project officers will have the appropriate expertise to manage the proposed performance partnership grants. The changes in the block grant will alter the relationship between SAMHSA and the states, requiring project officers to negotiate specific performance goals and monitor states’ progress towards these goals. SAMHSA’s block grant reengineering team found that, to carry out these responsibilities, project officers will need training in performance management; elementary statistics; and negotiation, advocacy, and mediation. SAMHSA expected to have a training plan by late May 2004, but has not established a firm date by which the training will be provided. As SAMHSA develops the training plan, it will be important for the agency to consider how it will implement and evaluate the training, including how it will assess the effect of the training on staff’s development of needed skills and competencies. In addition, the reengineering team recommended that the agency use individualized staff development plans for project officers to ensure that they acquire necessary skills. SAMHSA expects to have the individual development plans in place by the end of fiscal year 2004. The team also recommended that the agency develop new job descriptions to recruit new staff. SAMHSA has developed job descriptions that identify the responsibilities all project officers will have to meet and is using those descriptions in its recruitment efforts. SAMHSA has initiated efforts to improve the ability of project officers to assist grantees with the current block grants. 
For example, SAMHSA officials told us that the agency has made an effort to hire more project officers with experience working in state mental health and substance abuse systems. The agency is also expanding project officers’ training on administrative policies and procedures and is planning to add a discussion of block grant procedures to its on-line policy manual. These efforts should help respond to the block grant reengineering team’s finding that project officers require additional training in substance abuse prevention and treatment and block grant program requirements. They should also help address the concerns of state officials who told us that project officers for the block grants have not always had sufficient background in mental health or substance abuse services or have provided confusing or incorrect information on grant requirements. For example, one state received conflicting information from its project officer about the percentage of its substance abuse block grant that it was required to spend for HIV/AIDS services. Similarly, according to another state official, a project officer provided unclear guidance on how to submit a request to waive the mental health block grant’s maintenance of effort requirement, which resulted in the state having to resubmit the request. To meet the goal in its workforce plan of increasing staff effectiveness, SAMHSA is taking steps to improve the agency’s work processes. For example, agency officials expect to reduce the amount of time and effort that staff devote to preparing grant announcements by issuing 4 standard grant announcements for its discretionary grant programs, instead of the 30 to 40 issued annually in previous years. SAMHSA officials estimate that the 4 standard announcements will encompass 75 to 80 percent of the agency’s discretionary grants and believe they will improve the efficiency of the grant award process. 
In addition, SAMHSA officials told us that while most new award decisions have been made at the end of the fiscal year, they expect that this consolidation will allow the agency to issue some awards earlier in the year. SAMHSA has adopted a new performance management system for its employees that is intended to hold staff accountable for results by aligning individual performance expectations with the agency’s goals—a practice that we have identified as key for effective performance management. SAMHSA is aligning the performance expectations of its administrator and senior executives with the agency’s long-term goals and priority issues and then linking those expectations with expectations for staff at lower levels. As a result, SAMHSA’s senior executives’ performance expectations are linked directly to the administrator’s objectives, and all other employees have at least one performance objective that can be linked to the administrator’s objectives. For example, objectives related to implementing the four new discretionary grant announcements are included in the 2003 performance plans of the appropriate center directors, branch chiefs, and project officers. In contrast, other aspects of SAMHSA’s performance management system do not reinforce individual accountability for results. SAMHSA’s performance management system does not make meaningful distinctions between acceptable and outstanding performance—an important practice in a results-oriented performance management system. Instead, staff ratings are limited to two categories, “meets or exceeds expectations” or “unacceptable.” SAMHSA managers told us that few staff receive an unacceptable rating and that using a pass/fail system can make it difficult to hold staff accountable for their performance. Moreover, this type of system may not give employees useful feedback to help them improve their performance, and it does not recognize employees who are performing at higher levels. 
In addition, SAMHSA’s performance management system does not assess staff performance in relation to specific competencies. Competencies define the skills and supporting behaviors that individuals are expected to exhibit in carrying out their work, and they can provide a fuller picture of an individual’s contributions to achieving the agency’s goals. SAMHSA’s strategic workforce plan includes a description of the competencies that staff need, including technical competencies related to data collection and analysis, co-occurring disorders, and service delivery. However, these competencies have not been incorporated into the agency’s performance management system to help reinforce behaviors and actions that support the agency’s goals. SAMHSA jointly funds grant programs with other federal agencies and departments, often through agreements that enable funds to be transferred between agencies. While these interagency agreements can streamline the grant-making process, SAMHSA’s lengthy procedures for approving them have delayed the awarding of grants. SAMHSA officials told us that they recently implemented policies to expedite the approval process. In addition to jointly funding programs, SAMHSA shares mental health and substance abuse expertise and information with other federal agencies and departments. Grantees with whom we spoke identified opportunities for SAMHSA to better coordinate with its federal partners to disseminate information about effective practices to states and community-based organizations. SAMHSA frequently collaborates with other federal agencies and departments to jointly fund grant programs that support a range of substance abuse and mental health services. (See table 3 for examples of jointly funded programs.) 
For example, for the $34.4 million Collaborative Initiative to Help End Chronic Homelessness, SAMHSA, the Health Resources and Services Administration, the Department of Housing and Urban Development, and the Department of Veterans Affairs provide funds or other resources related to their own programs and the populations they generally serve. SAMHSA’s funds are directed toward the provision of substance abuse and mental health services for homeless people. Many of SAMHSA’s joint funding arrangements use interagency agreements to transfer funds between agencies, which allow grantees to receive all of their grant funds from a single federal agency or department (see table 4). For example, Safe Schools, Healthy Students grantees receive all of their funds from the Department of Education, even though SAMHSA also supports this program. SAMHSA officials told us that interagency transfers create fewer funding streams and make the process less confusing to grantees. While transferring funds can streamline the grant process, SAMHSA’s system for approving interagency agreements has been inefficient. Before the funds are transferred, the agencies involved must approve an interagency agreement describing the amount of money being transferred and how it will be used. Officials from the Departments of Justice and Education told us that SAMHSA’s approval process was lengthy and resulted in agreements being completed at the last minute. The Department of Education found that it took SAMHSA more than 70 days to approve the 2003 Safe Schools, Healthy Students interagency agreement—a period that SAMHSA estimated was about 40 days longer than in previous years. SAMHSA officials told us that the approval process was complicated by the lack of a clear policy identifying the SAMHSA management officials who needed to review and approve the agreements.
In March 2004, SAMHSA implemented new policies that clarify the process for reviewing and approving agreements and the responsibilities of specific SAMHSA officials. At that time, SAMHSA also began to track the time it takes for the agency to review and approve interagency agreements. It is too early to know how SAMHSA’s new policies will affect the efficiency of the approval process. SAMHSA provides its expertise and information on substance abuse and mental health to other federal agencies and departments and collaborates with them to share information with states and community-based organizations. For example, officials from the Health Resources and Services Administration told us that in coordinating health care and mental health services for people who are homeless, they use SAMHSA’s knowledge of community-based substance abuse and mental health providers who can work with primary care providers. Also, the Office of National Drug Control Policy uses data from SAMHSA’s National Survey on Drug Use and Health to determine the extent to which it has achieved its goals and objectives. This survey also provides data to support HHS’s Healthy People 2010’s substance abuse focus area. Several grantees told us that SAMHSA and the National Institutes of Health could better collaborate to ensure that providers have information about the most effective ways to deliver substance abuse and mental health services. Recognizing the importance of such a partnership, the two agencies recently initiated the Science to Service initiative, which is designed to better integrate the National Institutes of Health’s research on effective practices with the services funded by SAMHSA. For example, in fiscal year 2003, SAMHSA and the National Institutes of Health funded a grant to help states more readily integrate effective mental health practices into service delivery in their states. 
In addition, grantees recommended that SAMHSA better coordinate with the Departments of Education and Justice to disseminate information about effective practices to states and community-based organizations. For example, a state official told us that SAMHSA and the Department of Education do not ensure that their processes for evaluating substance abuse prevention programs result in comparable sets of model programs. The two agencies evaluate programs using different criteria and rate some prevention programs differently. SAMHSA reported that it may be appropriate for agencies to have different criteria because each agency must have the ability to tailor its criteria to meet the specific goals of its grant programs. A SAMHSA official acknowledged, however, that SAMHSA and the Departments of Education and Justice are discussing how they can refine their criteria for evaluating prevention programs and better communicate the results to grantees. Officials from state mental health and substance abuse agencies and community-based organizations identified opportunities for SAMHSA to better manage its block and discretionary grant programs. They cited concerns with SAMHSA’s grant application processes, site visits, and the availability of information on technical assistance. SAMHSA plans to transform its block grants into performance partnership grants in fiscal years 2005 and 2006, and the agency, along with the states, is preparing for the change. However, state officials are concerned that SAMHSA has not finalized the performance data that states would report under the proposed performance partnership grants. In addition, SAMHSA has not completed the plan it must send to the Congress identifying the data reporting requirements for the states and any legislative changes needed to implement the performance partnership grants. 
Officials from states and community-based organizations told us that SAMHSA could improve administration of its grant programs, citing concerns related to the agency’s grant application review processes, site visits to review states’ compliance with block grant requirements, and the availability of information on technical assistance opportunities. In some instances, SAMHSA has begun to respond to these issues. Grantees we talked to expressed concern that SAMHSA rejects discretionary grant applications without reviewing them for merit if they do not comply with administrative requirements. SAMHSA told us that of the 2,054 fiscal year 2003 applications it received after January 3, 2003, 393—19 percent—were rejected in this initial screening process. Of the 14 grantees we interviewed, 4 told us that SAMHSA rejected 1 of their 2003 grant applications without review and a fifth had 5 applications rejected. Grantees told us that this practice does not enable applicants to obtain substantive feedback on the content of their applications. They also said that SAMHSA’s practice of waiting to notify applicants of the rejection until it notifies all applicants of funding decisions—near the start of the next fiscal year—impedes their fiscal planning. In response to concerns over the number of grant applications it rejected on administrative grounds in fiscal year 2003, SAMHSA has changed the way it will screen fiscal year 2004 applications. On March 4, 2004, SAMHSA announced revised requirements that are intended to simplify and expedite the initial screening process for discretionary grants. For example, SAMHSA will no longer automatically screen out applicants because their application is missing a section, such as the table of contents. Instead, the agency will consider whether the application contains sufficient information for reviewers to consider the application’s merit. In addition, SAMHSA will allow applicants more flexibility in the format of their application. 
Instead of focusing exclusively on specific margin sizes or page limits, SAMHSA will consider the total amount of space used by the applicant to complete the narrative portion of the application. SAMHSA expects that under the new procedures it will screen out significantly fewer applications. However, some applications continue to be rejected for administrative reasons and will not receive a merit review. In another change, a SAMHSA official told us that it would begin to notify applicants within 30 days of the decision if their application is rejected. State officials told us that the length and complexity of the mental health and substance abuse block grant applications create difficulties for both states and project officers. They described the block grant applications as confusing, repetitive, and difficult to complete. Furthermore, officials in five states told us that SAMHSA project officers may not be using the information states provide in the block grant application as well as they could, especially the narrative portion. For example, one state official received questions from the project officer about the state’s substance abuse activities for women and children that could have been answered by reading the narrative section of the application. State officials suggested that project officers could more easily use the information states provided if the application were streamlined and included only the information most important to SAMHSA. They suggested that SAMHSA make these changes when it converts the block grants to performance partnership grants. SAMHSA officials told us they will not know whether the applications can be streamlined until they finalize the format of the performance partnership grants. 
To allow center staff to retrieve information more quickly from the current substance abuse block grant application, the Center for Substance Abuse Prevention and the Center for Substance Abuse Treatment began to use a Web-based application in spring 2003. The Web-based application allows the centers to retrieve information collected from the substance abuse block grant applications and more quickly develop reports analyzing data across states, such as the number of states in compliance with specific block grant requirements. State officials told us that SAMHSA’s site visits to review states’ compliance with block grant requirements do not always allow the agency to adequately review their programs. For example, officials in three states told us that the length of these visits—often 3 to 5 days—is too short for SAMHSA to fully understand conditions in the state that affect the provision of services. Officials in two of these states said 3-day site visits did not provide reviewers with enough time to visit mental health care providers in the more remote parts of the state and observe how they respond to local service delivery challenges. A SAMHSA official told us that 3-day site visits are generally adequate for most states, but states are able to request a longer visit. The official acknowledged that SAMHSA could better communicate this flexibility to states. Officials from eight states said the technical assistance they received from SAMHSA and its contractors was helpful; officials from five states told us that the agency could improve its dissemination of information about what assistance is available to grantees. For example, one state official suggested that SAMHSA provide more information on its Web site about what assistance is available or has been requested by other states. 
He said that making this information available is especially important because there is high staff turnover at the state level, and relatively new staff may have little knowledge about what SAMHSA offers. Several state mental health officials commented that SAMHSA’s substance abuse block grant has a more structured technical assistance program than the mental health block grant and is able to offer more assistance opportunities. SAMHSA officials noted that the substance abuse block grant program has more funds and staff to devote to the provision of technical assistance. SAMHSA’s Center for Substance Abuse Treatment, for example, has a separate program branch to manage technical assistance contracts. This center is in the process of creating a list of documents that grantees developed with the help of technical assistance contractors—such as a state strategic plan for providing substance abuse services—so that other states can use them as models. To prepare for the mental health and substance abuse performance partnership grants—which SAMHSA plans to implement in fiscal years 2005 and 2006, respectively—SAMHSA has worked with states to develop performance measures and improve states’ ability to report performance data. Specifically, SAMHSA identified outcomes for which states would be required to report performance data. SAMHSA asked states to voluntarily report on performance measures related to these outcomes in their fiscal year 2004 block grant applications and the agency provided states with funding to help them make needed changes to their data collection and reporting systems. Over fiscal years 2001 and 2002, SAMHSA awarded 3-year discretionary grants of about $100,000 per year to state mental health and substance abuse agencies to develop systems for collecting and reporting performance data. State officials told us they used the grants in a variety of ways, such as to train service providers to report performance data.
Substance abuse and mental health agency officials we talked to told us that their states have made progress in preparing to report on performance measures, but that their states would need to make additional data system changes before they could report all of the data that SAMHSA has proposed for the performance partnership grants. For example, officials from three states told us that they were still unprepared to report data that would come from other state agencies—such as information on school attendance obtained from the state’s education system. In addition, several state officials told us they have been unable to complete their preparations because they are waiting for SAMHSA to finalize the data it will require states to report. For example, a state mental health director told us that the lack of final reporting requirements has contributed to a delay in the implementation of the state’s new information management system. Similarly, officials from a state substance abuse agency told us that without SAMHSA’s final requirements, the state agency is limited in its ability to require substance abuse treatment providers to change the way they report performance data. In addition, the Congress may need to make statutory changes before SAMHSA can implement the performance partnership grants, but SAMHSA has not given the Congress the information it sought on what changes are needed or on how the agency proposes to implement the grants—including the final data reporting requirements for the states. In 2000, the Congress directed SAMHSA to submit a plan containing this information by October 2002. SAMHSA submitted this plan to HHS for internal review on April 12, 2004, after which the plan must receive clearance from the Office of Management and Budget. SAMHSA could not tell us when it expects to submit the plan to the Congress. SAMHSA’s leaders are taking steps to improve the management of the agency, but key planning tools are not fully in place.
SAMHSA has been slow to issue a strategic plan, which is essential to guide the agency’s efforts to increase program accountability and direct resources toward accomplishing its goals. Furthermore, while SAMHSA is in the process of implementing its strategic workforce plan, the agency’s workforce planning efforts lack important elements—such as a detailed succession strategy—to help SAMHSA prepare for future workforce needs. Because future retirements and attrition could leave the agency without the appropriate workforce to effectively carry out its programs, it would be prudent for SAMHSA to have a succession strategy to help it retain institutional knowledge, expertise, and leadership continuity. In addition, SAMHSA has not completed plans to ensure that its workforce has the appropriate expertise to manage the proposed performance partnership grants, which would represent a significant change in the way SAMHSA holds states accountable for achieving results. These grants would require new skills from SAMHSA’s workforce. Therefore, it is important for SAMHSA to complete hiring and training strategies to ensure that its workforce can effectively implement the grants. SAMHSA cannot convert the block grants to performance partnership grants until it gives the Congress its implementation plan, which was due in October 2002. The Congress needs the information in SAMHSA’s plan for its deliberations about legislative changes that may be needed to allow SAMHSA to implement the performance partnership grants. In addition, the plan’s information on the performance measures SAMHSA will use to hold states accountable is needed by the states as they prepare to report required performance data. 
If SAMHSA does not promptly submit this plan, states may not be ready to submit all needed data by the time SAMHSA has planned to implement the grants—in fiscal years 2005 and 2006—and SAMHSA may not have the legislative authority needed to make the mental health and substance abuse prevention and treatment block grant programs more accountable and flexible. Finally, as SAMHSA makes efforts to increase program accountability, it is in the agency’s interest to fund state and local programs that show the most promise for improving the quality and availability of prevention and treatment services. Although SAMHSA has made changes that should reduce the number of discretionary grant applications rejected solely for administrative reasons—such as exceeding the specified page limitation—some applications are still not reviewed for merit because of administrative errors. Allowing applicants to correct such errors and resubmit their application within an established time frame could help ensure that reviewers are able to assess the merits of the widest possible pool of applications and could increase the likelihood of SAMHSA’s funding the most effective mental health and substance abuse programs. We recommend that, to improve SAMHSA’s management of its programs, promote the effective use of its resources, and increase program accountability, the Administrator of SAMHSA take the following four actions:
Develop a detailed succession strategy to ensure SAMHSA has the appropriate workforce to carry out the agency’s mission.
Complete hiring and training strategies, and assess the results, to ensure that the agency’s workforce has the appropriate expertise to implement performance partnership grants.
Expedite completion of its plan for the Congress providing information on the agency’s proposal for implementing the performance partnership grants and any legislative changes that must precede their implementation.
Develop a procedure that gives applicants whose discretionary grant application contains administrative errors an opportunity to revise and resubmit their application within an established time frame. We provided a draft of this report to SAMHSA for comment. Overall, SAMHSA generally agreed with the findings of the report. (SAMHSA’s comments are reprinted in app. III.) SAMHSA said that it already has efforts under way to address each of the report’s key findings and recommendations, and that it endorses the value the report places on strategic planning, workforce planning, and collaboration with federal, state, and community partners. SAMHSA indicated that it will continue to engage in a strategic planning process and said that its priority issues and principles are central to this process. As we had noted in the draft report, SAMHSA commented that it expects to complete and approve the action plans developed by each of its priority issue work groups by June 30, 2004. SAMHSA also said that it would update its draft strategic plan to include summaries of the action plans, and then disseminate the draft for public comment, submit it to HHS for clearance, and publish the final plan. Our draft report stated that SAMHSA did not want to issue its strategic plan before HHS issued the new departmental strategic plan. In its comments, SAMHSA noted that HHS published its strategic plan in April 2004 and that this was no longer an issue affecting SAMHSA’s schedule for publishing its plan. In its comments, SAMHSA also stated that it places a high priority on the development of a succession plan. SAMHSA said that it is preparing for an anticipated increase in the agency’s attrition rate over the next several years and is reviewing the pool of staff eligible to retire to identify the skills and expertise that could be lost to the organization. While SAMHSA is beginning to engage in succession planning, it has not developed a detailed succession strategy. 
We have made our recommendation more specific to communicate the need for SAMHSA to develop such a strategy. In response to our recommendation that SAMHSA complete hiring and training strategies to ensure that the agency’s workforce has the appropriate expertise to implement performance partnership grants, SAMHSA said that it is addressing the need for its workforce to have the appropriate expertise. For example, SAMHSA indicated that it has initiated efforts to identify training needed by current staff and to ensure that new staff have needed skills. However, we believe it is important for SAMHSA to fully develop both hiring and training strategies to ensure that it has the appropriate workforce in place when it implements performance partnership grants. In response to our recommendation to develop a procedure to allow applicants to correct administrative errors in discretionary grant applications, SAMHSA commented that its new screening procedures have yielded a substantial increase in the percentage of applications that will be reviewed for merit. As a result, SAMHSA believes our recommendation is premature and said that it plans to evaluate the results of the revised procedures before making any additional changes. While early evidence indicates that the new procedures are reducing the proportion of applications rejected for administrative reasons, these procedures have not eliminated such rejections. Because it is important for reviewers to be able to assess the merits of the widest possible pool of applications, we believe it would be beneficial for SAMHSA to develop the procedure we are recommending without delay. Finally, in response to the report’s discussion of the performance partnership grants, SAMHSA commented that it will continue its efforts to increase accountability in its block grant and discretionary grant programs. 
SAMHSA said that the proposed fiscal year 2005 mental health and substance abuse block grant applications contain outcome measures that the agency expects to use to monitor grant performance. However, these applications have not been finalized, and the draft applications indicate that several of the performance measures are still being developed. It is important for SAMHSA to give the Congress its plan for implementing the performance partnership grants so that the Congress can consider any legislative changes that might be necessary to implement the grants and SAMHSA can more fully hold states accountable for achieving specific results. SAMHSA also provided technical comments. We revised our report to reflect SAMHSA’s comments where appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. We are sending copies of this report to the Secretary of Health and Human Services, the Administrator of SAMHSA, appropriate congressional committees, and other interested parties. We will also make copies available to others who are interested upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (312) 220-7600 or Helene Toiv, Assistant Director, at (202) 512-7162. Janina Austin, William Hadley, and Krister Friday also made major contributions to this report.

In performing our work, we obtained documents and interviewed officials from the Substance Abuse and Mental Health Services Administration (SAMHSA). While we reviewed documents related to SAMHSA’s strategic planning and to its performance management system, we did not perform a comprehensive evaluation of SAMHSA’s management practices. We also reviewed the policies and procedures the agency uses to oversee states’ and other grantees’ use of block and discretionary grant funds.
We interviewed officials from SAMHSA’s Office of the Administrator; Office of Policy, Planning, and Budget; Office of Program Services; Office of Applied Studies; Center for Mental Health Services; Center for Substance Abuse Prevention; and Center for Substance Abuse Treatment. To determine how SAMHSA collaborates with other federal agencies and departments, we interviewed officials from the Department of Education, the Department of Justice, and the Department of Health and Human Services’ Centers for Disease Control and Prevention, Health Resources and Services Administration, and National Institutes of Health. After reviewing lists of collaborative efforts provided by SAMHSA’s centers, we selected these agencies because each one is involved in a collaborative effort with each of SAMHSA’s three centers. Within these agencies, we identified collaborative initiatives that involve interagency committees, data sharing, interagency agreements, and other joint funding arrangements. We interviewed and obtained documentation related to these initiatives from federal agency officials who were directly involved in them. We also interviewed officials from the Centers for Medicare & Medicaid Services, because Medicaid is the largest public payer of mental health services, and officials from the Indian Health Service, which provides substance abuse and mental health services to tribal communities. In addition, we interviewed officials from the White House Office of National Drug Control Policy, which coordinates federal antidrug efforts. To determine how SAMHSA collaborates with state grantees, we interviewed officials from state mental health and substance abuse agencies. We interviewed mental health agency officials in California, Colorado, Connecticut, Mississippi, and South Dakota, and substance abuse agency officials in Iowa, Massachusetts, Montana, Texas, and Virginia.
We selected these states on the basis of variation in their geographic location, the size of their fiscal year 2003 mental health or substance abuse block grant award, the number of discretionary grant awards they received in fiscal year 2002, and their involvement in SAMHSA initiatives to improve states’ ability to report mental health and substance abuse data. To gain a better understanding of SAMHSA’s collaborative efforts, we interviewed officials from community-based organizations that received discretionary grants from each of SAMHSA’s centers. We selected the largest discretionary grant programs available to community-based organizations from the Center for Substance Abuse Treatment (the Targeted Capacity Expansion: HIV Program) and the Center for Mental Health Services (the Child Traumatic Stress Initiative). We selected the Center for Substance Abuse Prevention’s Best Practices: Community-Initiated Prevention Intervention Studies—the center’s second largest discretionary grant program available to community-based organizations—so that the selected programs would cover a variety of SAMHSA’s priority issues. We also selected one grant that was jointly funded by SAMHSA and the Health Resources and Services Administration (the Collaboration to Link Health Care for the Homeless Programs and Community Mental Health Agencies). (See table 5.) For each of the four grant programs, we selected one community-based organization that received grant funds in fiscal year 2001 or 2002 and that was located in 1 of the 10 states we selected. To obtain additional information about SAMHSA’s collaboration with state agencies and other grantees, we interviewed representatives of the National Association of State Alcohol and Drug Abuse Directors, the National Association of State Mental Health Program Directors, and the Community Anti-Drug Coalitions of America.
These organizations represent, respectively, state substance abuse agencies, state mental health agencies, and community-based substance abuse prevention organizations. We also interviewed representatives of the National Alliance for the Mentally Ill and the National Council on Alcoholism and Drug Dependence, because those organizations represent consumers of mental health services and substance abuse services, respectively. We conducted our work from July 2003 through May 2004 in accordance with generally accepted government auditing standards.

SAMHSA has a strong leadership and management capacity, a clearly defined role as a national leader in substance abuse and mental health services, and a well-structured organization to support its mission. SAMHSA has effective and efficient processes and methods for accomplishing its mission and optimizing its workforce. SAMHSA strategically invests in its workforce by putting the right people in the right place at the right time. SAMHSA systematically recruits, selects, and hires talented employees and continuously re-recruits them by creating a great place to work and by developing the competencies needed to achieve its mission.

Strategies:
- Ensure that SAMHSA has a cross-functional executive leadership team that works together to guide the organization toward achieving its mission.
- Improve the development, review, and management of discretionary grants.
- Change the size, scope, and distribution of the workforce of SAMHSA.
- Improve the publication clearance process.
- Anticipate competency needs and strategically close competency gaps where needed.
- Develop a clear and compelling multiyear strategy that is dynamic, aligned with the organizational mission, and linked to the performance of each organizational component and employee.
- Examine the block and formula grants process to create a more efficient and streamlined process.
- Continue to enhance a systematic approach to recruiting skilled talent in a tight labor market.
- Establish a new system for responding to external requests.
- Continue to enhance a systematic approach to retaining existing expertise.
- Create an organizational structure that maintains the strengths of the current system, focuses on quality, and increases flexibility and capacity.
- Continue to enhance customer-focused and effective infrastructure at SAMHSA.
- Enhance the design and implementation of a systematic approach to developing the workforce.
- Develop a systematic performance management system to align individual effort with strategic imperatives.
- Implement a technology tool to provide SAMHSA with workforce profile data for managing its workforce.

The Substance Abuse and Mental Health Services Administration (SAMHSA) is the lead federal agency responsible for improving the quality and availability of prevention and treatment services for substance abuse and mental illness. The upcoming reauthorization review of SAMHSA will enable the Congress to examine the agency’s management of its grant programs and plans for converting its block grants to performance partnership grants, which will hold states more accountable for results. GAO was asked to provide the Congress with information about SAMHSA’s (1) strategic planning efforts, (2) efforts to manage its workforce, and (3) partnerships with state and community-based grantees.

SAMHSA has not completed key planning efforts to ensure that it can effectively manage its programs. The agency has operated without a strategic plan since October 2002, and although SAMHSA officials are drafting a plan, they do not know when it will be completed. SAMHSA developed long-term goals and a set of priority issues that provide some guidance for the agency’s activities, but they are not a substitute for a strategic plan. In particular, they do not identify the approaches and resources needed to achieve the agency’s long-term goals and the desired results against which the agency’s programs can be measured.
SAMHSA also has not fully developed strategies to ensure it has the appropriate staff to manage the agency’s programs. Although the proportion of SAMHSA’s staff eligible to retire is increasing, the agency has not developed a detailed succession strategy to prepare for the loss of essential expertise and to ensure that the agency continues to have the ability to fill key positions. In addition, the proposed performance partnership grants will change the way SAMHSA administers its largest grant programs, but the agency has not completed hiring and training strategies to ensure that its workforce will have the skills needed to administer the grants. Finally, SAMHSA’s system for evaluating staff performance does not distinguish between acceptable and outstanding performance, and the agency does not assess staff performance in relation to specific competencies—practices that would help reinforce individual accountability for results.

SAMHSA has opportunities to improve its partnerships with state and community-based grantees. For example, grantees objected to SAMHSA’s practice of rejecting discretionary grant applications that do not comply with administrative requirements—such as those that exceed page limitations—without reviewing them for merit. Rejecting applications solely on administrative grounds potentially prevents SAMHSA from supporting the most effective programs. SAMHSA’s recent changes to the review process should reduce such rejections, but have not eliminated them. State officials are also concerned that SAMHSA has not finalized the performance data that states would be required to report under the proposed performance partnership grants. To comply, states will need to change their data systems, but they cannot complete these changes until SAMHSA finalizes the requirements.
The Congress directed SAMHSA to submit a plan by October 2002 describing the final data reporting requirements and any legislative changes needed to implement the grants, but SAMHSA has not yet completed the plan. This delay could prevent the agency from meeting its current timetable for implementing the mental health and substance abuse performance partnership grants in fiscal years 2005 and 2006, respectively.
To assess IRS’s 2008 filing season performance in the key filing season activities compared to goals and past performance, report on the effect of ESP, and highlight areas where IRS could expand use of MEA, we

- reviewed and analyzed IRS reports, testimonies, budget submissions, and other documents and data, including workload data, data related to IRS’s performance measures and goals, and data on taxpayer usage of services and other statistics, such as the number of paid preparers;
- observed operations at IRS’s Atlanta, Georgia, paper submission processing center; the Atlanta call site; the Joint Operations Center (which manages IRS’s telephone services) in Atlanta; and IRS’s walk-in locations in Atlanta and Baltimore, Maryland (we selected these offices for a variety of reasons, including the location of key IRS managers, such as those responsible for telephone and walk-in site services);
- tested for statistically significant differences between annual performance measures based on IRS sample data;
- analyzed staffing data for paper and electronic filing and telephone assistance;
- reviewed information from organizations, such as Keynote Systems, that evaluate Internet performance;
- interviewed representatives of some of the larger private and nonprofit organizations that prepare tax returns, such as H&R Block, and trade organizations that represent individual paid preparers, tax preparation companies, and professional associations, including the American Institute of Certified Public Accountants;
- reviewed TIGTA reports and interviewed TIGTA officials about IRS’s performance and initiatives;
- interviewed IRS officials about current operations; performance relative to 2008 performance goals and prior filing season performance; trends; significant factors and initiatives that affected or were intended to improve performance; and monitoring and oversight of paid tax preparers;
- reviewed IRS data on math errors, information on IRS’s process for identifying and correcting math errors, and legislation providing IRS with math error authority;
- interviewed IRS officials, including those from the EITC office, about MEA;
- reviewed reports on MEA from the National Taxpayer Advocate (NTA) and TIGTA; and
- reviewed our prior reports and followed up on the recommendations made in those reports.

This report discusses numerous filing season performance measures and data that cover the quality, accessibility, and timeliness of IRS’s services. We assessed the reliability of data used in this report by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purpose of this report. To the extent possible, we corroborated information from interviews with documentation and data; where this was not possible, we attribute the information to IRS officials. We reviewed IRS documentation, interviewed IRS officials about computer systems and data limitations, and compared those results to our standards of data reliability. Data limitations are discussed where appropriate. We conducted our work primarily at IRS headquarters in Washington, D.C.; walk-in site offices in Baltimore, Maryland; and the Wage and Investment Division headquarters and walk-in sites in Atlanta, Georgia.

IRS’s filing season is an enormous and critical undertaking that consists of two primary activities: processing well over 100 million individual income tax returns and refunds, and providing millions of taxpayers with telephone, Web site, and face-to-face assistance. The following describes the processing of tax returns. Most taxpayers file their individual income tax returns electronically, although millions still mail paper returns.
Electronic filing is faster, allowing taxpayers to receive refunds sooner; it is also less prone to transcription and other errors and provides IRS with significant cost savings. Taxpayers below an income ceiling can access the Free File program, offered through IRS’s Web site by a consortium of 19 tax preparation companies that provide free online tax preparation and filing services for qualifying taxpayers. IRS continues to develop and deliver a number of Business Systems Modernization programs, including releases of the Customer Account Data Engine (CADE). CADE is intended eventually to replace IRS’s antiquated Master File legacy processing system; it facilitates faster refund processing and provides IRS with more up-to-date account information.

IRS provides a variety of taxpayer services, including the following. IRS has toll-free assistance telephone lines that taxpayers can call with questions about tax law or their refunds. Depending on how taxpayers respond to menu choices, questions are answered by automated messages or calls are routed to telephone assistors located at 24 call sites around the country. IRS also offers assistance through its Web site, including “Where’s My Refund?”, which enables taxpayers to determine whether the agency received their tax returns and processed their refunds. Taxpayers can also download forms, instructions, and publications; research their own tax law issues through Frequently Asked Questions or Tax Topics; and receive help with specific tax law and procedural questions via e-mail. IRS offers assistance at nearly 12,000 volunteer sites, which help IRS serve traditionally underserved taxpayer segments, including elderly, low-income, and disabled taxpayers and taxpayers with limited-English proficiency. Additionally, IRS has 401 walk-in sites where taxpayers can receive basic tax law and account assistance from IRS staff and have returns prepared if their annual income is $40,000 or less.
In addition to depending on IRS, taxpayers depend heavily on paid tax return preparers or commercial tax preparation software during the filing season. Paid preparers are a critical part of the nation’s tax administration system because of the wide variety of assistance they offer taxpayers, including help understanding tax obligations, return preparation and electronic filing, and providing forms and publications. In 2007, 59 percent of individual taxpayers had their return prepared and filed by paid preparers, and about 16 percent prepared their return using commercial tax preparation software. Late tax law changes create filing season challenges for IRS. This can be particularly true if, like ESP, the changes are complex and significantly increase filing season workload (for a detailed description of IRS’s implementation of ESP, see app. II). ESP contained complex rules about eligibility, amount of the rebate, and filing requirements. Over the years, the Congress granted IRS legal authority covering 11 areas so that the agency could correct return errors during processing, including calculation errors and entries that are inconsistent or exceed statutory limits, without having to issue the taxpayer a statutory notice of deficiency (see app. III for further details on the MEA process and authorities). We have reported that prompt compliance checks such as these are important because as unpaid taxes age, the likelihood of collecting all or part of the amount owed decreases, due in part to continued accrual of interest and penalties on the outstanding federal taxes. MEA also provides a taxpayer service because it allows taxpayers to receive their refunds faster, corrects return errors before interest is accrued, and can spare taxpayers time-consuming interaction with IRS while the error is being resolved. MEA can also help ensure taxpayers receive the tax benefits for which they are eligible.
For example, between tax years 2003 and 2006, using its MEA, IRS identified over 500,000 occurrences of taxpayers under-claiming the EITC, with the result that an additional $130 million was provided to those claimants. Without MEA, most of these errors would not likely have been identified, and thousands of low-income taxpayers would likely have received less than the amounts to which they were entitled. Because math error checks are automated and require little contact with taxpayers, they provide IRS with an enforcement tool that is low cost and less intrusive and burdensome to taxpayers than audits. By using its MEA, IRS reported preventing $460 million from reaching ineligible taxpayers in fiscal year 2006. The NTA—who heads the program that helps resolve taxpayers’ tax problems with IRS and recommends changes to mitigate taxpayer problems—noted some concerns about IRS’s use of MEA in its 2006 report to Congress. NTA reported that, because MEA allows IRS to summarily assess the tax before a taxpayer has the opportunity to challenge the assessment, it should be used in specific and narrow circumstances. Because taxpayer rights are different under MEA, NTA expressed concern that math error notices did not provide enough detail for the recipient to understand what had occurred, their rights, and how they could request abatement. NTA also reported that IRS was using math error checks in some limited areas where it was not clear that IRS had the authority. In its response to the NTA’s 2006 report, IRS reported that it resolved the issues raised by NTA. While the NTA acknowledges that IRS has made significant improvements to the math error notice process, it still notes that IRS could improve how it administers MEA.

As of September 12, 2008, IRS processed 150 million individual income tax returns, including almost 9 million ESP-only returns, and processed 105 million tax refunds totaling $246 billion.
IRS processed 16 million Form 1040As on paper, a 92 percent increase from last filing season, largely due to ESP. According to IRS officials, this increase was especially burdensome not only because paper processing is more labor-intensive and expensive than electronic processing, but also because claimants made many errors on ESP-only returns, which IRS had to manually correct. As of September 12, IRS had processed 116 million stimulus payments totaling almost $94 billion. As shown in table 1, the percentage of electronically filed returns is similar to last year’s when ESP-only returns are included. Excluding the almost 9 million ESP-only returns, most of which were filed on paper, the proportion of taxpayers who filed electronically increased from 59 percent to 62 percent of total returns. The increase in electronic filing is due, in part, to the Free File program. The number of taxpayers who filed electronically through Free File increased 24 percent, after a 2 percent decline in participation last year, to 3.8 million. According to IRS officials, this increase is due to greater taxpayer awareness and enhanced marketing. As of August 8, CADE posted 30.5 million returns, slightly exceeding IRS’s goal of 30 million and significantly up from the 11 million returns CADE posted as of last year. As we have previously reported, a major benefit of CADE is that direct deposit refunds are issued by CADE 1 to 5 business days faster than the current legacy system, and paper check refunds are issued 4 to 8 business days faster because CADE posts information daily. CADE also was successful in processing 24 million stimulus payments and contributed to IRS’s early distribution of some stimulus payments. ESP-only payments processed through CADE, rather than the legacy Master File, were issued up to 4 days earlier than the first scheduled disbursement date. Despite the additional volume, IRS met or exceeded goals for seven out of eight of the processing measures (see app.
IV for details). The one measure where performance was significantly below IRS’s goal and last year’s level was the refund error rate, i.e., the percentage of refunds with IRS-caused errors issued to taxpayers by IRS. According to IRS officials, this performance was due to a programming change and did not delay the processing of returns or refunds. We have previously reported that IRS’s ability to maintain or improve taxpayer service will likely depend on its continued ability to find efficiencies, particularly through increased electronic filing. Consistent with this approach, IRS released the “Advancing E-File Study” in November 2008, which is the foundation for the agency’s strategy and planned actions to reach the congressionally set goal of 80 percent of individual returns filed electronically. In recent reports, we suggested two options for increasing electronic filing or reducing paper processing. One would be for Congress to mandate electronic filing by large paid tax return preparers. Currently, many returns prepared by preparers on computers are printed and mailed to IRS and must then be transcribed. Our other option is to require bar coding on paper returns that were prepared by taxpayers using commercial tax preparation software. Bar coding would eliminate IRS’s transcription costs and errors. IRS agreed to study this recommendation. Paid preparers prepared over 81 million returns, which was 59 percent of all individual tax returns in the 2007 filing season. Because IRS has limited ability to identify paid preparers and match them with the returns they prepared, we recently reported that IRS has limited information about the accuracy of these returns and recommended that IRS develop a plan to require a single identification number for paid preparers, which could facilitate research on paid preparers’ influence on taxpayer compliance. IRS agreed to explore the use of a single identification number to enhance IRS’s ability to identify paid preparers.
Taxpayers’ access to IRS’s telephone assistors was substantially lower than last year’s because, according to IRS, it received an unprecedented number of calls, primarily due to ESP-related questions. As table 2 shows, through June 30 of this year, IRS received over 118 million calls, more than double the total number of calls from last year. However, compared to last year, more than twice as many callers abandoned their calls and nearly 10 times as many were disconnected by IRS. On the basis of its experience with the stimulus rebates in 2001 and 2003, IRS took steps to facilitate telephone service, including establishing a dedicated toll-free Rebate Hotline in February for callers with ESP-related questions. As of June 30, IRS answered 37 million calls made to its Rebate Hotline. However, the unprecedented call volume strained IRS telephone resources and contributed to a significant decline in the telephone level of service. Calls to IRS spiked in the first three weeks of May, around the same time that the first rounds of stimulus checks were issued (see fig. 1). According to IRS officials, taxpayers’ confusion over when they would receive their payments or how much they would receive contributed to the high call volume. During these weeks, the percentage of callers waiting for an assistor and getting through fell below 50 percent and subsequently did not surpass that mark. The increased call volume also contributed to a decline in related performance indicators and measures. As previously noted, the number of callers who received busy signals or were disconnected from IRS increased, as did the caller abandon rate. Further, as shown in table 3, the average speed of answer—the length of time taxpayers wait to get their calls answered—nearly doubled from last year. In addition to the Rebate Hotline, service on other toll-free lines, such as tax law and account assistance, was affected as well.
For example, IRS estimated that from early May, when IRS began issuing ESP payments, through the end of June, about 30 percent of callers to other toll-free lines asked ESP-related questions in addition to their primary question. When the caller asked an ESP-related question, the assistor was instructed not to transfer the caller to the Rebate Hotline, but instead to answer the question. As a result, the level of service on some other telephone applications not related to the Rebate Hotline also declined this year, according to IRS officials. Even with the increased volume of calls, the accuracy of the telephone assistors’ responses to tax law and account questions was comparable to the same time period last year and met IRS’s goals (see table 4). Since 2005, IRS has maintained a level of accuracy of about 90 percent. During the course of our filing season work, we identified some opportunities to reduce ESP-related calls. On May 21, 2008, we sent a letter to the IRS Commissioner that suggested four options for his consideration. IRS implemented three of the four options:

- IRS re-ordered the Frequently Asked Questions page on IRS’s Web site to put the most common questions addressed by assistors first.
- IRS added text to the payment schedule page on its Web site and a message for telephone assistors that provided more detail as to when various types of taxpayers should expect to receive their refunds.
- IRS modified the Rebate Hotline script to include an automated “Most Frequently Asked Questions” (FAQs) option by adding the FAQs to the automated messaging in the call waiting queue.

The option IRS did not implement was to expand outreach efforts in the press about the timing and calculation of stimulus payments. IRS officials concluded that this information might unintentionally create more calls.
Although we believe that additional and more accurate outreach would have been useful for reducing public confusion, we acknowledge IRS’s position that many of the calls required specific explanations that could not have been handled in mass public outreach. IRS provided tools and information on ESP through its Web site and adjusted the information quickly. For example, on the day the legislation was passed, IRS posted information on IRS.gov with details about ESP. Within a month, IRS launched the Stimulus Payment Calculator. IRS also provided a “Where’s My Stimulus Payment?” feature, launched May 1, 2008 (see app. II for more details on the ESP timeline). These ESP-related features resulted in a dramatic increase in the volume of taxpayer visits, as shown in table 5: visits to IRS’s Web site increased 74 percent. However, excluding taxpayer visits to ESP-related features, visits were slightly down from last year. Although difficult to quantify, considering the large number of visits to ESP-related features on IRS’s Web site, it is likely that the ESP information and features on IRS.gov provided sufficient information to divert taxpayers’ questions from IRS’s Rebate Hotline. One measure of the quality of IRS’s Web site is its ranking in the Keynote Systems top 40 government Web sites. During the 2008 filing season, IRS.gov ranked first or second in response time out of the top 40 government Web sites in the Keynote Government Index weekly ratings, compared to ranging between third and sixth place last filing season. Volunteer partners prepared 3.2 million returns, a 31 percent increase over last year, with virtually the same number of sites. IRS officials attribute this growth mainly to increased word-of-mouth promotion of sites and the ESP outreach efforts of volunteer partners. In contrast, the total number of taxpayer contacts at IRS’s 401 walk-in sites declined slightly in the 2008 filing season compared to previous years.
To assess the quality of the assistance at volunteer sites, IRS conducts mystery shopping, site, and tax return reviews. This filing season, IRS officials conducted 85 mystery shopping reviews, in which the accuracy rate for return preparation at volunteer sites averaged 75 percent. While this is an improvement from last year, we are still concerned that, because of the low number of mystery shopping reviews conducted, the quality of volunteer-prepared returns as a whole remains largely unknown. IRS is in the process of improving how it measures the efficacy of its outreach efforts. In response to our recommendation, IRS has hired a contractor to conduct surveys and focus groups to assess the ability of IRS partners, such as AARP, to reach their target populations (e.g., the elderly and limited-English-proficiency and rural populations) and to measure the effectiveness and quality of that outreach. In response to another one of our recommendations, to further improve its quality assessment at walk-in sites, IRS has expanded contact recording—a system IRS uses to record and assess the quality of other interactions between its employees and taxpayers—at walk-in sites to include return preparation. According to IRS officials, the agency hired adequate staff to review the recorded sessions, which are considerably longer for return preparation assistance than for tax law or account assistance. Further, IRS officials report that by the start of the 2009 filing season, contact recording of return preparation will be operational at 306 sites. The accuracy of account assistance at IRS walk-in sites was 85 percent, similar to last year. However, the accuracy of tax law assistance declined significantly to 68 percent, down from 80 percent last year. According to IRS officials, this decline was due, in part, to the timing for hiring and training hundreds of new staff, which was dependent on when IRS received funding.
Further, those staff had to be trained on IRS’s new interactive tax law assistance guide at the beginning of the filing season. IRS officials expect tax law accuracy to improve for the 2009 filing season as walk-in site staff gain experience with the new guide. Based on IRS data, the estimated costs and foregone revenue of implementing ESP will reach up to $960 million. This includes $202 million IRS received in a supplemental appropriation in fiscal year 2008, plus funding transferred from the Financial Management Service and IRS’s user fee accounts (see table 6). Because IRS anticipates the need for continued funding associated with the implementation of ESP, it requested an amendment to its fiscal year 2009 budget to receive an additional $68 million, $29 million of which it has already received under a continuing resolution. This funding would be used, in part, to cover telephone demand, which IRS expects to remain well above normal in 2009 due in part to ESP. Because of the timing of ESP, IRS officials said that they did not have time to hire, conduct background checks on, and train additional staff to handle the increased telephone volume. Instead, IRS shifted hundreds of Automated Collection System (ACS) staff to answer calls to the Rebate Hotline from March through August 2008. As of August, IRS reported $655 million in foregone revenue because of ACS staff being taken off their collections work to answer the unprecedented volume of calls related to ESP. As we previously reported, IRS considered alternatives to shifting ACS staff, including contracting out, using other staff, or using Social Security Administration (SSA) staff, but decided the alternatives were not feasible. According to IRS officials, while IRS expected some foregone revenue and paper backlog associated with the use of collections and other staff, it determined that delivering and supporting ESP was its highest priority, after the filing season.
According to IRS officials, volunteer partners helped IRS mitigate the costs of delivering ESP to the targeted population of benefits recipients. These partners played a key role in funding and carrying out outreach to inform targeted groups (e.g., the elderly, those with limited English proficiency, and the disabled) about their eligibility for the economic stimulus. For example, AARP supplied economic stimulus information on its Web site and in its monthly bulletin, and IRS officials reported that AARP paid for one of the mailings to people over 65 who may have been eligible for a stimulus payment but had not yet filed a tax return. According to IRS officials, partly because of the efforts of the volunteer partners, 82 percent of the targeted individuals eligible for the economic stimulus participated in the program. This level of participation is relatively high compared to some other programs. For example, we have previously reported that participation rates in entitlement programs generally range from about 47 percent for the Food Stamp Program to 75 percent for the EITC program. Consistent with best practices for government organizations, IRS is compiling a report on ESP that will summarize the costs to implement ESP and the effects on filing season operations. IRS is working to verify the cost information associated with ESP. As part of its report, IRS hired a contractor to conduct a lessons-learned study, which is due to be issued in December 2008. IRS has authority to disallow child and dependent care credit claims on returns with the filing status of “Married Filing Separately” because taxpayers who use this filing status are not eligible for the credit. Under its current procedures, however, IRS processes all returns with child and dependent care claims and issues refunds as appropriate. IRS then audits, as appropriate, taxpayers who made such claims while filing as “Married Filing Separately.” IRS audited about 6,000 of these cases in 2005.
After verifying that a claim is ineligible, IRS is left trying to collect the money from taxpayers, who may have spent the money and now owe back taxes, plus penalties and interest. For lower-income taxpayers, this may represent a substantial financial burden. Some taxpayers claiming child and dependent care credits may file as “Married Filing Separately” by mistake. They could be eligible for the credit if they file using a different status, such as “Single,” “Married Filing Jointly,” “Head of Household,” or “Qualifying Widow(er) with a Dependent Child.” Using audits to correct such mistakes is labor intensive for IRS. IRS officials confirmed that, if IRS used its MEA in these cases, it could inform taxpayers of their potential eligibility in the associated math error notices. A second area where IRS has MEA but does not use it is disallowing EITC claims when taxpayers are listed as the noncustodial parent in the FCR. Noncustodial parents are generally ineligible for EITC because, in most but not all cases, the child has not lived with the noncustodial parent for more than the required 6 months of the tax year to meet EITC eligibility. According to EITC officials, IRS does not use its MEA in these cases because the child may have lived with the noncustodial parent for more than 6 months, meeting the residency requirement for eligibility, yet the FCR may not reflect this living arrangement. Instead of using its MEA, IRS audits fewer than 2 percent of these cases; it does not audit more because audits are labor intensive. As a consequence, most of the more than 41,000 claims for $91 million in 2006 by noncustodial parents were allowed without verification. IRS does not know how many of these claims were incorrect, but for the claims it audited, the error rate was about 91 percent for 2006.
IRS officials reported that, as part of their 2009 EITC Research Plan, they plan to review the reliability and applicability of linking the custodial information in the FCR to EITC eligibility. They stated that if the FCR is found to be sufficiently reliable, IRS could use its MEA to automatically identify and correct ineligible EITC claims. By focusing on FCR data alone, IRS’s review may be missing another way to use MEA to verify EITC claims by noncustodial parents and minimize the chance of disallowing eligible claims. IRS could combine FCR data with factors it currently uses to select returns for audits, such as taxpayer characteristics like filing status. In a limited sample of returns from tax year 2006, audit data show that the FCR combined with audit selection factors such as filing status allowed IRS to accurately identify ineligible EITC claims by noncustodial parents in 98 percent of cases. Without including a more thorough assessment of this approach in its FCR review, IRS will not know the extent to which more accurate eligibility decisions could be made. Because the agency has MEA to use the FCR, it does not need additional authority to use the FCR together with the audit selection factors. We identified two areas related to IRA contributions where IRS officials reported that they have the technical ability to accurately identify and correct math errors and did so as recently as 2006. However, at that time, IRS Chief Counsel determined that the Congress would need to grant IRS additional MEA to use age-based information to automatically disallow certain ineligible IRA contributions, and thus IRS discontinued the use of math error checks in these areas. First, some taxpayers under the age of 50 claim IRA contribution amounts that are more than allowed. These taxpayers incorrectly claim the amount intended for people over 50 making IRA “catch-up” contributions.
In 2004, IRS identified 24,000 instances of these IRA contribution overclaims, resulting in $23.2 million in underreported taxes. IRS used age data from the SSA database to check the age eligibility of these contributions. IRS counsel determined that the agency does not have MEA to use age-based data in this manner, although IRS has used, and continues to use, age data from the SSA database for other math error types, such as EITC eligibility. Second, some taxpayers over the age of 70-½ claim contributions to a traditional IRA, which they are not entitled to make. By law, taxpayers over the age of 70-½ cannot make contributions to traditional IRAs. TIGTA found that in 2006, 1,826 taxpayers over the age of 70-½ improperly claimed $4 million in IRA deductions, for an estimated revenue loss of $601,000. IRS used age data from the SSA database to check the age eligibility of these contributions. IRS officials believe that this problem is likely to become significantly larger as the population ages. However, IRS currently lacks MEA to identify and correct ineligible claims. Despite the challenges of the economic stimulus program, IRS was generally successful in providing service to taxpayers during the 2008 filing season. The major exception was telephone service, where the large, unanticipated increase in call volume caused by ESP significantly affected performance. Although IRS focuses on taxpayer service during the filing season, it also uses MEA to conduct important compliance checks while processing tax returns. There are two areas where IRS could use its existing authority more fully and where the agency could improve service through notices informing taxpayers of potential eligibility for child and dependent care credits. With additional MEA, IRS could further increase compliance, improve taxpayer service, and gain additional efficiencies.
We recommend that the Commissioner of Internal Revenue direct the appropriate officials to (1) use IRS’s existing MEA to identify and correct child and dependent care credit claims on “Married Filing Separately” returns; (2) include information on math error notices to inform taxpayers that they may be eligible for the child and dependent care credit if they file under a different status, such as “Single,” “Married Filing Jointly,” “Head of Household,” or “Qualifying Widow(er) with a Dependent Child”; and (3) assess the effectiveness of combining FCR and other data on taxpayer characteristics to verify the eligibility of EITC claims from noncustodial parents. Given the potential for improving compliance now and in the future, the Congress should provide IRS with the authority to use math error checks to identify and correct returns with ineligible (1) IRA “catch-up” contributions and (2) contributions to traditional IRAs from taxpayers over age 70-½. The Deputy Commissioner of Internal Revenue provided written comments in a December 4, 2008, letter in which she agreed with all our recommendations and outlined IRS’s actions to address them. With respect to child and dependent care credit claims on “Married Filing Separately” returns, IRS plans to make programming changes that will allow the agency to use math error checks to identify and correct claims and to create notices informing taxpayers of their possible eligibility for the child and dependent care credit if they file under a different status. For the recommendation to assess the effectiveness of combining FCR with other data on taxpayer characteristics, IRS plans to do so as part of its study on the reliability and applicability of noncustodial information in the FCR.
The Deputy Commissioner also supported our suggestions for congressional consideration to provide IRS with legal authority to automatically correct returns for individual retirement account contributions that violate the dollar or age limits. She further stated that such authority could increase compliance and improve taxpayer service. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of the report until 30 days after its date. At that time, we will send copies of this report to the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director, Office of Management and Budget; relevant congressional committees; and other interested parties. This report is available at no charge on GAO’s Web site at http://www.gao.gov. For further information regarding this report, please contact James R. White, Director, Strategic Issues, at (202) 512-9110 or whitej@gao.gov. Contacts for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report include Joanna Stamatiades, Assistant Director; Shea Bader; Julia Jebo; Karen O’Conor; Cheryl Peterson; and Neil Pinney. In the math error program, IRS uses computer-programmed comparisons and calculations to systemically identify math errors during tax return processing for correction (see fig. 3). These corrections involve mathematical calculation errors, incorrect taxpayer identification numbers, filing status and dependents, and missing schedules or forms. Once the computer programming detects an error, it forwards the tax return account to the Error Resolution System, where an IRS employee takes the appropriate action to resolve the error condition and provide the information needed to generate a notice to the taxpayer.
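The processing flow just described (an automated comparison flags a discrepancy, and the account is routed to a resolution queue for an employee to correct) can be illustrated with a minimal sketch. The field names, checks, and queue below are hypothetical simplifications for illustration, not IRS's actual systems:

```python
def check_math_errors(ret):
    """Flag simple discrepancies on a return record (illustrative only)."""
    errors = []
    # Recompute the total from the reported line items.
    if sum(ret["income_items"]) != ret["total_income"]:
        errors.append("total_income mismatch")
    # A taxpayer identification number must be nine digits.
    if len(ret["tin"]) != 9 or not ret["tin"].isdigit():
        errors.append("invalid TIN")
    return errors

# Flagged accounts would be forwarded to an error-resolution queue,
# where an employee corrects the account and a notice is generated.
resolution_queue = []
ret = {"income_items": [30000, 5000], "total_income": 34000, "tin": "123456789"}
errs = check_math_errors(ret)
if errs:
    resolution_queue.append((ret, errs))
```

The sketch shows only the detect-then-route step; the correction and taxpayer notice are handled downstream, as the report describes.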
As early as the first codification of the Internal Revenue law in 1926, the Congress granted IRS math error authority (MEA) so that IRS does not have to provide the taxpayer with a statutory notice of deficiency for math errors. In general, these are errors that must be corrected in order for IRS to process the tax return. A 1976 statutory revision defined the authority to include not only mathematical errors but also other obvious errors, such as omissions of data needed to substantiate an item on a return, and provided a statutory right to file a request for abatement of the assessment within 60 days after the notice is sent. In the 1990s, the Congress extended the authority five times to help determine eligibility for certain tax exemptions and credits. Table 7 summarizes the legislative authority on math error provisions for individual tax returns. Despite the added individual return volume due to the Economic Stimulus legislation, as of June 30, 2008, IRS met or exceeded seven out of the eight processing performance goals. As shown in table 8, IRS met or exceeded its goals for the percentage of errors included in deposits and correspondence (which was separated into letter and notice errors in previous years); deposit and refund timeliness (i.e., interest foregone by untimely deposits); and productivity and Individual Master File efficiency. The one measure where performance was below IRS’s goal and last year’s level was the refund error rate, i.e., the percentage of refunds with IRS-caused errors issued to taxpayers. According to IRS officials, the increase in the refund error rate is attributed to a programming change that resulted in IRS catching additional items, such as address or name changes, as it began transcribing data from paper returns. According to IRS officials, because the changes did not result in significant additional transcription, the increased error rate did not delay refunds or the processing of returns.
The tax filing season is when the Internal Revenue Service (IRS) has most of its contacts with taxpayers, answering questions and processing returns and refunds. The 2008 filing season was particularly challenging due to the unanticipated mandate to make economic stimulus payments. The filing season is also the start of IRS's efforts to ensure the newly filed returns are compliant with the tax laws. GAO was asked to assess IRS's performance, describe the costs and foregone revenue of administering the economic stimulus payments, and identify any opportunities for improving filing season compliance checks. GAO analyzed IRS performance data, reviewed IRS operations, and interviewed IRS officials. IRS successfully processed 150 million returns and issued 105 million refunds for $246 billion as of September 12, 2008. In addition, IRS issued 116 million stimulus payments totaling $94 billion. However, taxpayers' access to IRS's telephone assistors was substantially lower than last year because of an unanticipated increase in telephone call volume. Calls to IRS more than doubled to 118 million as many taxpayers had questions about the amount of their stimulus payment or its timing. IRS acted to answer the calls, including shifting hundreds of staff from collection cases to telephone assistance. IRS took other actions such as adding features to its Web site that answered stimulus-related questions and likely diverted some calls. Regardless, taxpayers' ability to get through to IRS telephone assistors declined from about 81 percent of waiting callers getting through last year to about 57 percent from January through June 30 of this year. IRS expects the costs and foregone revenue associated with issuing the economic stimulus package (ESP) payments to reach about $960 million, of which $655 million is revenue foregone due to the shift of collections staff to telephone service.
There are two areas where IRS does not use its current legal authority to automatically correct errors when processing tax returns. One involves eligibility for the child and dependent care credit by taxpayers who are "Married Filing Separately." Taxpayers in this filing status are not eligible for the credit, but IRS allows the credit to be claimed, issues refunds, and then audits taxpayers to try to recover the money. The second is that IRS does not use its existing legislative authority to verify earned income tax credit claims by noncustodial parents--in 2006, $91 million of claims were unverified. IRS has plans to study one option for verifying these claims, but is not planning to study another option, combining federal data on noncustodial parents and other taxpayer characteristics to automatically determine eligibility. Finally, there are two areas where IRS lacks legal authority, but has the technical ability to use automated error checks. IRS could prevent (1) individuals from deducting contributions to individual retirement accounts above the allowable limit and (2) individuals from violating the age requirements for such contributions. |
The U.S. air transportation structure is dominated by “hub-and-spoke” networks and by agreements between major airlines and their regional affiliates. Since the deregulation of U.S. commercial aviation in 1978, most major airlines have developed hub-and-spoke systems. For example, Northwest Airlines (Northwest) has hubs in Minneapolis, Detroit, and Memphis. United Airlines (United) has hubs in Chicago (at O’Hare International Airport), Denver, Los Angeles, San Francisco, and Washington, D.C. (at Dulles International Airport). Major airlines provide nonstop service to many “spoke” cities from their hubs, ranging from large cities like Portland, Oregon, to smaller communities such as Des Moines, Iowa, and Lincoln, Nebraska. Depending on the size of those markets (i.e., the number of passengers flying nonstop between the hub and the “spoke” community), the major airlines may operate their own large jets on those routes or use regional affiliates to provide service to other communities, usually with regional jet and turboprop aircraft. The airports in small “spoke” communities are the smallest in the nation’s commercial air system. The Federal Aviation Administration (FAA) and federal law categorize the nation’s commercial airports into four main groups based on the number of passenger enplanements—large hubs, medium hubs, small hubs, and nonhubs. Generally, airports in major metropolitan areas—such as Chicago, New York, Tampa, and Los Angeles—are “large hubs.” There are 31 large hubs, and they serve the majority of airline traffic, accounting for nearly 70 percent of U.S. passenger enplanements in 1999. At the opposite end of the spectrum are the nonhub airports—the airports for the communities that are the focus of this study. In all, 404 airports were categorized as nonhubs in 1999. As a group, they enplaned only about 3 percent of passengers in 1999. Of those airports, we analyzed 267, which are generally the largest airports within this group. 
These included 202 small community airports in the continental United States and 65 in Alaska and Hawaii. Table 1 provides more information about the four airport categories, along with an illustration of each type. The typical community of the 202 small communities in our analysis had nine departing flights per day in October 2000, and most had no jet service. They were typically served by two airlines, though 41 percent had service from only one airline. Individually, however, they varied considerably, from having no more than 1 or 2 daily departures to having more than 60. As a group, their limited level of service is related to their small populations. As individual communities, the varied levels of service reflect differences in other factors such as the level of local economic activity and proximity to nearby airports. In October 2000, the typical small community among the 202 we analyzed had the following levels of service: Service from two different airlines or their regional affiliates, each providing service to a different hub where passengers could make connections to other flights in the airline’s hub-and-spoke system. However, a substantial minority of the communities—41 percent—had service from only one airline. Nine departing flights a day, most if not all of them turboprops rather than jets. In all, only 67 of the 202 communities had any jet service. The level of service varied significantly from community to community. At the higher end were airports serving resort destinations like Key West, Florida, where five different carriers operated 44 average daily departures to six nonstop destinations, and communities such as Fayetteville and Bentonville, Arkansas (the headquarters for Wal-Mart Stores, Inc.), near the Northwest Arkansas Regional Airport, where five air carriers scheduled 42 average daily jet and turboprop departures to seven nonstop destinations. 
The highest number of daily departures—62—was for Nantucket, Massachusetts, a resort community served by turboprop and even smaller piston aircraft. At the other extreme were communities such as Hattiesburg, Mississippi, and Thief River Falls, Minnesota, with an average of 3 and 1 daily departures, respectively. In total, the 10 small communities with the most air service typically had more than 38 scheduled departures per day, while the 10 small communities with the least air service typically had fewer than 3 scheduled departures per day. Table 2 summarizes the range of air service that was available at the 202 small community airports in October 2000. For purposes of comparison, appendix II provides additional information on key differences in the scope of air service that airlines scheduled at nonhub and small hub communities. Small hub airports tend to serve somewhat larger communities and have significantly more commercial air service than do the 202 nonhub airports in the continental United States. The most obvious reason for the generally limited level of service at these small communities is their small size. As a whole, the 202 airports served a small portion of the U.S. population and geographic territory. In 2000, the median population of the 202 nonhub airport communities in our analysis was about 120,000, and the median number of daily passenger enplanements in 1999 was about 150. However, airports typically serve populations and businesses in a larger surrounding area—typically referred to as a “catchment area.” An airport’s catchment area is the potential geographic area for drawing passengers. The geographic size of a catchment area varies from airport to airport depending on such factors as how close an airport is to other airports and whether the airport is served by a low-fare airline (and, therefore, attractive to passengers from farther away).
Catchment area size estimates provided by the airport directors we surveyed showed that these airports potentially serve a total population of about 35 million (about 12 percent of the continental U.S. population). Figure 1 shows the location and catchment areas of the 202 airports in our analysis, as estimated by airport directors responding to our survey, along with our estimates of catchment areas for those who did not respond. The small size of these markets greatly affects the level of service because airlines’ decisions about offering service are motivated primarily by the need to maintain profitability. An airline’s profitability generally depends on its ability to generate revenues in a given market while managing its costs. The airline industry is capital- and labor-intensive and is largely dependent on passenger traffic for its revenues. The airlines use sophisticated computer models to help them identify whether certain markets can be served profitably. The limited amount of passenger traffic from many of these communities limits the number of flights airlines provide. It is also not surprising that turboprops have typically provided most service to these communities, because turboprop aircraft are generally the least expensive type of aircraft to buy and operate. The role of size in limiting a small community’s service can be seen by stratifying the small communities into population groups. As part of our analysis, we separated the 202 small communities into three groups—those smaller than 100,000, those with populations of 100,000 to 249,999, and those with populations of 250,000 and greater. As figure 2 shows, the “smallest of the small” typically had lower median levels of service as measured by such indicators as number of daily departures (both turboprop and jet) and service from more than one airline. Besides population, a variety of other factors may influence how much service an individual community receives.
In our analysis, two such factors stood out. One of these was the level of economic activity. The airline industry is highly sensitive to the business cycle, and its economic performance is strongly correlated with fluctuations in personal disposable income and gross domestic product. When the economy is growing, the demand for air transportation grows, allowing carriers to raise yields (prices) and profitability. When the economy falls into recession, unemployment grows, individuals postpone discretionary travel, and airline yields and profitability decline. In particular, a key element to the profitability of an airline’s operation in a given location is the availability of high-yield passenger traffic—that is, business travelers who are more likely to pay higher airfares than leisure travelers. Communities with greater amounts of local business activity may have more (and different) air service than communities with less economic activity. Of course, the reverse may also be true—that local economic activity cannot improve without enhanced air service. Thus, the presence or absence of air service may also positively or negatively affect local economic activity, rather than local economic activity dictating the amount and type of air service. Our analysis showed a statistically valid relationship between the economic characteristics of small communities and the amount of air service that they received. Economic principles led us to expect that passenger demand for air service would be greater in communities with more jobs and higher incomes. Our results were consistent with these expectations. Larger communities with more income and “regional product” had service from more major carriers and had more weekly departures. For example, for every additional 25,000 jobs in a county, a community received 4.3 more jet departures per week and 4.8 more turboprop departures per week. 
Similarly, for every additional $5,000 in per capita income, a community received 3.3 more jet departures per week and 12.7 more turboprop departures per week. In other words, if two small communities, A and B, were similar except that Community A had $5,000 more in income per capita than Community B, Community A would have had 16 more departures per week than Community B. A third main factor that stood out as helping to explain the variation in service levels between small communities, in addition to relative size and economic activity, was the community’s relative proximity to larger airports. If a small community is located within relatively close driving distance of another commercial airport, passengers may drive to the other airport, rather than fly to or from the local community airport. This tendency to lose passengers to other airports is referred to as “leakage.” Of the 202 small communities in our study, 94 (47 percent) are within 100 miles of an airport that is served by a low-fare airline or that serves as a hub for a major airline. Figure 3 shows circles with 100-mile radiuses around those hub or low-fare carrier airports, and thus the number of small communities that are within 100 miles of those alternative airports. As figure 3 also shows, the concentration of nonhub airports that are close to larger airports is much greater east of the Mississippi River than west of the river. Over 70 percent of the eastern small communities are within 100 miles of a hub or low-fare airport, compared with 26 percent of the western small communities. Our survey of small community airport officials confirmed the likely effect of being close to alternative airports. When asked whether they believed local residents drove to another airport for airline service (prior to September 11), over half of them said that they believed this occurred to a great or very great extent. 
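The coefficient arithmetic behind the Community A versus Community B comparison above can be replayed in a short sketch. The coefficients are as reported; the simple linear combination (and the helper function) is an illustration, not GAO's estimated model:

```python
# Reported effects on weekly departures at small community airports.
JET_PER_25K_JOBS = 4.3      # jet departures per additional 25,000 county jobs
TURBO_PER_25K_JOBS = 4.8    # turboprop departures per additional 25,000 jobs
JET_PER_5K_INCOME = 3.3     # jet departures per additional $5,000 per capita income
TURBO_PER_5K_INCOME = 12.7  # turboprop departures per additional $5,000 income

def extra_weekly_departures(extra_jobs, extra_income_per_capita):
    """Predicted additional weekly departures for a community with the given
    differences in jobs and per capita income (hypothetical helper)."""
    job_units = extra_jobs / 25_000
    income_units = extra_income_per_capita / 5_000
    jets = job_units * JET_PER_25K_JOBS + income_units * JET_PER_5K_INCOME
    turboprops = job_units * TURBO_PER_25K_JOBS + income_units * TURBO_PER_5K_INCOME
    return jets + turboprops

# Community A has $5,000 more per capita income than an otherwise similar
# Community B: 3.3 jet + 12.7 turboprop = 16 more departures per week.
print(round(extra_weekly_departures(0, 5_000), 1))  # 16.0
```

The same helper reproduces the jobs example: an additional 25,000 county jobs corresponds to 4.3 + 4.8 = 9.1 more weekly departures.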
Eighty-one percent of them attributed the leakage to the availability of lower fares from a major airline at the alternative airport. Also according to our survey of airport directors, small community airports that are closer to larger airports experience a greater extent of passenger leakage than small community airports that are farther away. We were not able to obtain and analyze certain key data that might explain in detail why passengers opt to use other airports rather than their local facility. In particular, we were not able to obtain information on the differences in airfares among competing airports. However, prior GAO reports indicate that fares at small community airports tend to be higher than fares at larger airports. While choosing to drive to other airports in the vicinity that offer service from other airlines may allow passengers to gain the flight options and fares they want or need—a clear benefit to the individual traveler—it likely affects the local community’s ability to attract or retain other competitive air service. In addition, there may be other factors that influence the amount and type of service that air carriers can provide at small communities. As agreed, we intend to examine possible approaches to enhancing air service to these communities in a subsequent report. Between October 2000 and October 2001 (revised schedules), the number of total daily departures in small communities dropped by 19 percent. Airlines planned part of these decreases before September 11 (a 6 percent reduction) but made even steeper reductions (13 percent) afterward. In 36 communities, at least one of the airlines providing service withdrew entirely from the market, with most of these withdrawals coming before September 11. The number of communities with service from only one airline grew by 12, raising the percentage of communities with one-airline service to 47 percent.
While many communities lost service, carriers initiated service at 14 communities. Nearly all of these gains occurred prior to September 11. Airlines substantially reduced total scheduled departures at the 202 small communities we reviewed between October 2000 and October 2001. As figure 4 shows, airlines scheduled an average of 2,406 departures daily during the week of October 15–21, 2000. In their original schedules for the week of October 15–21, 2001 (that is, the schedules prepared before September 11), airlines had planned to operate an average of 2,257 departures per day, a reduction of 6 percent from the October 2000 level. Airlines made further—and sharper—service reductions following September 11. According to our analysis of the airlines’ revised schedules for October 2001, the average number of scheduled daily departures from smaller communities dropped to 1,937, or about 320 (13 percent) fewer departures than originally planned. Combined, these schedule changes amounted to a total reduction of about 19 percent from October 2000’s flight schedule. The median community in our group of 202 had nine daily departures in October 2000. After the combined drop that was planned both before and after September 11, the median community had six daily departures in October 2001. Other industry data regarding service decreases were consistent with the decreases identified in our 202 small communities. According to one industry analysis, the changes in daily scheduled seats from U.S. airports were generally comparable across airports of all sizes. Small hubs experienced a greater relative decrease in service (-15.5 percent) compared to nonhubs (-13.5 percent). Large hubs had the greatest relative decrease in total available seats (-16.3 percent), and medium hubs had a smaller decrease (-12.4 percent). The 19-percent drop in average daily departures came almost exclusively from turboprop flights.
In October 2000, 67 of the 202 communities had some jet service. Airlines tended not to reduce jet flights in those locations where they already were in place. Overall, there were slightly more jet departures in October 2001 than in October 2000. As table 3 shows, median daily turboprop departures dropped from 8 to 6 before September 11 and from 6 to 5 afterward. On other service measures—number of airlines providing service and number of nonstop departures—the median for these communities remained the same. While the typical small community had the same number of airlines—two—providing service both in October 2000 and October 2001, a number of communities gained or lost a carrier. In the aggregate, the movement was downward, with 36 communities experiencing a net decline in the number of airlines providing service and 14 communities experiencing a net increase. For the 36 communities that lost airlines, most lost them as a result of airline decisions made prior to September 11. Likewise, communities that experienced net gains in the number of carriers did so primarily as a result of airline decisions made before September 11. Among communities that lost airlines, two (Cumberland, Maryland, and Rockford, Illinois) lost service altogether. The overall effect of these gains and losses was a decrease in the number of communities served by four, three, and two airlines and an increase in the number of communities served by only one airline (see fig. 5). In all, the number of communities served by only one airline increased from 83 (41 percent) to 95 (47 percent) of the 202 communities in our review. At a minimum, communities that lost an airline were at risk of losing connecting service to some destinations. Of the 36 communities that lost service from an airline, 5 did not lose access to any destinations, but 31 lost connecting service to other destinations.
For example, when Abilene, Texas, lost its service from one of the two airlines that had been providing service, it lost one-stop connections to 14 destinations. Excluding the 2 communities that lost all service, the other 34 communities lost an average of 12 one-stop connections when one of the carriers discontinued flight operations there. Service changes at Lake Charles, Louisiana, illustrate what happens when an airline withdraws from a market. In October 2000, Continental Express and American Eagle both served Lake Charles Regional Airport. Continental Express was the dominant carrier, providing 40 weekly flights (57 percent of the total capacity, as measured by the number of available seats on departing flights). After September 11, American Eagle discontinued its 27 weekly flights from Lake Charles to Dallas-Fort Worth. Continental Express continued to fly, offering 38 weekly flights (2 fewer) to Houston. The loss of American Eagle’s service also meant that Lake Charles’s passengers could no longer reach 13 other destinations via one-stop connections at Dallas—destinations that Continental did not serve. It is difficult to assess the effect of losing a carrier on competition at a particular community. For one thing, the number of carriers providing service to a small community is an imperfect measure of competition. In the airline industry, competition is normally defined in terms of the number of different carriers serving the same city-pair market—that is, the route between the same two cities. Most small communities that received service from two or more carriers had nonstop flights to two or more airlines’ hubs. In the nonstop markets between the small community and those hubs, there was probably little direct competition initially; passengers, therefore, experienced little if any loss of competition on those routes if one of the carriers discontinued service.
However, if the passenger’s final destination was not an airline hub city, then different airlines may compete directly in offering connecting service to the same destination city but through their different respective hubs. In such cases, the loss of an airline’s service at a small community means the loss of a competitive choice. Where competition is lost, the risk that consumers may be subject to higher airfares increases. Two primary external events that occurred since October 2000—the economic decline that began in early 2001 and the collapse of airline passenger traffic after September 11—significantly affected carriers’ financial conditions and thus influenced decisions about service throughout their networks, including service to small communities. As the nation’s economic performance declined, fewer passengers opted to fly. Consequently, airline revenues dropped, and airlines sought ways to control costs. They did so, in part, by reducing scheduled operations. In many small communities, they reduced the number of flights they were providing, and in some communities where they had a small portion of the market, they pulled out altogether. After September 11, passenger traffic and revenue plummeted, exacerbating the situation. Beyond their reactions to the economic slowdown and the events of September 11, airlines also made some changes on the basis of long-range decisions about the composition and deployment of their fleets—decisions, generally, to reduce turboprop operations and increase regional jet service. Some communities lost service as carriers retired certain types of small aircraft. Nationally, the U.S. economy slowed down during 2001 and moved into its first recession (as defined by the National Bureau of Economic Research) since 1991. This change in the national economy is reflected in airline passenger and revenue data. 
In the latter parts of 2000, monthly airline passenger traffic and revenue were still growing compared with the same periods in 1999. But beginning in February 2001, passenger traffic generally declined. Additionally, reflecting a drop in high-yield business traffic, total passenger revenues decreased at a steeper rate than passenger traffic, as shown in figure 6. For the U.S. airline industry as a whole, data from the Bureau of Transportation Statistics (BTS) indicate that airlines’ net income turned negative in the second quarter of 2001. Simultaneously across the industry, airline costs were rising. Carriers began efforts to control costs, in part by reducing service. As the economy slowed down, industry analysts projected that U.S. commercial airlines would lose over $2 billion in 2001. The events of September 11 accelerated and aggravated negative financial trends already evident in the airline industry. In response to significant losses experienced by the carriers stemming from the temporary shutdown of the nation’s airspace and the drop in passenger traffic, the president signed the Air Transportation Safety and System Stabilization Act, which provided up to $4.5 billion in emergency assistance to compensate the nation’s passenger air carriers for these losses. The change in the airlines’ financial condition may be attributable to both the continued deterioration of passenger revenues and the inability of airlines to cut their expenses proportionately. Figure 7 shows the significant drop in passenger traffic after September 11. Data from BTS indicate that passenger enplanements between September 2000 and September 2001 on large air carriers dropped by over 34 percent nationally. As passenger traffic and revenues plummeted, carriers’ efforts to control costs included significant reductions in total capacity—in other words, service reductions. These reductions were dramatic.
According to data from BTS, carriers flew 20 percent fewer departures in September 2001 than in September 2000. Different airlines approached such cost-cutting in different ways. For example, US Airways retired 111 older aircraft from its fleet, eliminating its Boeing 737-200s, MD-80s, and Fokker F-100s. Some carriers also replaced service from their large mainline jets with smaller aircraft operated by regional affiliates to better match capacity with passenger demand, as United did in some markets. In addition, United reduced the total number of departures in its system from about 2,400 before September 11 to 1,654 by October 31, in part by reducing early morning and late evening flights. Service to smaller communities was affected as part of the overall decrease in operations. These two factors—the economic downturn and aftermath of September 11—played out in small communities as well as in larger markets. As with the nation as a whole, small communities saw dramatic decreases in passenger traffic. According to our survey of airport officials, passenger traffic at small communities fell by 32 percent between September 2000 and September 2001—about the same percentage that, according to BTS data, passenger traffic decreased throughout the country. Over 80 percent of the airport managers we surveyed reported that passenger fear (that is, general apprehension related to the events of September 11, 2001) was a key factor in decreased enplanements at their airport since September 11. Airport directors also reported that passenger enplanements dropped because of air carrier service changes (e.g., fewer departures, smaller aircraft, or fewer carriers). In addition, managers indicated that basic economic conditions and post-September 11 airport security requirements reduced enplanements. Thus, the general reductions in service that occurred at small communities can be seen as reflecting airlines’ overall response to these factors. 
Another way that these factors can be seen at work in small communities is in the decisions airlines made to withdraw from a community. In most cases, when an airline withdrew entirely from a community, it was a community in which the airline was competing with other airlines and had only a limited market share. More specifically, of the 36 small communities that lost a carrier between October 2000 and October 2001, there were only six instances in which the carrier that discontinued operations was the largest service provider at the community. The effect of these decisions to withdraw from multiple-carrier markets can be seen in one characteristic we observed in the airline schedule data we analyzed: Among the 202 communities we analyzed, service reductions tended to be greater in those communities with populations above 100,000 than in communities with populations below 100,000. This was true across several types of service indicators, such as number of carriers, total number of daily departures, and number of nonstop flights to more than one destination. Across all these indicators, communities with populations below 100,000 typically had lower levels of service than their larger counterparts both in October 2000 and October 2001, but compared with these larger communities, they lost less of that service during the 1-year period we measured. One reason may be that over half of small communities with populations less than 100,000 were served by only one airline, both in October 2000 and in October 2001. Thus, airlines’ decisions to withdraw from multiple-carrier markets had little effect on them. While the economic downturn and the events of September 11 were potent factors in shaping airline service to small communities, some of the changes that were occurring reflected airline efforts on other fronts. 
The number of departures or available seating capacity at some small community airports changed when some major airlines directed their regional affiliates to shift some of their aircraft fleets to operate at different hubs in their systems. Similarly, changes in the number of departures or available seating capacity at some small community airports reflected strategic decisions that carriers had made about the composition and deployment of their fleets—decisions to replace their turboprop aircraft with regional jet aircraft. These decisions were made with the concurrence of the regional carriers’ mainline partners. Three examples illustrate how such restructurings often affected service to some small communities. According to a Northwest official, the carrier began restructuring parts of its Northwest Airlink regional fleet in 2002. Northwest began retiring turboprops at its wholly-owned affiliate, Express Airlines I, while increasing the number of regional jets in that carrier’s fleet and deploying them at all three of its hubs—Detroit, Minneapolis/St. Paul, and Memphis. Northwest decided that its other regional carrier, Mesaba Airlines, would become the sole operator of turboprops at its hubs beginning in February 2002. Mesaba also operates 69-seat regional jets. According to our analysis, between October 2000 and October 2001, Express Airlines I and Mesaba altered service at 60 small communities. Overall, more small communities lost service than gained service from these carriers during this period. A total of 49 small communities lost some capacity (e.g., through a reduction in flight frequency or use of smaller aircraft) from these carriers, with four of them losing service from Express Airlines I and Mesaba entirely because, according to a Northwest official, they were no longer profitable. 
On the other hand, 11 communities gained service—9 of them gaining additional flights or extra capacity through larger aircraft, and 2 gaining start-up service from the two airlines. Appendix VI provides more information about the small community service changes made by Express Airlines I and Mesaba between 2000 and 2001. Service changed at some communities when United renegotiated the contract with one of its regional carriers—Great Lakes Aviation. In 2000, both Great Lakes and Air Wisconsin served as United Express carriers operating between United’s Chicago and Denver hubs. However, beginning in May 2001 under a revised contract, Great Lakes no longer operated as a United Express carrier and instead continued in a “codesharing” relationship with United. Under this new arrangement, Great Lakes could decide which markets it served, but United was free to decide whether or not to codeshare on those routes. Furthermore, United expanded the amount of service Air Wisconsin (as a United Express carrier) provided to many of these communities. Of the 202 communities in our study, 16 were served by Great Lakes, 15 by Air Wisconsin, and 9 by both. Between October 2000 and October 2001, United’s changes altered air service at 39 of the 40 communities served by one of these carriers. Of the 39 communities with service changes, Great Lakes pulled out completely from 4. Either Great Lakes or Air Wisconsin decreased capacity at another 30 communities. Five communities gained new service or additional capacity. Appendix VII provides more information about the communities and how they were affected. According to industry sources, some of the decline in turboprop flights and gain in jet flights can be attributed to strategies that some carriers adopted in recent years to phase out turboprop aircraft and replace them with regional jets.
For example, Atlantic Coast Airlines, Inc., which operates as a United Express and as a Delta Connection carrier, is planning to become an “all-jet” carrier by the end of 2003. In October 2000, Atlantic Coast operated 87 aircraft, including 34 regional jets and 53 turboprops, to 51 destinations from Washington Dulles and Chicago O’Hare. Of those 51 markets, 21 were served exclusively with turboprops. In October 2001, Atlantic Coast operated 117 aircraft, including 81 regional jets and 36 turboprops, to 60 destinations. Of those 60 markets, 15 were served exclusively with turboprops. By December 2001, Atlantic Coast had retired all of its 19-seat turboprop aircraft and ended service to two small communities—Lynchburg and Shenandoah Valley, Virginia—when it did so. Other regional carriers, such as American Eagle and Continental Express, have also decided to become “all-jet” carriers. It is not surprising that most small communities have fewer carrier options and less competition than larger communities. The economics of airline operations—that is, the need to cover the cost of operating turboprop or jet service with sufficient passenger revenue—mean that small communities that generate relatively little passenger traffic make profitable operations difficult. Because small communities generate relatively little passenger traffic (especially high-fare business traffic), they tend to have more limited air service than larger communities. As a result, passengers who use these communities’ airports often have less service: fewer nonstop flights to fewer destinations. The declines in air service at small communities in 2001 generally paralleled declines at larger airports. However, because small community airports had much more limited service initially, such decreases may subject passengers to or from those communities to significant effects. 
For example, when small communities lose a competitive air carrier choice, they may lose access to many destinations through one-stop connecting service. Similarly, although we were unable to analyze how airfares changed when the number of carriers serving a community changed, travelers to or from those communities that lost service from one or more carriers may be more vulnerable to noncompetitive pricing and service patterns. The number of communities subject to this vulnerability increased during 2001. Because of the relationship between economic activity and air service, airlines may restore some air service at small communities when local economic conditions improve. However, trends in the industry— such as the replacement of some turboprop aircraft with regional jets— may make it increasingly difficult for air carriers to operate competitive and profitable air service to some small communities. We provided a copy of the draft report to DOT for review and formal comment. We also provided sections of our draft report for technical comment to Northwest Airlines, United Airlines, Great Lakes Aviation, and Air Wisconsin. Officials with DOT and the airlines offered only technical comments, which we incorporated into the report, as appropriate. We are sending copies of this report to the Honorable Norman Y. Mineta, secretary of transportation; United Airlines; Northwest Airlines; the Regional Airline Association; and other interested parties. We will also send copies to others upon request. If you or your staffs have any questions about this report, please contact me, HeckerJ@gao.gov, or Steve Martin at (202) 512-2834, MartinS@gao.gov. Other key contributors to this report are listed in appendix VIII. This report examines the changing air service conditions in small communities. 
Our work focused on three objectives: (1) describing the overall level of air service at the nation’s small communities in 2000 and the main factors that contributed to that service level; (2) examining how the nature and extent of air service changed among the nation’s small communities in 2001, including a specific accounting for how service changed after the September 11 terrorist attacks; and (3) identifying key factors that have influenced these air service changes. To analyze the overall level of service in 2000 and how the nature and extent of air service at small communities changed in 2001, we first defined the universe of small communities. We began by including all nonhub and small hub airports, which various statutes define as small communities. We then narrowed that definition by including only those nonhub and small hub airports included on the Air Carrier Activity Information System (ACAIS) that supports the Federal Aviation Administration’s (FAA) Airport Improvement Program (AIP) entitlement activities. The ACAIS database contains data on cargo volume and passenger enplanements submitted by air carriers to the Department of Transportation (DOT). The ACAIS database categorizes airports by the number of annual enplanements. According to a DOT official, there are three categories:

Primary: Public airports with scheduled, commercial air service and at least 10,000 annual enplanements. These airports are eligible for minimum AIP entitlement funding of between $650,000 and $1 million.

Nonprimary: Public airports with scheduled, commercial air service and annual enplanements between 2,500 and 9,999. These airports are not eligible for AIP entitlement funds but are eligible for “commercial service funds,” which are discretionary AIP funds.

Other: Airports that have scheduled service, but not necessarily commercial service, and fewer than 2,500 annual enplanements.
To limit the scope of our research, we included only those airports that had more than 2,500 annual enplanements (approximately 7 passengers enplaning per day) in 1999. From this list, we eliminated airports that were located in territories, those at which commercial service was subsidized through DOT’s Essential Air Service (EAS) program as of July 2001, those for which our data indicated that carriers had scheduled no service at any time between June 2001 and July 2002, and those nonhub airports that were located in metropolitan areas with populations of one million or greater (e.g., Meigs Field in Chicago). We eliminated the latter group of airports because travelers in those metropolitan areas are not limited to air service from the small airport; rather, they have a choice of other larger airports in the immediate area. We then compared various aspects of air service at the nonhub and small hub airports to see if there was a significant difference between the two. Based on that analysis and agreement with the requesters’ staffs, we defined small communities as those served by nonhub airports that met the above-mentioned conditions. Table 4 summarizes the number of nonhub airports affected by each of these filters. As part of our analysis, we also grouped the nonhub airports based on the size of the surrounding areas’ populations. Because many of these airports are within metropolitan statistical areas (MSAs), we used those population totals. If an airport was not located within an MSA, we used the county population. To determine what overall level of service airlines provided at the nation’s small communities in 2000, we examined air service schedules published by the airlines for the week of October 15–21, 2000.
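The screening steps described above amount to a chain of filters applied to each candidate airport. A minimal sketch of that logic; the records and field names here are invented for illustration and are not drawn from the ACAIS database:

```python
# Illustrative airport records; field names and values are hypothetical.
airports = [
    {"name": "A", "hub_class": "nonhub", "enplanements_1999": 5200,
     "in_territory": False, "eas_subsidized": False,
     "had_scheduled_service": True, "metro_population": 150_000},
    {"name": "B", "hub_class": "nonhub", "enplanements_1999": 1800,
     "in_territory": False, "eas_subsidized": False,
     "had_scheduled_service": True, "metro_population": 40_000},
    {"name": "C", "hub_class": "nonhub", "enplanements_1999": 9000,
     "in_territory": False, "eas_subsidized": False,
     "had_scheduled_service": True, "metro_population": 2_500_000},
]

def in_study(a):
    """Apply the report's screens: more than 2,500 enplanements in 1999;
    not in a territory; not EAS-subsidized; some scheduled service; and
    not a nonhub inside a metro area of one million people or more."""
    return (a["enplanements_1999"] > 2500
            and not a["in_territory"]
            and not a["eas_subsidized"]
            and a["had_scheduled_service"]
            and not (a["hub_class"] == "nonhub"
                     and a["metro_population"] >= 1_000_000))

study_set = [a["name"] for a in airports if in_study(a)]
print(study_set)  # only airport "A" passes every screen
```

Airport B fails the enplanement threshold and airport C is a nonhub in a large metro area, so both are dropped, mirroring how table 4 in the report tallies airports removed by each filter.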
As with our previous reports on changes in air fares and service, the types of service we focused our analysis on were the number of carriers serving the airport, whether the airport was dominated by a single carrier, the number of nonstop destinations served out of the airport, the number of hubs served out of the airport, the number of turboprop and jet departures per week out of the airport, and the types of aircraft serving the airport. We determined these air service dimensions using airline flight schedule information submitted by all U.S. airlines that we purchased from the Kiehl Hendrickson Group, an aviation consulting firm. We did not independently assess the reliability of the Kiehl Hendrickson Group’s data, which it purchases from another vendor, Innovata, LLC. According to the Kiehl Hendrickson Group, Innovata employs numerous proprietary quality assurance edit checks to ensure data integrity. To determine factors associated with those service levels, we reviewed available literature on air service and local economic development, and we interviewed industry officials, consultants, academic experts, and airport officials. Based on that information, we identified a number of factors that relate air service levels with various aspects of small communities. Among the elements identified were the population of those small communities and the proximity of small community airports to other larger airports, many of which either served as a hub for a major airline or were served by a low-fare carrier. We obtained community population data from the U.S. Bureau of the Census. In addition, we asked the airport directors at the small community airports to estimate the size of their airport’s “catchment area.” An airport’s catchment area is the geographic area from which it draws passengers.
For those who did not respond to the survey, we estimated the size of their catchment areas based on the average size of the catchment area for other small community airports in the same geographic region. We then calculated the total population living within the catchment areas using 2000 census tract population data. For each small community airport, we also identified the nearest major airline hub facility and nearest airport served by a low-fare carrier and determined the distance between those airports to the small community airport. We statistically analyzed the extent to which some of the identified factors contributed to overall service levels. That analysis is described in greater detail in appendix V. To determine how air service has changed at small communities over time, we analyzed changes in scheduled air service for different time periods. We used our analysis of air service for the week of October 15–21, 2000 as a baseline for comparison. To minimize the possible effects of seasonality in air service, we then examined air service schedules for the week of October 15–21, 2001. To identify the service changes at small community airports that might be separately attributable to the 2001 economic downturn and the September 11 terrorist attacks, we examined two different sets of airline schedule data for October 15–21, 2001: those that airlines had published prior to September 11, 2001, and those published by the airlines following September 11, 2001. The first October 2001 schedule dataset reflected the schedule as of August 30, 2001, and is, therefore, not reflective of the airline industry’s reaction to the events of September 11. The second schedule dataset for October 15–21 reflected the schedule as of October 12, 2001. Finally, to determine if airlines continued to make substantial changes to their scheduled service, we also analyzed their schedules for the week of November 1-7, 2001. 
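Identifying the nearest major airline hub (or nearest airport served by a low-fare carrier) for each small community airport, as described above, reduces to a great-circle distance search. A sketch assuming simple latitude/longitude inputs; the hub list and coordinates below are approximate and purely illustrative:

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine distance in statute miles between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3959 * asin(sqrt(a))  # Earth radius ~3,959 miles

def nearest_hub(airport, hubs):
    """Return (name, rounded distance) of the hub closest to an airport."""
    name, dist = min(
        ((h, great_circle_miles(*airport, *coords)) for h, coords in hubs.items()),
        key=lambda pair: pair[1])
    return name, round(dist)

# Approximate coordinates (lat, lon), for illustration only.
hubs = {"DFW": (32.897, -97.038), "IAH": (29.984, -95.341)}
small_airport = (30.126, -93.223)  # roughly Lake Charles, LA
print(nearest_hub(small_airport, hubs))
```

Running the same search against every small community airport, once for hub airports and once for low-fare-carrier airports, yields the two distance measures used in the analysis.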
We recognize that airlines make frequent changes to their service schedules, and that service at these communities may have changed since then. We analyzed the same service elements as for the week of October 15–21, 2000. To determine factors associated with the changes in service at small community airports, we surveyed airport directors at nonhub and small hub airports. We also interviewed officials from major and commuter airlines, FAA and DOT, and industry experts. The survey responses helped us to identify the individual airport perspectives on how their service has changed and the impact of those changes, as well as the major factors affecting the service changes. We interviewed airline officials to understand how and why the major airlines were reducing and/or transferring small community airport routes to commuter carriers and how the different types of contractual relationships affect the route changes. In addition, airline officials described why many airlines are moving away from turboprops to regional jets. FAA and DOT officials and industry experts provided further information on the state of the airline industry, particularly the vulnerability of small community airports. To collect information on the operational activities of small hub and nonhub airports, and the opinions of their managers on a variety of issues, we conducted a Web questionnaire survey of 280 U.S. airports from December 10, 2001, through January 29, 2002. Using data from the FAA, Kiehl Hendrickson Group, American Association of Airport Executives (AAAE), and the State of Alaska, we developed a sample of 280 small hub and nonhub airports. We did not survey the airport directors of all nonhub and small hub airports. Because of the special circumstances (e.g., unique remoteness) of smaller Alaska airports, we included another criterion for incorporating them into our database: We only included Alaska small hubs and nonhubs that, in addition to meeting the prior criteria, were Part 139 certified.
This additional criterion resulted in a total of 20 Alaska airports being included in our survey. We developed our survey instrument in consultation with AAAE officials, who reviewed our draft questionnaire and made suggestions about content and format. We also pretested the draft questionnaire at four airports in our study population. These airports were Bellingham International and Spokane International in Washington; Hagerstown, Maryland; and Richmond, Virginia. We chose these airports because they represented—in terms of annual enplanements, location, and airport type—the kinds of airports that would be asked to complete our final questionnaire. We incorporated changes into our survey instrument as a result of these pretests. The final questionnaire was posted on the World Wide Web for respondents to complete. A complete reproduction of the Web survey can be viewed in Adobe Acrobat pdf format at www.gao.gov/special.pubs/d02432sv.pdf. We sent e-mail messages or otherwise contacted airports in our survey database in late November 2001 to notify them of our survey. We then sent each airport representative a unique username and password and invited them to fill out an automated questionnaire posted on the World Wide Web in early December of 2001. About 12 percent of the airport representatives completed a paper version of the questionnaire in lieu of completing the survey on-line. During the survey fieldwork period, we made at least three follow-up contacts with each airport that had not yet responded to ask them to participate. We used all completed responses received by January 29, 2002, in the analysis for this report. We received responses from 207 airports in which the respondent had indicated that they had completed their questionnaire and that GAO could use the data (a 74 percent response rate).
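The response rate reported above follows directly from the counts in the text:

```python
surveyed = 280  # small hub and nonhub airports invited to respond
usable = 207    # completed questionnaires GAO could use

response_rate = round(100 * usable / surveyed)
print(f"{response_rate} percent")  # 74 percent, as reported
```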
Response rates did not vary appreciably across small hub and nonhub airports, and results of the follow-up efforts showed no evidence that our survey results were not representative of the actual study population. Some questions in our survey instrument were not answered by all of the airports completing a usable questionnaire, but this rate of item nonresponse was generally low. In addition to any systematic bias or random error in our survey results that may have been caused by our inability to obtain answers from all of the airports in the population on all of our questions (nonresponse error), estimates from questionnaire surveys may be subject to other sources of error. We took steps to limit these errors. We checked our sample list of airports against other sources to help ensure its completeness, we pretested our questionnaire and had experts review it, and we checked our analysis for programming errors. We did not, however, verify the answers reported by airport directors. Other important issues may be relevant to an analysis of the service changes at small community airports. However, a lack of detailed information on these factors limited the scope of this review. For example, we were not able to obtain information on the differences in airfares at small communities and at competing airports. Airfare data for the quarter including October 2001 would not be available from the Bureau of Transportation Statistics until late February or early March 2002. Additionally, there is a lack of complete and representative fare data for small communities, especially for local passengers who do not connect to large carrier services. This is because public data on airfares are developed from a 10 percent sample of tickets collected from large air carriers, which makes up DOT’s “Passenger Origin-Destination Survey” (O&D Survey). Small certificated air carriers and commuter carriers do not participate in the O&D Survey.
Thus, there are inherent statistical sampling limitations in the O&D Survey data. In addition, airlines’ decisions about profitability of operations in certain markets are proprietary and confidential. We conducted our work from April 2001 through March 2002 in accordance with generally accepted government auditing standards. Small hub airports, the closest point of comparison to nonhubs, tend to serve somewhat larger communities and have significantly more commercial air service than do the 202 nonhub airports in the continental United States included in our analysis. The median population for small hub communities included in our analysis was about 417,000. As table 5 shows, only about 2 percent of small hub communities had service from two or fewer major carriers; the other 98 percent had service from more carriers. Additionally, only about 2 percent of small hub communities had service to three or fewer nonstop destinations; the other 98 percent had nonstop service to more locations. In addition, almost two-thirds of their nonstop destinations were into major airline hubs, and the majority of their flights were on jet aircraft (as opposed to turboprop or piston aircraft). Compared to small communities with nonhub airports, the communities with small hub airports had much greater daily air service. As table 6 shows, on average, small hubs had significantly more carriers, more jet and turboprop flights, and more nonstop destination options. For example, the median small hub airport community was served by six airlines, compared with two airlines for the small communities in our analysis, and the median number of daily departures was 45, compared with 9 for small communities. Table 6 provides additional information regarding small hubs and compares key differences in the scope of air service that airlines scheduled at these two airport categories.
Compared to the experience of small communities, small hub airports saw relatively little change in their airline schedules during the period we analyzed. For example, there was little change in the number of small hubs that had service from three or more carriers (see figs. 8 and 9). In addition, the number of small hubs with service to more than two nonstop destinations did not change, and the number of small hubs dominated by a single carrier declined slightly. According to other data on changes in daily scheduled seats from U.S. airports, airports of all sizes experienced generally comparable decreases in total service. Small hubs experienced a greater relative decrease in service (-15.5 percent) compared to nonhubs (-13.5 percent). Large hubs had the greatest relative decrease in total available seats (-16.3 percent), and medium hubs had a smaller decrease (-12.4 percent). Unique conditions affecting air service in Alaska and Hawaii required us to look at these two states separately from the rest of the United States. Both states have distinctive geographies: they are both located outside the continental United States and both have unique topographies that require air service to be used as a major source of intrastate travel. We examined air service at 63 nonhub airports in Alaska and 2 in Hawaii. All of the Alaska airports were located in communities with less than 100,000 population; the median population was 7,208. The median passenger enplanements in 1999 were 5,176 (about 14 per day). The two Hawaii airports were located in larger communities; the average population was 128,094. Their median enplanements in 1999 were 108,258 (about 297 per day). There was little change in air service at the small community airports in Alaska and Hawaii between October 2000 and October 2001 (revised), as the median level of service represented by the indicators below shows (see table 9). None of these airports had nonstop service to a major airline network’s hub.
(The major U.S. airlines do not operate hubs in either state.) From October 2000 to October 2001 (revised), the share of these airports that were dominated by a single airline increased slightly, from 65 percent to 69 percent. There are two communities that are categorized as small hubs in Alaska—Juneau and Fairbanks—and three in Hawaii—Hilo, Kailua/Kona, and Lihue. The Alaska airports were all located in communities with populations less than 100,000 and had no service to an airline hub. Hawaii’s small hubs were in communities with populations of less than 250,000. Two of those communities had service to two airline hubs (San Francisco International and Los Angeles International). Small hub airports in Alaska and Hawaii have notably more passenger traffic and air service than the states’ nonhubs. The median passenger enplanements in the two Alaska airports in 1999 were 385,470 (about 1,056 per day), and the median passenger enplanements in Hawaii were 1,271,744 (about 3,484 per day). Typically, Alaska and Hawaii small hubs received service from four major or independent carriers, with service to seven nonstop destinations. In addition, small hubs typically had 213 jet departures per week (30 jet departures per day) in October 2000. Generally, the overall amount of service for these small hub airports declined between October 2000 and October 2001. Specifically, airlines scheduled 50 fewer weekly jet departures (eight per day) and added 8 weekly turboprop departures (one per day). See table 10. To examine the factors associated with air service in small communities in October 2000, we statistically analyzed certain economic characteristics of these communities. Our process and the outcomes of our analysis are outlined below. For this study, we used regression analysis to explore which factors, called independent variables, explain differences in the level of service, called the dependent variable, in small communities in October 2000.
A regression model is a statistical tool that enables researchers to investigate relationships between the dependent variable and the independent variables. To examine the factors associated with the level of air service provided to small communities in October 2000, we used an ordinary least squares regression model. We developed several models, looking at the contribution of each independent variable to the predictive ability of the models, and the overall explanatory power of the models as measured by the coefficient of determination, or r-squared. R-squared is a measure of the proportion of the total variation in the dependent variable that can be explained by the independent variables in that particular model. Economic principles indicate that as income, market population, and the price of substitute service increase, demand for a service will increase. Under these conditions, within a competitive marketplace, as passenger demand increases, the supply of air service will increase to meet that demand. We, therefore, expect that communities with greater levels of income and gross regional product and larger populations and employment levels will experience more substantial air service. Likewise, we expect that communities that are farther from an airport with a low-fare carrier will realize better service. We obtained the economic data used in the regression analysis from the Regional Economic Information System database produced by the Bureau of Economic Analysis. The data were collected for October 1999 at the county level. We then created a dataset containing variables for each county, including population, total employment, manufacturing earnings, and per capita income. We merged this dataset with the data on air service and the distance between airports to create a final working dataset for this analysis. Table 11 summarizes the descriptive statistics of the economic variables and other factors for the 202 small communities in our analysis.
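The modeling approach described above can be sketched in a few lines. The data here are synthetic stand-ins for the report’s variables and the coefficients are illustrative only; this is not GAO’s actual dataset or model specification:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 202  # number of small communities in the analysis

# Synthetic stand-ins for the report's independent variables
employment = rng.uniform(5_000, 60_000, n)       # jobs
mfg_earnings = rng.uniform(0, 500_000, n)        # manufacturing earnings
dist_low_fare = rng.uniform(10, 300, n)          # miles to low-fare carrier
income = rng.uniform(15_000, 35_000, n)          # per capita income ($)

# Hypothetical "true" relationship plus noise; the dependent variable
# is weekly departures (jet and turboprop combined)
departures = (0.0002 * employment + 0.00001 * mfg_earnings
              + 0.05 * dist_low_fare + 0.003 * income
              + rng.normal(0, 10, n))

# Ordinary least squares fit with an intercept column
X = np.column_stack([np.ones(n), employment, mfg_earnings,
                     dist_low_fare, income])
beta, *_ = np.linalg.lstsq(X, departures, rcond=None)

# Coefficient of determination (r-squared): the share of variation in
# the dependent variable explained by the independent variables
fitted = X @ beta
ss_res = ((departures - fitted) ** 2).sum()
ss_tot = ((departures - departures.mean()) ** 2).sum()
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 2))
```

With all four explanatory variables contributing signal, the r-squared on this synthetic data is well above zero, illustrating how the measure summarizes a model’s explanatory power.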
We used employment and population to represent the size of a community and per capita income as a measure of income. We expect that a community with a larger manufacturing sector will have a greater demand for business travel. However, data on business travel and regional exports were unavailable for this study. In addition, it is difficult to obtain data on gross regional product (a measure similar to gross domestic product that is applied at the regional level). Therefore, for the purposes of our analysis, we used manufacturing earnings to represent the level of export activity from a region and, hence, as an indicator of the possible demand for business travel. Using the regression to explain variation in air service, we focused primarily on modeling the number of weekly departures (jet and turboprop) from a small community. Multiple univariate and multivariate models of jet and turboprop departures were specified as a function of the independent variables to examine the consistency and robustness of the findings. The results of a final model are discussed below, in which jet and turboprop departures are specified as a function of employment (or population), manufacturing earnings, minimum distance to a low-fare carrier, and per capita income. The results of our regression models indicate that, as expected, employment (or population), manufacturing earnings, minimum distance from a low-fare carrier, and per capita income had a positive effect on the level of air service received by a small community. Below are quantitative statistics from specific models. After controlling for distance to a low-fare carrier, manufacturing earnings, and population, we found that for every additional $5,000 in per capita income, a community received 3.3 and 12.7 more jet and turboprop departures per week, respectively.
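A quick arithmetic check on the per capita income coefficients just reported (3.3 jet and 12.7 turboprop departures per week per $5,000): the combined effect can be computed directly. The function name is hypothetical, and the linear extrapolation it performs is only valid near the range of the data:

```python
def income_effect(income_gap: float) -> float:
    """Predicted extra weekly departures (jet + turboprop) for a community
    with `income_gap` more per capita income, using the reported
    per-$5,000 coefficients (3.3 jet, 12.7 turboprop)."""
    return (3.3 + 12.7) * income_gap / 5_000

print(round(income_effect(5_000)))  # the $5,000 case discussed in the text
```

The combined effect works out to roughly 16 additional total departures per week, matching the comparison of hypothetical communities A and B below.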
In other words, if two small communities, A and B, were identical in every way except that Community A had $5,000 more in per capita income than Community B, then Community A had roughly 16 more total departures per week than Community B. This difference in the number of total departures was attributable to the difference in per capita income. After controlling for distance to a low-fare carrier, manufacturing earnings, and per capita income, we found that a community received 4.3 and 4.8 more jet and turboprop departures per week, respectively, for every additional 25,000 jobs in the community. After controlling for distance to a low-fare carrier, population, and per capita income, we found that a community with $250,000 more in manufacturing earnings received 4.8 more jet departures per week than an otherwise similar community. After controlling for manufacturing earnings, per capita income, and employment, we found that a community received 4.7 more jet departures per week for every additional 50 miles separating the airport from a low-fare carrier. According to our analysis, between October 2000 and October 2001, Express Airlines I and Mesaba altered service at 60 small communities. Overall, more small communities lost service than gained service from these carriers during this period. A total of 49 small communities lost service, 4 of which (Bismarck, North Dakota; Columbus, Georgia; Dothan, Alabama; and Rockford, Illinois) lost all nonstop service from Express Airlines I and Mesaba. On the other hand, 11 communities gained service. Nine gained additional flights or extra capacity (i.e., number of seats available for purchase) through larger aircraft, and two (Charlottesville, Virginia, and Springfield, Illinois) gained start-up service from the two airlines. Express Airlines I made service changes at 27 small communities between 2000 and 2001.
Of these 27 communities, Express Airlines I reduced service at 13, increased service at 13, and took mixed actions at 1 other (reducing the number of daily departures but adding more available seating capacity by using larger aircraft). Mesaba altered its weekly service at 44 small communities between 2000 and 2001. Mesaba ended all service to 2 communities. At 37 other communities, Mesaba’s service reductions averaged two departures per day per community. On the other hand, Mesaba increased service at 3 small communities and launched new service at another. At Sioux City, Iowa, Mesaba decreased average daily departures but increased total seating capacity by substituting larger aircraft. Eleven communities were served by both Mesaba and Express Airlines I. Service reductions that Mesaba made at 8 of the 11 were offset by service additions from Express Airlines I, often with new regional jet service. Table 12 summarizes the small community service changes made by Express Airlines I and Mesaba between October 2000 and October 2001. Of the 202 small communities in our study, Great Lakes Aviation served 16, Air Wisconsin served 15, and both airlines served 9. Both airlines served United’s Chicago (O’Hare) and Denver hubs. Between October 2000 and October 2001, Great Lakes and Air Wisconsin altered air service in 39 communities. Of the 39 communities with service changes, 4 lost all of their air service (all of which was provided by Great Lakes). A total of 30 communities saw reductions in their service (i.e., capacity, either through a reduction in departures or by using smaller aircraft) by Great Lakes or Air Wisconsin. Five of the communities in our analysis gained either new service or capacity (i.e., number of seats available for purchase). Great Lakes altered its weekly capacity at 16 small communities between 2000 and 2001. Of these communities, 4 (Dubuque, Iowa; Lafayette, Indiana; Rhinelander, Wisconsin; and Salina, Kansas) lost all of their service.
Furthermore, Great Lakes reduced service at 11 communities. Only one community—Telluride, Colorado—gained capacity from Great Lakes. Air Wisconsin reduced service at 11 communities and added either new service or additional capacity in 3 communities. Great Lakes and Air Wisconsin both served 9 communities in our analysis. Between October 2000 and October 2001, in 3 of those communities (Traverse City, Michigan; Springfield, Illinois; and Eagle, Colorado), Air Wisconsin replaced Great Lakes service, and in one community (Cody, Wyoming) Great Lakes replaced Air Wisconsin service. Both Great Lakes and Air Wisconsin provided service to Grand Junction and Durango, Colorado; Great Lakes discontinued its service there by October 2001. Casper, Wyoming, and Hayden and Gunnison, Colorado, were all receiving service from both Great Lakes and Air Wisconsin in 2000; by October 2001, Air Wisconsin had discontinued all of its service there. Table 13 summarizes the changes in service at small communities served by Great Lakes and Air Wisconsin between October 2000 and October 2001. In addition to those individuals named above, Triana Bash, Curtis Groves, Dawn Hoff, David Hooper, Sara Ann Moessbauer, John Mingus, Ryan Petitte, Carl Ramirez, Sharon Silas, Stan Stenersen, and Pamela Vines made key contributions to this report. Financial Management: Assessment of the Airline Industry’s Estimated Losses Arising From the Events of September 11. GAO-02-133R. Washington, D.C.: October 5, 2001. Commercial Aviation: A Framework for Considering Federal Financial Assistance. GAO-01-1163T. Washington, D.C.: September 20, 2001. Aviation Competition: Restricting Airline Ticketing Rules Unlikely to Help Consumers. GAO-01-831. Washington, D.C.: July 31, 2001. Aviation Competition: Regional Jet Service Yet to Reach Many Small Communities. GAO-01-344. Washington, D.C.: February 14, 2001. Essential Air Service: Changes in Subsidy Levels, Air Carrier Costs, and Passenger Traffic. GAO/RCED-00-34.
Washington, D.C.: April 14, 2000. Airline Deregulation: Changes in Airfares, Service Quality, and Barriers to Entry. GAO/RCED-99-92. Washington, D.C.: March 4, 1999. Aviation Competition: Effects on Consumers From Domestic Airline Alliances Vary. GAO/RCED-99-37. Washington, D.C.: January 15, 1999.
Each year, several million passengers travel on foreign airlines that have established code-share arrangements with U.S. air carriers. Code-sharing is a marketing arrangement in which an airline places its designator code on a flight operated by another airline and sells and issues tickets for that flight. On foreign code-share routes, U.S. airlines and their foreign partners each place their respective designator code on flights operated by the other airline. Passengers can purchase one ticket from a U.S. airline that can include flight segments covered by one or more foreign partner airlines. Air carriers throughout the world form code-share alliances to strengthen or expand their market presence or ability to compete. Through code- sharing, U.S. airlines can offer seamless service to additional international destinations without incurring the expense of establishing their own operations to those locations. Moreover, airline officials said that code- share arrangements with foreign airlines have become important sources of revenue. According to FAA, international markets are viewed as more attractive growth markets by mainline carriers because of more limited competition from low-cost carriers and greater profitability. In recent years, U.S. airlines have established an increasing number of code-share arrangements with foreign carriers to expand their service markets. As of May 2005, eight U.S. airlines had established 108 arrangements to place their designator codes on 85 different foreign carriers, up from six U.S. airlines that had established 39 arrangements to place their designator codes on 38 different foreign carriers in fiscal year 2000. As shown in figure 1, the majority of U.S. airlines’ code-share arrangements are with European airlines, representing over half, followed by airlines from Asia and the Pacific, accounting for nearly a quarter of the arrangements. Appendix II lists the U.S. carriers and their foreign code- share partners. 
In 1998, SwissAir Flight 111, which was a code-share flight with U.S.-based Delta Air Lines, crashed off the shores of Nova Scotia, killing 229 passengers, including 53 Americans. Following that accident, the DOT Inspector General reviewed aviation safety under international code-share agreements and issued a report in 1999 recommending, among other things, that DOT develop and implement procedures requiring U.S. airlines to conduct safety audits of foreign carriers as a condition of authorization of code-share passenger services. Also in 1999, legislation was introduced in the House of Representatives that would have statutorily required U.S. airlines to audit the safety of their foreign code-share partners. Although that legislation was not enacted, in 2000, DOT’s Office of the Secretary and FAA established the Code-Share Safety Program, which included the development of guidelines for U.S. carriers to follow in auditing the safety of their foreign code-share partners as a condition of DOT’s authorization of code-share passenger services. DOD’s safety audit program, called the Commercial Air Transportation Quality and Safety Review Program, expanded another program that DOD established in 1986 to check the safety of charter aircraft transporting its personnel, after an Arrow Air charter airplane transporting U.S. military personnel crashed in 1985, killing 256 passengers and crew. In 1986, Congress passed Public Law 99-661, which created a Commercial Airlift Review Board and prohibits DOD from contracting with an air carrier unless it meets certain safety standards and submits to a technical safety evaluation. A 1999 memorandum of understanding between DOD and the Air Transport Association, a U.S. airline industry association, allows DOD to review the safety audits that U.S. airlines have conducted of their foreign airline partners. 
DOD is a major customer of airlines that have established code-share arrangements through its participation in the General Services Administration’s (GSA) city-pairs program, under which the government negotiates service contracts for all federal government employees, including military personnel, to save the government money on air travel. The program requires federal employees and military personnel to fly with carriers under such contracts when they travel on government business. DOD is required to review the safety of all airlines that provide scheduled service to its personnel under the GSA city-pairs program, which include U.S. airlines’ foreign code-share partners. DOD’s program also has the effect of having the airlines comply with DOD requirements if they want to maintain the GSA contracts. The safety of foreign carriers is also a concern because aviation accident rates vary considerably from one region of the world to another. According to data compiled by IATA, an international airline association, during 2004, the North American region had the lowest aviation accident rate (0.29 hull losses per million flight segments), while the Middle East had the highest (5.32 hull losses per million flight segments). Africa had the second highest rate, followed by South America, the Asia-Pacific region, and Europe. These accident rates are shown in figure 2. DOT’s Office of International Aviation within the Office of the Secretary of Transportation authorizes U.S. airlines’ code-share arrangements with foreign airlines after considering, among other things, safety and security information from FAA and TSA. FAA provides DOT’s Office of International Aviation with a memorandum recording its “objection” or “no objection” to the foreign code-share partners of U.S. airlines. This memorandum is based on FAA’s assessments of foreign civil aviation authorities and reviews of safety audits conducted by U.S. airlines of foreign carriers. 
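The regional accident rates cited above (e.g., 0.29 for North America, 5.32 for the Middle East) are expressed as hull losses per million flight segments. A minimal sketch of that metric follows; the counts are hypothetical, since the underlying IATA totals are not given in this report:

```python
def hull_loss_rate(hull_losses: int, flight_segments: int) -> float:
    """Hull losses per million flight segments, the unit IATA uses for
    regional accident-rate comparisons."""
    return hull_losses / flight_segments * 1_000_000

# Hypothetical counts chosen only to illustrate the unit of measure
print(round(hull_loss_rate(3, 10_000_000), 2))
```

Normalizing by flight segments rather than raw accident counts is what makes regions with very different traffic volumes comparable.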
TSA assesses the security of foreign airlines that provide service to the United States and its territories and certain foreign airports. DOT also considers the competitive and antitrust implications of code-share arrangements. For its program, DOD reviews many of the same safety audit reports on foreign airlines that FAA reviews for the Code-Share Safety Program. To authorize a code-share arrangement between a U.S. and a foreign airline, DOT must find that the arrangement is in the public interest. Under the DOT guidelines, this public interest finding includes a determination of the foreign carrier’s level of safety and the economic impact of the arrangement. Before authorizing a code-share arrangement, DOT’s Office of International Aviation obtains (1) a memorandum of “no objection” from FAA, based on its review of the safety audits and other safety information available to FAA; (2) a clearance from DOT’s Office of Policy on aspects of security involving the foreign carrier, including information from TSA; and (3) a clearance from DOT’s Office of Aviation Analysis and Office of the General Counsel concerning the code-share arrangement’s competitive impact on the airline industry. The Office of International Aviation also obtains advice from the Department of Justice on potential antitrust issues. According to DOT officials, on 270 occasions, from February 2000 through the end of fiscal year 2004, DOT authorized or reauthorized U.S. airlines to establish or maintain code-share arrangements with foreign carriers and did not suspend any arrangements during that time. However, FAA officials also said that U.S. airlines have occasionally decided not to pursue code-share arrangements with foreign airlines because they expected FAA would object. Code-share arrangements may be periodically reauthorized based on the terms of the initial authorization. Figure 3 shows the DOT code-share authorization process. 
DOT’s Office of International Aviation solicits the views of FAA on the safety aspect of its code-share authorization decision because of FAA’s technical expertise in that area. FAA reviews reports of the safety audits that U.S. carriers have conducted on the foreign carriers and other safety information available to FAA, including its assessments of the capabilities of the relevant foreign civil aviation authorities. FAA provided DOT’s Office of International Aviation with memorandums of “no objection” on all foreign airlines being considered for code-share authorization during fiscal years 2000 through 2004. According to FAA officials, if FAA has safety concerns, it puts a hold on its review of the proposed code-share arrangement, allowing time for the safety issues to be resolved; and on four occasions, from February 2000 through September 2004, U.S. airlines suspended their code-share arrangements with foreign carriers because FAA was questioning the capabilities of the civil aviation authorities under which the foreign carriers were operating. Figure 4 shows FAA’s process for providing information to DOT’s Office of International Aviation on U.S. airlines’ applications to establish code-share arrangements with foreign carriers. Under the Code-Share Safety Program guidelines, DOT authorizes a U.S. airline’s code-share arrangement with a foreign carrier only if the foreign airline is from a country that is compliant with applicable international aviation safety standards under FAA’s International Aviation Safety Assessment (IASA) program. Under IASA, FAA reviews the capabilities of foreign civil aviation authorities by checking their compliance with standards established by the International Civil Aviation Organization (ICAO), a United Nations aviation organization. Under IASA, FAA assigns countries’ civil aviation authorities either a category 1 rating—meets ICAO standards—or a category 2 rating—does not meet ICAO standards. 
During the IASA process, FAA personnel, typically from various international field offices, conduct on-site assessments of civil aviation authorities for compliance with ICAO standards in eight areas: (1) primary aviation legislation, (2) aviation regulations, (3) organization of the civil aviation authority, (4) adequacy of the technical personnel, (5) technical guidance, (6) licensing and certification, (7) records of continuing inspection and surveillance, and (8) resolution of safety issues. Each country with carriers serving, or wishing to serve, the United States in their own right or as part of a code-share arrangement with a U.S. airline must first have an assessment under the IASA program. Although FAA’s plan is to reassess the category for each foreign civil aviation authority every 2 years, FAA officials said that this activity occurred less frequently because of a larger-than-anticipated number of reassessments and constraints on the agency’s resources. FAA data indicate that 67 of the 100 foreign civil aviation authorities in the IASA program, or about two-thirds, have not been assessed within the last 4 years. According to FAA, some countries were not assessed within the last 4 years because available data indicated that their rating categorization remained valid. FAA data also show that from January 1, 2000, through May 1, 2005, FAA assessed or reassessed—because of safety oversight concerns—the capabilities of 33 foreign civil aviation authorities, 6 of which were assessed more than once. Of the 42 countries’ civil aviation authorities under which the foreign code-share partners of U.S. airlines are operating, 16 have required an IASA assessment or reassessment since 2000 and 26 have not. IASA results, along with the safety audits that U.S. airlines conduct of their foreign code-share partners, are FAA’s principal measures of the level of safety of the foreign carriers.
According to the guidelines, the level of oversight and regulation that an airline receives from its regulatory authority is an important factor in assessing its safety. For this reason, DOT authorizes U.S. airlines’ code-share arrangements only with foreign airlines that are from IASA category 1 countries. As of May 2005, FAA had assigned IASA category 1 ratings to 71 countries’ civil aviation authorities and IASA category 2 ratings to 28; 94 other countries had not yet been categorized, generally because no carriers from those countries had applied to provide direct service to the United States. DOT’s Office of International Aviation will not authorize a code-share application, and FAA will not review the safety audit report if flights that are intended to carry a U.S. carrier’s designator code would be operated by a foreign carrier from a country with an IASA category 2 rating. If a U.S. airline is seeking to establish a code-share arrangement with a foreign carrier that is from a country that does not have an IASA rating, FAA normally conducts the assessment before DOT’s Office of International Aviation considers the application. When FAA lowers a country’s IASA rating from category 1 to category 2, DOT’s Office of International Aviation contacts any U.S. airline that has a code-share partnership with an airline from that country to advise the U.S. airline of the lowered IASA rating so that the U.S. carrier can promptly remove its code from any passenger flights operated by that airline, according to agency officials. While DOT indicated that it could, at its option, order the removal of U.S. airlines’ designator codes under these circumstances, in practice, DOT has not needed to pursue that option because, when the airlines have learned about an IASA category change affecting their service, they have removed their operating codes from the foreign carrier. On four occasions since 2000, U.S. 
airlines have suspended their code-share arrangements with foreign airlines because FAA was questioning the capabilities of the civil aviation authorities under which the foreign airlines were operating. The program guidelines allow DOT to consider, on a case-by-case basis, continuing to authorize a U.S. airline’s code-share arrangement with a foreign carrier that is from a country with an IASA rating that has been lowered from category 1 to category 2. According to FAA, this case-by-case language was included to enable DOT’s Office of International Aviation to accord U.S. airlines a limited degree of flexibility needed to effectuate an orderly shutdown of their code-share services. However, DOT officials told us that they will not authorize the continuation of a code-share arrangement beyond the needs of such an orderly shutdown. FAA will not review a U.S. airline’s safety audit report on a foreign carrier until FAA has reviewed and accepted the airline’s audit methodology. According to the program guidelines, the U.S. airlines’ safety audit methodologies should incorporate ICAO standards on personnel licensing, aircraft operations, aircraft airworthiness, and security. The guidelines also describe how the U.S. airlines should conduct their safety audits, including what qualifications the auditors should possess, how the system for reporting and correcting findings should be devised, what audit results are satisfactory, how a safety monitoring system should be established, and how frequently audits should be conducted. At the same time, FAA officials said they provide the airlines with some flexibility in designing their audit programs, as long as the programs address all of the relevant ICAO standards. FAA reviewed and accepted an audit program for each of the eight U.S. airlines to participate in the Code-Share Safety Program. In designing their audit methodologies, some U.S. 
airlines include other standards and best practices, such as ones developed by DOD, in addition to the ICAO standards and recommended practices in the DOT program guidelines. Moreover, to audit the safety of their foreign code-share partners, six U.S. airlines have begun using standards from a new international safety audit program called the IATA Operational Safety Audit (IOSA), which incorporates the ICAO standards, plus many additional industry best practices. IOSA was developed by IATA to improve global airline safety and promote audit efficiency by reducing redundant audits. In 2004, FAA accepted the IOSA program as a methodology that would meet the Code-Share Safety Program guidelines. Under the Code-Share Safety Program guidelines, after the U.S. airlines have completed the audits and the foreign airlines have taken all corrective actions, the U.S. airlines’ safety directors (or similar officials) should provide written statements to FAA, known as compliance statements, affirming that the audits were conducted in accordance with the guidelines and that the foreign carriers meet the applicable ICAO standards. According to an FAA official, U.S. airlines filed compliance statements for all of the audit reports that FAA reviewed on foreign carriers. The guidelines also indicate that to maintain their continued code-share authorizations, U.S. airlines should audit the safety of their foreign code-share partners and submit compliance statements to FAA every 2 years. We found that, for 12 out of 256 audit reports that FAA reviewed from February 2000 through the end of fiscal year 2004, FAA granted the U.S. airlines extensions of time to submit compliance statements because delays had resulted from the outbreak of Severe Acute Respiratory Syndrome (SARS), the U.S. airline planned to cancel the code-share arrangement, or the foreign carrier needed more time to implement corrective actions.
FAA generally granted the extensions for between 1 and 3 months, during which time the code-share arrangements continued. Since 2000, DOT’s Office of Intelligence and Security and Office of Policy have provided security clearances to DOT’s Office of International Aviation for all U.S. airlines’ proposed code-share arrangements with foreign airlines. DOT’s Office of Policy receives security information on certain foreign carriers and foreign airports from TSA, which assesses the security of foreign airlines that provide direct service to the United States and its territories, as well as to certain foreign airports. TSA provided security clearances for all proposed code-share arrangements, from fiscal years 2000 through 2004, for which it had information on the foreign carriers. Because it lacks the authority, TSA does not assess the security of other foreign carriers that do not provide direct service to the United States and its territories. Twenty-nine, or about one-third, of the 85 foreign code-share partners of U.S. airlines do not provide service to the United States and its territories and therefore have not been assessed for security by TSA. DOT has also authorized U.S. airlines’ code-share arrangements with foreign airlines that serve many foreign airports that TSA has not assessed for security. As a result, passengers traveling on foreign code-share partners of U.S. airlines may be traveling to certain foreign airports that could have security risks. TSA has the authority to assess the security of a foreign airport (1) served by U.S. 
airlines, (2) from which a foreign carrier serves the United States and its territories, or (3) that “poses a high risk of introducing danger to international air travel.” Also, TSA can assess “other foreign airports the Secretary of Homeland Security considers appropriate.” TSA has assessed the security of the foreign airports from which domestic and foreign airlines provide direct service to the United States and its territories. However, in addition to the foreign airports that provide direct service to the United States and its territories, the foreign code-share partners of U.S. airlines serve other foreign airports. TSA officials indicated they have begun to assess the security of other foreign airports. DOT has not always had comprehensive data on which foreign airports are being served by the foreign code-share partners, so we were unable to determine how many foreign airports have not undergone TSA security assessments. For one U.S. airline for which we had complete foreign code-share route information, we determined that the foreign partners served 128 foreign airports that did not provide direct service to the United States and its territories, and some of these 128 had yet to undergo TSA security assessments. In assessing the security of foreign airports, TSA rates them in categories and assesses airports in those categories as appropriate. DOT’s Office of International Aviation, which receives TSA’s security ratings through DOT’s Office of Policy, authorizes code-share arrangements for U.S. airlines with foreign carriers that serve foreign airports. According to DOT security officials, it is not a problem to authorize code-share arrangements with foreign airlines regardless of category because all airports must meet ICAO security standards and are assessed appropriately. Moreover, officials from TSA and DOT noted that both U.S. and foreign airlines can be required to implement additional security measures at those airports. 
For example, the TSA officials described an instance in which a bombing in a Middle Eastern country resulted in the implementation of additional security measures at an airport in that country. TSA officials said that because that airport met ICAO security standards, TSA had to rely on increased security measures voluntarily implemented by the carriers to help mitigate the threat in that area. While not involved in DOT’s code-share authorization process, DOD reviews the safety of certain foreign airlines, thereby providing an additional layer of federal oversight. The DOD Commercial Air Transportation Quality and Safety Review Program is focused on ensuring that the airlines DOD contracts with—to transport DOD personnel—meet applicable safety standards. DOD requires U.S. airlines to audit the safety of their foreign code-share partners every 2 years, on the basis of ICAO standards, and monitor the safety of their foreign partners between safety audits. In addition, DOD considers FAA’s IASA ratings of foreign civil aviation authorities in determining whether to allow foreign carriers to fly on GSA city-pair routes. DOD requires that foreign airlines be assessed against standards that DOD developed, called Quality and Safety Requirements, which focus on system safety processes. According to a DOD official, these DOD standards include safety processes that go beyond the ICAO requirements that form the basis of the DOT program. A DOD official said, for example, that DOD requires airlines to have a safety audit program that analyzes and assesses trends of safety information, including feedback from crew members, for the purpose of enhancing safety, which is not an ICAO standard. Although DOT can suspend code-share authorizations for safety reasons, DOD can cancel, at any time, contracts with airlines that transport DOD personnel if it determines that they are not sufficiently safe.
Between audits, DOD takes certain steps to monitor the safety of foreign carriers that FAA does not take, such as conducting semi-annual evaluations that include requiring foreign carriers that DOD contracts with to complete questionnaires about their safety. DOD does not consider TSA’s security assessments of foreign airports in its review. DOD officials said that they were unaware of TSA’s foreign airport assessments and would like TSA to provide the information for DOD to consider as part of its reviews. We found that DOD and FAA review many of the same safety audit reports on foreign airlines. During fiscal years 2001 through 2004, DOD and FAA reviewed 203 of the same reports of safety audits that U.S. airlines had conducted of their foreign code-share partners. In reviewing these same reports, DOD and FAA reached the same conclusions about the safety of the foreign carriers involved. Because DOD and FAA are reviewing many of the same audit reports, the DOT and DOD safety programs are duplicating some efforts. In its 1999 report, the DOT Inspector General recommended that, in establishing a safety program on foreign code-share partners of U.S. airlines, FAA and DOT’s Office of the Secretary work closely with DOD to maximize the use of limited resources, avoid duplication, and establish protocols for exchanging information about the carriers’ safety assessments. A DOD official said that he communicates frequently with FAA Code-Share Safety Program officials, and that DOD has a full-time liaison in FAA’s Flight Standards Service, who meets weekly with FAA officials. However, FAA officials said that although DOD requests IASA reports on certain countries, FAA does not routinely communicate with DOD on its safety audit reviews of foreign carriers, and no set criteria spell out the circumstances under which FAA and DOD should communicate information on the safety of U.S. airlines’ foreign code-share partners. 
When we discussed the possibility of reducing duplicative safety reviews with FAA and DOD officials, an FAA official said he did not consider their reviews to be duplicative because FAA and DOD have different objectives. The FAA official said that FAA is reviewing the reports from the perspective of a regulator, focusing on the carriers’ compliance with ICAO standards. Furthermore, the FAA official questioned whether FAA or DOD could assume each other’s responsibilities and report to different departments. A DOD official also said the potential for duplication should be considered from the perspective of DOD’s and FAA’s different objectives in conducting their reviews. The DOD official said that DOD’s objective is to ensure that its requirements for transporting DOD personnel are being met. Another DOD official said that FAA and DOD are not duplicating their efforts because neither agency has the expertise to conduct its reviews from the other agency’s perspective. The Code-Share Safety Program incorporates selected government auditing standards involving independence, professional judgment, and competence. According to FAA officials, FAA and DOT’s Office of the Secretary worked with the airline industry to recommend that the Code-Share Safety Program guidelines incorporate these standards. Government auditing standards provide an overall framework for ensuring that auditors be independent and exercise judgment, competence, and quality control and assurance in planning, conducting, and reporting on their work. However, FAA’s management of the program did not incorporate certain internal controls, which the Office of Management and Budget requires federal managers to use in assessing the effectiveness and efficiency of operations. These controls are related to establishing reviewers’ qualifications, documenting the closure of safety audit findings, verifying corrective actions taken in response to the findings, and documenting reviews.
The Code-Share Safety Program guidelines recommend that the airlines incorporate certain government auditing standards in their safety audit reviews. FAA has reviewed the methodologies that the U.S. airlines follow in auditing the safety of their foreign code-share partners, which incorporate these auditing standards. Ensuring independence is critical, for example, because the U.S. airlines generally audit the safety of their foreign code-share partners themselves. Although we did not assess the airlines’ compliance with the independence standard, U.S. airline officials told us that they ensure independence by separating their safety and marketing departments organizationally to prevent any possible influence from the marketing staff on the safety audit results. In addition, safety officials at the U.S. airlines participating in the Code-Share Safety Program indicated that other airline departments do not have any input into their safety audit results. Moreover, some airline safety officials said they were not aware of the specific financial arrangements involved in their airlines’ code-share partnerships. The program guidelines allow the U.S. airlines to employ personnel or hire outside experts as consultants (contractors) to conduct the safety audits. FAA officials said they are not concerned about allowing the U.S. airlines to use their own employees to conduct the safety audits because of the importance to the airlines of conducting sound safety audits to limit the liability associated with establishing code-share arrangements with foreign airlines. Table 1 lists the program guidelines that incorporate the auditing standards. We found that FAA’s reviews of the safety audit reports lacked certain management controls—including establishing reviewers’ qualifications, verifying corrective actions, and documenting the reviews—but did employ some management controls for monitoring and measuring performance. 
Management controls are the continuous processes and procedures that federal agencies are required to use to provide reasonable assurance that their goals, objectives, and missions are being met. These controls should be an integral part of an agency’s operations and include a continuous commitment to identifying and analyzing risks associated with achieving the agency’s objectives, establishing program goals and evaluating outcomes, and creating and maintaining related records. Effective management controls require that personnel possess and maintain a level of competence that allows them to accomplish their assigned duties. In addition, management must identify the knowledge and skills needed for various jobs, provide needed training, and obtain a workforce that has the skills that match those necessary to achieve organizational goals. However, we found that FAA has not established competence criteria and qualifications for the personnel who review the airlines’ safety audit reports. As a result, the FAA staff who are reviewing the audit reports have different backgrounds and training, which may lead to differing interpretations of the standards. The FAA headquarters official who has reviewed a large number of the safety audit reports has aviation experience as a military pilot and is trained as an ISO 9000 auditor but is not trained as an FAA inspector and was hired in an administrative capacity. Two other FAA headquarters staff who review the audit reports have been trained as aviation safety inspectors. Furthermore, five FAA field inspectors who are conducting many of the reviews have not had training in IOSA, which six U.S. airlines in the Code-Share Safety Program are now using as standards to audit the safety of their foreign code-share partners. This lack of training could impede FAA’s review of the safety audits based on those standards.
Moreover, the Code-Share Safety Program manager was transferred to a new position in February 2005, leaving the position vacant since that time. As of June 2005, FAA had not authorized this position to be filled and had denied a request for another full-time staff position dedicated to the program. Since the program manager’s departure, other staff in FAA’s International Programs and Policy Office, which administers the Code-Share Safety Program, have reviewed the safety audit reports in addition to performing their regular duties. FAA program officials said that since the program manager was transferred to another position, U.S. airlines must wait 3 to 4 weeks for FAA to review their safety audits of foreign carriers, compared with waiting 1 day to 2 weeks before his transfer, and that U.S. airlines now must bring all of their safety audit reports to FAA in Washington, D.C., for review—a change that could hinder FAA’s review of documentation, such as safety monitoring systems, that may be located at the airlines’ facilities. An FAA management official said that because FAA’s Flight Standards Service, of which the Code-Share Safety Program is a part, imposed a hiring freeze in January 2005 for budgetary reasons, only critical positions are being replaced. The official said that because the vacant position for the Code-Share Safety Program was not considered to be critical, it was not filled. Effective management controls also require the establishment of policies and procedures to verify that corrective actions have been taken in response to identified problems. According to FAA and airline officials, FAA staff review each audit report for about 2 to 4 hours, identifying any areas that need further clarification or resolution. Although FAA staff review the reports of all audits that U.S. airlines have conducted of their foreign code-share partners, normally they only spot-check whether findings that were identified during the audit were resolved.
According to an FAA safety official, FAA relies on the U.S. airlines’ compliance statements, signed by the airlines’ safety directors, which affirm that the audits were conducted in accordance with the guidelines and that the foreign carriers met the applicable ICAO standards, as proof that all findings have been resolved. However, FAA’s reliance on the compliance statements may not provide an effective management control to ensure that corrective actions have been taken in response to audit findings. For example, we found that FAA provided a memorandum of no objection to DOT’s Office of International Aviation about a foreign code-share partner that, according to an official from its U.S. partner, had not implemented all of the corrective actions needed to resolve the findings. The safety audit identified dozens of findings, many of which were also found in a second audit 2 years later and, according to the airline, subsequently corrected. Furthermore, because FAA has not provided its reviewers or the airlines with a standard definition of “safety-critical” findings that must be corrected before the audit can be closed, it is unknown whether these open findings were safety critical. Moreover, the reasonableness of leaving open dozens of safety audit findings is questionable, as is FAA’s reliance on the airlines’ compliance statements as proof that all corrective actions have been made. Although the U.S. carrier temporarily suspended the code-share arrangement with this foreign carrier, FAA officials said the suspension occurred because of FAA’s concern about the safety oversight of that foreign airline’s civil aviation authority, not because of the number of audit findings or their lack of closure. FAA uses compliance statements, which are based on the safety audit results, as reasonable assurance that the foreign airlines meet ICAO safety standards.
However, FAA’s reliance on compliance statements may not provide such assurance because FAA has accepted compliance statements as proof that the carriers met ICAO safety standards, even in situations when it questioned the audit results. For example, FAA provided memorandums of no objection to DOT’s Office of International Aviation that were based on safety audits conducted by one airline contractor over a 4-year period, many of which did not identify any findings, even though an FAA official told us that he had discussed with the airline FAA’s concern about the number of audits that did not identify any findings. According to the Code-Share Safety Program guidelines, U.S. airlines should not submit compliance statements to FAA until all corrective actions have been completed; the statements should not be predicated on future actions that are planned to be completed. However, FAA officials said that they allow “nonsafety-critical” findings identified during the audit, such as deficiencies in personnel training and omissions in manuals, to be addressed later. Because FAA has not provided the airlines with a standard definition of “safety-critical” findings that must be corrected before the audit can be closed, airlines could interpret the term inconsistently in documenting and resolving corrective actions. An FAA official indicated that developing a definition of safety critical would be difficult and time consuming. An aviation safety expert we consulted said that a definition of safety critical would require considerable study and criteria development because situations can be critical to safety in many ways. He added that a well-trained and experienced aviation safety inspector could identify a safety-critical situation. However, this same expert suggested that, as a quality assurance measure, FAA select several audits each year and check the underlying documentation in depth. 
Similarly, the DOT Inspector General recommended in 1999 that FAA conduct comprehensive audits of a sample of safety audits to confirm that carriers have applied agreed-upon standards and procedures in conducting the audits. However, even if FAA were to conduct such comprehensive audits, without a definition of safety-critical findings, the agency would still lack assurance that safety-critical findings were identified and resolved. FAA indicated that from August 2003 through July 2004, 18 of the 50 audit reports on foreign airlines it reviewed were returned to U.S. carriers for further action and 4 were placed on hold pending the outcome of IASA reviews; the other 31 foreign carriers received memorandums of no objection. Furthermore, FAA officials said that, according to anecdotal information from some U.S. carriers, too many safety concerns were identified during some safety audits for the carriers to proceed with applications for code-share authorization. However, FAA officials said they do not know how many times the safety audits have prevented airlines that pose safety concerns from becoming code-share partners with U.S. airlines. In addition, effective management controls require that documentation be created and maintained to provide evidence of executing approvals, authorizations, verifications, and performance reviews. FAA devised a checklist for agency staff to complete while reviewing safety audit reports to check for compliance with the program guidelines, record information about findings, or report irregularities. FAA officials said that the checklist was developed to establish and maintain consistency in reviewing the audit reports. However, we found that the checklist did not consistently document what actions FAA took when reviewing the airlines’ audit reports, which findings it reviewed, and which corrective actions it verified were implemented.
For example, in some cases, the checklist provided information about the closure of findings, but in other cases, no information was recorded about closure. FAA officials said that portions of the checklist may be left blank until the FAA reviewer has completed discussions with the airline and answered all of the concerns to his or her satisfaction, at which time the FAA reviewer will note that no irregularities were found. Officials said that in such cases, the checklist would not capture this process. However, not completing this information could hinder future reviews of the same airline by impeding comparisons between audits. Furthermore, because FAA often lacked documentation that it had verified the closure of findings, we were unable to determine how frequently FAA may have provided memorandums of no objection on foreign carriers that had not implemented all corrective actions in response to the findings, as occurred in the example discussed earlier. Effective management controls also include monitoring to assess the quality of performance over time. Management controls generally should be designed to ensure ongoing monitoring during normal operations and include regular management and supervisory activities, comparisons, reconciliations, and other actions people take in performing their duties. FAA officials said that the manager of the International Programs and Policy Division, which is responsible for administering the Code-Share Safety Program and is part of FAA’s Flight Standards Service, is briefed by the Code-Share Safety Program staff on the results of their safety audit reviews before a recommendation is made to the Director of Flight Standards to sign the memorandums of no objection that are sent to DOT’s Office of International Aviation. This procedure allows the International Programs and Policy Division manager to monitor the results and the decision-making processes involved. 
In addition to reviewing the audit reports, FAA monitors the safety of foreign carriers through other sources of information. FAA officials said they also review any accident and incident information from aviation safety databases, company financial histories, ICAO reports on the countries’ civil aviation authorities, media reports, ramp inspection results, and information from FAA international field offices about their inspections of foreign aircraft when these aircraft enter the United States. According to the Code-Share Safety Program guidelines, the U.S. airlines participating in the program should have a process to monitor the safety of their foreign code-share partners on an ongoing basis, and FAA should review this monitoring process. FAA officials said they have reviewed the monitoring systems at seven of the eight U.S. airlines participating in the program. However, FAA had not documented its reviews of the monitoring systems, so we were unable to verify that activity. Furthermore, safety officials at three of the eight U.S. airlines said FAA had not reviewed their monitoring systems. Without an FAA review, deficiencies in these monitoring systems might not be identified. FAA does not maintain information on the types and frequencies of audit findings to provide a means of comparing the findings from initial and recurrent audits of the same airline, or to perform trend analyses that could help identify problems across airlines or fleets. Trend analyses would be useful for monitoring, on an ongoing basis, the effectiveness of FAA’s internal quality control system. FAA officials said the checklists are not used for tracking or trend analysis and that FAA does not formally examine either the safety problems occurring most often or the geographic areas where problems are occurring most frequently.
However, the officials said that the FAA program manager does want to have a general idea of the types of problems being found, and the checklist provides this information informally. According to one FAA official, the purpose of the checklist is to ensure that the DOT guidelines are met, rather than to create a database of findings. In our view, not maintaining such documentation could impede analyses of trends and comparisons of findings, limit opportunities for assessing risks, and prevent a determination of whether FAA reviewed those findings. Establishing performance measures is another component of effective management controls. The Government Performance and Results Act of 1993 requires agencies to, among other things, set strategic and annual performance goals, and measure and report on performance toward these goals. Management controls play a significant role in helping managers achieve those goals. FAA has established certain performance goals for the Code-Share Safety Program, including reviewing at least 40 safety audit reports during fiscal year 2004. FAA exceeded this goal by completing 57 reviews. In addition, FAA set a performance goal of meeting with major U.S. air carriers to request feedback on the Code-Share Safety Program. FAA met this goal in 2004. The eight U.S. airlines participating in the Code-Share Safety Program have conducted the safety audits of their foreign code-share partners and have monitored the safety of their code-share partners between audits, as specified under the guidelines. Through those audits, the U.S. airlines have identified numerous safety issues associated with their foreign partners’ operations. After completing the audits, the U.S. airlines have submitted written statements to FAA affirming their foreign code-share partners’ compliance with ICAO standards, as specified under the guidelines. However, the U.S. airlines have not always documented the implementation of actions taken in response to the findings.
Many airlines are now moving to adopt the international safety audit program, IOSA, which contains procedures that would help to ensure that corrective actions implemented in response to audit findings are documented. Most U.S. airline officials said they believe the Code-Share Safety Program provides reasonable assurance of safety or is effective, but some officials also suggested various changes in its administration. The U.S. airlines participating in the Code-Share Safety Program have been assessing the safety of their foreign code-share partners at least every 2 years, as the guidelines specify. We estimate, based on the results of our sample of 149 randomly selected safety audit reports, that there are 2,047 findings among the audits that the eight U.S. airlines conducted of foreign carriers, which FAA reviewed from February 2000 through September 2004. The program guidelines define a finding as an instance in which “the performance of the standard does not meet the established criteria” under ICAO standards. We estimate that 75 percent of the audits that the eight U.S. airlines conducted of foreign carriers and that FAA reviewed from February 2000 through September 2004 contained at least one finding. Airline officials told us that most findings related to a lack of documentation. Documentation is important to ensure the implementation of management controls, which should appear, for example, in management directives and operating manuals. However, we found that many of the safety audit findings were broader in scope than a lack of documentation and extended to a lack of underlying policies and procedures. We further estimate that findings related to deficiencies in policies and procedures accounted for 23 percent of all findings.
The audits reviewed the carriers’ compliance in eight major categories (organization, flight operations, flight dispatch, maintenance and engineering, cabin operations, cargo and dangerous goods, ground handling, and security). As shown in figure 5, the findings spanned all eight categories, but the largest numbers were in two categories: (1) flight operations, which govern the activities of the pilots, including training, and (2) maintenance and engineering, which involves the oversight of activities to maintain, repair, and overhaul aircraft, aircraft engines, and parts. In the flight operations category, the findings included a lack of drug and alcohol testing policies and a lack of documentation on flight time and rest requirements for flight personnel. In the maintenance and engineering category, one common type of finding related to the maintenance and calibration of tools and supplies, which could affect safety. After U.S. airlines completed their audits, their safety directors submitted statements to FAA affirming their foreign code-share partners’ compliance with ICAO standards. FAA officials said they rely on these compliance statements as the primary evidence that the foreign code-share partners of U.S. airlines have resolved all safety-critical findings. However, on the basis of our review of a sample of the audit reports, we estimate that, for 68 percent of the identified findings, the documentation was insufficient to demonstrate that the findings had been closed or were resolved. Specifically, the documentation either failed to indicate at least one of the following three elements: (1) what corrective action was taken, (2) who accepted the corrective action, and (3) when the corrective action was accepted; or it was insufficient to determine whether the findings were closed. An estimated 28 percent of the audit reports that contained findings had at least one finding that lacked all three elements documenting corrective actions.
The Code-Share Safety Program guidelines do not indicate that U.S. airlines should have documentation available for FAA’s review to provide evidence of what corrective action was taken, who accepted the action, and when the action occurred in response to the findings identified in audits of their foreign code-share partners. We asked the eight U.S. airlines participating in the Code-Share Safety Program what types of systems they were using to track any findings that were not resolved when the safety audit was complete. We found that three of the U.S. airlines were using computer systems to track the closure of such open findings; three other airlines had computer systems that could track the closure of findings, but their foreign partners had no open findings; and two airlines indicated that they did not have systems to track open findings because their foreign partners did not have any open findings. At one U.S. airline that was using a computer system to track open findings, officials said that a computer malfunction resulted in the loss of 6 months of data. An official from this airline said that before 2004, the airline coordinated closure of any findings directly with the contractor. When asked to produce this information, the airline did not have finding closure documentation available for audits conducted before 2004. This contractor said that although his firm was asked a few times by the U.S. carrier to check on the closure of audit findings by its foreign partner, the U.S. airline was responsible for tracking the closure of findings. Airlines also lacked documentation on the closure of findings in part because an unknown number of findings were closed on-site during the audits and not documented. The FAA program manager said he discouraged closing out findings on-site without documentation during the audits because it does not leave an audit trail about what findings were identified. 
Documentation provides a record of the execution of management controls which, in this situation, relate to the implementation of corrective actions. We estimate that 25 percent of the audits were closed with no findings identified. According to an FAA official, audits that identify no findings are questionable because the airlines must comply with so many requirements under either ICAO or IOSA standards. One U.S. airline used a contractor to conduct 31 of the audits of foreign airlines in our sample from 1999 through 2003, over half of which identified no findings. As described earlier, an FAA official told us that he had discussed with the airline FAA’s concern about the number of audits conducted by the contractor that did not identify any findings. The FAA official also said that he helped the airline revise its approach to conducting the audits as a part of its internal evaluation program. The contractor told us that it is common for the safety audits not to identify findings because the airlines have prepared for the audit, and the audit findings are sometimes resolved on the spot. The contractor also said that his firm often recommended best practices that the foreign carriers could implement, but these recommendations did not relate to violations of ICAO standards and, thus, were not considered to be findings. Furthermore, this contractor said that a representative from the U.S. airline, who accompanied the contractor’s auditors on the audits, kept the U.S. airline informed. The eight U.S. airlines participating in the Code-Share Safety Program have processes to monitor the safety of their foreign code-share partners on an ongoing basis, including their accident and incident rates, financial condition, equipment age, labor issues, and other issues, as called for in the program guidelines. Safety officials from the eight U.S. 
airlines said that, to their knowledge, no fatal accidents had occurred on their foreign code-share routes since the Code-Share Safety Program began in 2000. We observed the systems and information sources that each U.S. airline used for monitoring. Airline officials showed us, for example, safety questionnaires that they sent to their code-share partners between formal safety audits, news subscription services, and aviation safety Web sites. Some airline officials also said they occasionally made on-site visits to monitor their partners’ safety. The airlines also indicated that they monitor any accident and incident data for their code-share partners. According to a safety official at one U.S. airline, a carrier’s past accident and incident record does not conclusively prove that a safety problem exists, but it can be an indicator of other deficiencies, such as gaps in training. Some officials from airlines that are part of global alliances also said that they share safety information about their mutual foreign code-share partners. Four U.S. airlines had created computer databases to maintain this monitoring information while the other four maintained paper files. As U.S. airlines and their foreign code-share partners begin to use IOSA—a new safety audit program developed by IATA—some of the weaknesses that we observed in the Code-Share Safety Program may be addressed, and U.S. airlines may receive other benefits. Increased use of IOSAs may help to ensure that audit findings are resolved and corrective actions implemented. IOSA requires that findings that are identified during the audit be documented, excluding those that are corrected immediately on-site during an audit.
In addition, IOSA requires documentation of closure for findings, including the three elements we identified—(1) a description of the corrective actions taken, (2) who accepted the corrective actions, and (3) when the corrective action was accepted—as well as the reasoning used by the auditing organization to clear the findings. As noted, documentation of one or more of these elements was missing, or it could not be determined if elements were missing for an estimated 68 percent of the audit findings. Six of the eight U.S. airlines use IOSA standards to audit the safety of their foreign code-share partners, one may do so in the future, and one does not plan to use the standards to audit the safety of its foreign code-share partner. Moreover, according to some airline officials, U.S. airlines have a financial incentive to encourage their foreign code-share partners to undergo IOSAs because the auditing costs are shifted from the U.S. airline to its foreign partner. However, not all U.S. airlines plan to require IOSAs of their foreign code-share partners. For example, officials from one U.S. airline said that IOSAs may be too expensive for some small foreign carriers. Similarly, officials at another U.S. airline said that IOSAs are applicable to airlines with large fleets and major processes but may not be practical for smaller airlines. Officials at a third U.S. airline said they preferred to continue conducting the safety audits themselves, rather than using an auditing organization selected by IATA, because they wanted the assurance of examining their partners’ operations in person, rather than relying on an external organization. Finally, increased use of IOSAs may help standardize aviation safety auditing and streamline FAA’s review of audit reports. Under the IOSA program, the airlines can obtain the audit results of their mutual code-share partners. Of the eight U.S. 
airlines with foreign code-share partners, six share 18 of the same foreign code-share partners. FAA recently allowed U.S. airlines to submit for review audit reports that other U.S. airlines had conducted on a shared foreign code-share partner. Some U.S. airlines, as members of global airline alliances, plan to share their audit reports of foreign partners and reduce duplicative audits. The IOSA program should make it easier for airlines that are not in such alliances to share audit reports. Increased sharing of the reports could reduce the number of safety audits that the U.S. airlines would need to conduct of their foreign partners and could thus reduce the number of reports that FAA would need to review. Officials at most U.S. airlines participating in the Code-Share Safety Program told us they believe that the program provides reasonable assurance of safety concerning their foreign code-share partners or is effective. One airline official described the program as an “ingenious technique” that has had the effect of raising aviation safety standards worldwide by ensuring that safety issues will be resolved. This airline official said that some foreign airlines, seeking to become code-share partners of U.S. airlines, have restructured programs, rewritten manuals, and instituted new management techniques—evidence, he said, of the program’s effectiveness. Another U.S. airline official said that, without the Code-Share Safety Program, U.S. airlines might not conduct safety audits of their foreign code-share partners. An official at another U.S. airline said the Code-Share Safety Program is a means to ensure that a carrier meets minimum ICAO-based international aviation safety standards and that the IOSA program creates a baseline of auditing standards to be followed worldwide. However, the official said that a safety audit, whether conducted by an auditing organization selected by IOSA or a U.S. 
airline, is only a snapshot of the carrier for the period in which the audit is conducted. The airline official said that the carrier’s actions before the audit or after the audit may differ and cannot be adequately evaluated until additional safety information is collected from the carrier between safety audits or until the next safety audit. An official at another U.S. airline participating in the Code-Share Safety Program said that although a safety audit provides a very good assessment of an airline’s compliance with aviation safety standards, it does not guarantee the safety of the carrier’s operations. This official added that even if a safety audit were conducted on a carrier monthly, it would not guarantee that the carrier would never have an accident. Furthermore, an official at another U.S. airline said that the Code-Share Safety Program is not necessarily required to provide reasonable assurance of safety concerning the foreign code-share partners of U.S. airlines and that the airline does not necessarily believe that formal, FAA-approved safety audits are the only way to gain such assurance. This airline official said that U.S. airlines should not be required to conduct safety audits of foreign airlines that are operating out of countries that FAA rated as IASA category 1 and that U.S. airlines should be able to choose whether to conduct safety audits in countries that FAA has rated as IASA category 2 or has not rated. This airline official added that while the U.S. airline may continue to audit its partners on its own, it does not believe that FAA should oversee this process. However, an FAA IASA program official told us that the IASA program focuses on the capabilities of the foreign civil aviation authorities and does not ensure the safety of any carriers operating in IASA category 1 countries. 
This FAA official also said that inconsistencies in aviation safety oversight can exist throughout the world, even in countries with “higher” standards, and that some countries exceed ICAO standards, while others do not. A safety official at one U.S. airline said he believed that the Code-Share Safety Program guidelines should be made regulations. Although officials from DOT’s Office of International Aviation and FAA said that making the program regulatory is not needed because it is working well, this airline safety official said that making the program regulatory would allow requirements to be applied more evenly to all airlines participating in the program. This airline official added that DOT is requiring the guidelines to be followed and therefore they are regulations in practice. A safety official at another U.S. airline questioned why DOT requires “guidelines” to be followed. He said that if DOT wants “rigid compliance” with the guidelines, it should make the program regulatory. A safety official at a third U.S. airline said the program’s requirements should be standardized, noting that, for example, FAA was inconsistent about its requirements for reviewing auditors’ qualifications. An aviation safety expert we consulted also said that the program should be made regulatory, observing that both the Code-Share Safety Program and IASA suffer from a “lack of regulatory teeth” and that making them regulatory would provide clarity to the DOT requirements, which he said are “mere policies.” At the same time, this expert said that although the program is not regulatory, the Code-Share Safety Program guidelines clearly lay out what is expected of the airlines and set the standards that must be met. He added that under the guidelines, U.S. airlines are held accountable for the safety of their foreign code-share partners.
Finally, officials at two airlines said that they would like FAA to provide a definition of safety critical or to define when an audit is considered to be closed so that it would be clear which findings must be resolved before closing an audit and submitting a compliance statement. As noted, FAA officials said that they allow nonsafety-critical findings identified during the audits to be addressed after the code-share arrangement is authorized. The safety of foreign code-share partners of U.S. airlines is important because several million people fly on those foreign carriers using tickets purchased from U.S. airlines each year. Under the Code-Share Safety Program, the U.S. airlines are auditing the safety of their foreign code-share partners and identifying safety concerns, which the foreign carriers are addressing. However, FAA’s reviews of the safety audit reports lack management controls for establishing reviewers’ qualifications, verifying corrective actions, and documenting the reviews. FAA, for example, has not established the qualifications needed to review safety audit reports, and FAA field inspectors, who are reviewing many of the safety audit reports, have not been trained in the IOSA program—potentially impeding FAA’s review of audits that were conducted using those standards. In addition, the program guidelines do not provide clear direction to the U.S. airlines and FAA reviewers on which concerns are critical to safety and must be addressed before DOT’s Office of International Aviation will authorize or reauthorize a code-share arrangement. Without a definition of safety-critical concerns and complete documentation of the closure of findings, FAA lacks clear criteria for responding to requests from DOT’s Office of International Aviation about the safety of foreign carriers and lacks assurance that safety-critical concerns have been addressed. 
Furthermore, FAA is not using effective management controls when it fails to document its reviews of the airlines’ safety audit reports. Without complete documentation, a determination cannot be made of what actions FAA took when reviewing the reports, which findings it reviewed, and which corrective actions it verified were implemented. Because documentation on FAA’s verification of the closure of findings was often lacking, we were unable to determine how frequently FAA may have failed to object to the authorization of code-share arrangements with foreign carriers that had not implemented all corrective actions in response to the findings. FAA also has not implemented a DOT Inspector General’s recommendation that it conduct a comprehensive examination of a sample of audit reports to verify the underlying documentation. Furthermore, FAA’s not collecting and tracking safety audit findings is an obstacle to conducting trend analysis or spotting anomalies. The airlines’ increasing adoption of the IOSA program as a worldwide safety auditing standard is likely to change how FAA conducts its safety reviews of foreign code-share partners of U.S. airlines. Moreover, IOSA requires that actions to correct all findings, except those that are corrected during an audit, be documented—a requirement that is lacking in FAA’s program. However, the adoption of the IOSA program is likely to be gradual, given that, as of June 2005, 66 of IATA’s 265 members had completed the program. Finally, although DOD and FAA officials said they have different program objectives, the two federal agencies are nevertheless duplicating efforts by reviewing many of the same audit reports. In addition, DOD is not receiving the foreign airport security assessment information from TSA that DOT is receiving. TSA’s information would provide DOD with more complete data for its safety reviews. 
To improve the safety oversight of foreign code-share operations, we recommend that the Secretary of Transportation direct the FAA Administrator to implement the following three recommendations:

1. Revise the Code-Share Safety Program guidelines to improve the effectiveness of the program and the clarity of the procedures that the airlines should follow in documenting and closing out safety audit findings. Because the audit guidelines indicate that the airlines should not submit compliance statements until all corrective actions have been completed, but FAA is allowing the airlines to resolve “nonsafety-critical” findings later, FAA should consider either following that guideline or defining “safety-critical” audit findings, so that the airlines and FAA reviewers know which types of findings must be corrected before submitting the compliance statements.

2. Develop mechanisms to enhance FAA’s management controls over its reviews of the safety audit reports. In developing the mechanisms, FAA should consider standardizing the qualifications and training needed for agency staff to review the airlines’ safety audit reports; identifying ways to document its reviews of the airlines’ safety audit reports; increasing the scrutiny of audit reports that have an unusually high or low number of findings; periodically selecting a sample of safety audits to conduct a comprehensive review of the underlying documentation collected; and collecting and analyzing information on the audit findings for the foreign code-share partners of U.S. airlines so that the data can be more easily quantified and analyzed to spot possible trends and anomalies, should FAA decide such analyses are needed.

3. Finally, explore with DOD potential opportunities to reduce duplication of efforts in reviewing the same safety audit reports.
Because security is an important component of assessing airline safety, to improve DOD’s oversight of foreign carriers that transport DOD personnel, we also recommend that the Secretary of Homeland Security direct the Assistant Secretary of Homeland Security for TSA to develop a process of routinely coordinating with DOD regarding information on the security of foreign airports for DOD to consider in reviewing the safety of foreign airlines. Such a process could be documented in a memorandum of understanding or other written procedures to ensure such coordination. We provided drafts of this report to the Department of Homeland Security (DHS), DOD, and DOT. DHS provided written comments, agreeing with our recommendation regarding TSA. DHS’s comments are reprinted in appendix III. DOD provided no comments on our findings or recommendations. DOD and DOT provided some technical clarifications, which we incorporated into this report as appropriate. We received comments from DOT officials, including FAA’s Deputy Associate Administrator for Aviation Safety. FAA generally agreed with the report and agreed to consider our recommendations. In addition, FAA provided comments on the Code-Share Safety Program, emphasizing that it is a collaborative effort among DOT’s Office of the Secretary, FAA, and the air carriers. FAA officials also said that the program established guidelines for approving international code-share operations, with the intent of encouraging the highest possible levels of safety for those operations. According to FAA, the program outlines the necessary steps that U.S. air carriers must follow in seeking approval from DOT to conduct code-share operations with foreign air carriers. The officials added that the Code-Share Safety Program charges U.S. air carriers with the primary responsibility for ensuring that their foreign code-share partners comply with applicable international aviation standards.
As agreed with your office, unless you announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees; the Secretary of Transportation; the Administrator of FAA; the Secretary of Defense; the Secretary of Homeland Security; and the Assistant Secretary of Homeland Security for the Transportation Security Administration. Copies will also be available to others upon request and at no cost on GAO’s Web site at www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Our objective was to review the measures that the federal government is taking to provide reasonable assurance of safety and security when passengers travel on flights operated by the foreign code-share partners of U.S. airlines. To accomplish this, we reviewed (1) the extent to which the Department of Transportation’s (DOT) authorization of U.S. airlines’ code-share arrangements with foreign airlines is designed to consider safety and security, (2) how well FAA has managed the Code-Share Safety Program, and (3) the extent to which U.S. airlines have implemented the Code-Share Safety Program, and the results of their efforts. To determine how safety and security are considered in DOT’s authorization of U.S.
airlines’ code-share arrangements with foreign airlines, we interviewed officials at DOT’s Office of International Aviation, Federal Aviation Administration (FAA), Transportation Security Administration (TSA), and the Department of Justice (DOJ) and reviewed the Code-Share Safety Program guidelines and related program documentation, applicable international aviation safety standards, and relevant legal authorities. Our review covered the U.S. airlines’ code-share partnerships with foreign carriers that DOT authorized from February 2000, when the Code-Share Safety Program began, through fiscal year 2004. At DOT, we interviewed the officials who decide whether to authorize such partnerships about the authorization process, their sources of information, and how often they authorize the partnerships. To gain a better understanding of the authorization process and the information considered, we also reviewed a sample of code-share applications that U.S. airlines had filed to establish code-share partnerships with foreign carriers. Our sample consisted of one randomly selected application filed by each of the eight U.S. airlines participating in the Code-Share Safety Program. We also interviewed DOT security officials about how they provide security clearances for foreign carriers and how often they have provided those clearances for code-share authorization. Because TSA was the source of aviation security information for DOT, we interviewed TSA officials about how they assess the security of foreign airlines and airports. We also reviewed data from TSA about the results and frequency of its security assessments of foreign airports and related legal authorities. Based on our understanding of the data through interviews with TSA officials, we determined that the data were sufficiently reliable for our purposes. 
In addition, we interviewed DOT officials who review the competitive aspects of the code-share arrangements about how they conduct their reviews and how often they have provided those clearances for code-share authorization. Because these DOT officials received advice from DOJ on potential antitrust issues involving the code-share partnerships, we also interviewed DOJ officials who provided that advice about their process and sources of information. At FAA, we interviewed officials about how they assess the capabilities of foreign civil aviation authorities through the International Aviation Safety Assessment (IASA) program and how those assessments relate to the Code-Share Safety Program. We also analyzed data on the results and frequency of IASA reviews since the Code-Share Safety Program was initiated. Based on our understanding of the data through interviews with FAA officials, we determined that the data were sufficiently reliable for our purposes. We reviewed documentation that FAA staff had prepared when they reviewed the airlines’ safety audit reports to determine how they documented their reviews. We also discussed with FAA officials how often FAA provided memorandums of no objection to DOT’s Office of International Aviation to support U.S. airlines’ applications for code-share arrangements with foreign carriers. Because the Code-Share Safety Program was designed to assess foreign airlines’ compliance with aviation safety standards established by the International Civil Aviation Organization (ICAO), we interviewed ICAO officials about the standards, related international aviation safety issues, and the ICAO Universal Safety Oversight Audit Program, which assesses the capabilities of countries’ civil aviation authorities.
In addition, because many airlines are planning to use a new international safety audit program—the International Air Transport Association’s (IATA) Operational Safety Audit (IOSA) program—to assess the safety of their foreign partners, we interviewed IATA officials about how the program was developed, how airlines plan to implement it, and how it could affect the Code-Share Safety Program. For background information on how aviation safety varies internationally, we obtained data from IATA on aviation accident rates for different world regions. We did not review the reliability of IATA’s aviation accident data because we used this information only for background purposes. We also interviewed officials from the Air Transport Association—a U.S. airline association—about its involvement in establishing the DOD safety audit program and its views on the Code-Share Safety Program and FAA’s IASA program. Finally, because we found during our review that DOD had also established a program for reviewing the safety of foreign carriers, we interviewed DOD officials about the design and implementation of its program. In addition, we obtained information about the safety audit reports that DOD had reviewed from fiscal year 2001 through fiscal year 2004 and the results, which we compared with the results of those that FAA reviewed. We also discussed with FAA and DOD officials the extent to which they have coordinated their efforts. To determine how well FAA has managed the Code-Share Safety Program, we evaluated whether DOT’s Office of the Secretary and FAA incorporated selected government auditing standards in the program’s design and whether FAA effectively used management controls in reviewing the safety audit reports. Because the Code-Share Safety Program establishes an audit program, we reviewed whether the program’s design, as reflected in the program guidelines, conforms to certain standards identified in Government Auditing Standards.
We reviewed selected general standards that are contained in Government Auditing Standards (independence, professional judgment, and competence) to assess the program’s design. Although we examined the audit methodologies that the U.S. airlines had developed and submitted to FAA for review, we did not review them for conformance with government auditing standards because FAA had already conducted this review as a condition of accepting the U.S. airlines’ participation in the program. In addition, because we were evaluating the management of a government program, we examined FAA’s application of management controls, which is synonymous with the term “internal controls,” in its reviews of the safety audit reports using Standards for Internal Control in the Federal Government. We selected the management controls that were applicable to FAA’s review of the audit reports for establishing reviewers’ qualifications, verifying corrective actions, documenting the reviews, and monitoring and measuring performance. We also reviewed the recommendations contained in a 1999 DOT Office of the Inspector General report on aviation safety under international code-share agreements to determine whether and to what extent the report’s recommendations—about how a code-share safety audit program should be designed—were implemented. To determine the extent to which U.S. airlines have implemented the Code-Share Safety Program and the results, we interviewed officials at the eight U.S. airlines that were participating in the program about how they were assessing the safety of their foreign partners and reviewed a sample of the reports. We drew a stratified random probability sample of 153 reports of audits conducted by U.S. airlines of their foreign code-share partners. This sample was drawn from a population of documentation maintained by FAA for the 242 audit reports that the agency had reviewed from February 2000 through September 2004.
Of these 153 sampled audits, 2 were out of scope because the airlines withdrew them from consideration and 2 were in scope, but we did not complete our reviews of these reports. We ultimately collected information for 149 in-scope audits. With this probability sample, each audit report in the study population had a positive probability of being selected, and that probability could be computed for any audit. We stratified the population into nine groups on the basis of the U.S. airline conducting the audit, and further, for some of those airlines, whether the foreign airlines being audited were code-share partners with more than one U.S. airline or whether FAA’s records of its reviews of the audit reports contained comments about the findings. Each sampled audit was subsequently weighted in the analysis to statistically account for all of the audits in the study population, including those that were not selected. During our audit work, three airlines provided information about a total of 14 additional audit reports that, according to the airlines, FAA had reviewed. These 14 audits were not included in the population from which we drew our sample because FAA’s files did not contain information about them. Estimates generated in this report pertain only to the 242 audit reports that, according to FAA’s files, the agency reviewed. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results in 95-percent confidence intervals. These are intervals that would contain the actual population values for 95 percent of the samples we could have drawn. As a result, we are 95-percent confident that each of the confidence intervals in this report will contain the true values in the study population. 
All percentage estimates from the sample of audits have sampling margins of error of plus or minus 10 percentage points or less unless otherwise noted. All numerical estimates other than percentages have margins of error of plus or minus 10 percent of the value of those estimates or less unless otherwise noted. We did not determine whether the airlines complied with international aviation safety standards. However, we performed a content analysis of the audit reports in our sample to determine what types of safety findings were identified regarding the foreign carriers. We recorded the findings and grouped them into eight categories: (1) organization, (2) flight operations, (3) flight dispatch, (4) maintenance and engineering, (5) cabin operations, (6) cargo and dangerous goods, (7) ground handling, and (8) security—because the reports were generally organized into those categories. We then further divided those eight categories into at least six issue subcategories. Two coders independently categorized each finding, and any coding disagreements were resolved between the coders or by a third reviewer. During our review of the audit reports, we also attempted to determine whether corrective actions taken in response to the findings were documented. To accomplish this, we looked for evidence of (1) what corrective action was taken, (2) who accepted the corrective action, and (3) when the corrective action was accepted. We considered these three elements to be sufficient evidence of documentation after observing how some airlines had documented the closure of findings and by reviewing Government Auditing Standards, which indicate that auditors should examine whether recommendations from previous audits have been implemented, and Standards for Internal Control in the Federal Government, which require management to determine whether proper actions have been taken in response to findings and audit recommendations.
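The stratified estimation and margin-of-error calculation described above can be sketched as follows. This is an illustrative computation only: the stratum sizes, sample sizes, and counts below are invented placeholder numbers, not GAO's actual data, and the formula used is the standard stratified proportion estimator with a finite-population correction.

```python
import math

# Hypothetical stratum data: (population audits N_h, sampled audits n_h,
# sampled audits exhibiting the attribute of interest x_h).
# These are made-up illustrative numbers, not figures from the report.
strata = [
    (120, 75, 55),
    (80, 50, 30),
    (42, 24, 14),
]

N = sum(N_h for N_h, _, _ in strata)  # total audits in the study population

# Weighted estimate of the population proportion: each sampled audit in
# stratum h statistically represents N_h / n_h audits in that stratum.
p_hat = sum(N_h * (x_h / n_h) for N_h, n_h, x_h in strata) / N

# Variance of the stratified estimator, with a finite-population
# correction (1 - n_h / N_h), since the sample covers a large share
# of each stratum.
var = 0.0
for N_h, n_h, x_h in strata:
    p_h = x_h / n_h
    var += (N_h / N) ** 2 * (1 - n_h / N_h) * p_h * (1 - p_h) / (n_h - 1)

# Half-width of the 95 percent confidence interval (z = 1.96).
margin = 1.96 * math.sqrt(var)
print(f"estimate: {p_hat:.1%}, 95% CI: +/- {margin:.1%}")
```

Because roughly 150 of 242 audits were sampled, the finite-population correction meaningfully tightens the interval; without it, the computed margins would be noticeably wider than the plus-or-minus-10-point bound the report states.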
In addition to reviewing the audit reports at the airlines, we interviewed safety officials (typically the safety directors) at all eight U.S. airlines participating in the Code-Share Safety Program about how they assess the safety of their foreign code-share partners, including how they plan, carry out, and close the audits, as well as monitor the safety of their foreign partners between audits. We also observed the monitoring systems that they had implemented, as the program guidelines require, and sources of information that they used to monitor the safety of their foreign code-share partners. In addition, we asked the U.S. airline safety officials about their program-related interactions with FAA and DOT’s Office of International Aviation, whether and how they believe the program could be improved, and what they thought about the implications of the airlines’ increased adoption of IOSA as an international aviation safety audit program. We also obtained the views of an aviation safety expert about the Code-Share Safety Program. We selected this expert because of his experience in aviation safety, which included helping to design FAA’s IASA program. Because some airlines had used contractors to conduct safety audits of their foreign code-share partners, we interviewed one contractor who said that he had conducted or helped to conduct safety audits for five of the eight U.S. airlines in the Code-Share Safety Program about how his firm conducted the audits and the qualifications of his staff. Finally, for background information on the extent to which passengers are traveling on foreign code-share partners of U.S. airlines, we asked the eight U.S. airlines to provide such data from 2000 through 2004 using the same methodology, which was based on the number of tickets that the U.S. airlines sold for travel on their foreign code-share partners. For example, if a U.S.
airline sold a single ticket for travel that included one or more foreign code-share partner flight segments, this ticket was counted once. If a U.S. airline sold separate tickets for travel that included more than one foreign code-share partner flight segment, each flight segment was counted as a separate ticket. Some airlines could not provide data for all 4 years, but all eight U.S. airlines were able to provide data for 2004, which we reported. We did not independently verify this information provided by the airlines because it was used only for background purposes.
Alaska (1 partner). America West (1 partner). American (24 partners). Continental (18 partners): AeroLitoral, Aeromexico, Air Europa, Air France, Alitalia, Brit Air, COPA, CSA Czech, Emirates, EVA Airways, flybe.British European, KLM Cityhopper, KLM Exel Airlines, KLM Royal Dutch Airlines, Korean Airlines, Maersk Air, TAP Air Portugal, Virgin Atlantic. Delta (21 partners). Northwest (11 partners): Aeromexico, Air Alps, Air France, Alitalia, CSA Czech, KLM Cityhopper, KLM Exel, KLM Royal Dutch Airlines, Korean Airlines, Malev Express, Malev Hungarian Airlines. United (23 partners). US Airways (9 partners).
In addition to the above individuals, Elizabeth Eisenstadt, Jessica A. Evans, Brandon Haller, Bob Homan, David Hooper, Casey Keplinger, Elizabeth A. Marchak, Sara Ann Moessbauer, Mark Ramage, and Sidney Schwartz made key contributions to this report. | U.S. airlines are increasingly relying on code-share partnerships with foreign carriers to provide additional sources of revenue. Code-sharing is a marketing arrangement in which an airline places its designator code on a flight operated by another airline and sells and issues tickets for that flight. To determine whether the foreign code-share partners of U.S. airlines meet an acceptable level of safety, in 2000, the Department of Transportation (DOT) established the Code-Share Safety Program, which requires U.S.
airlines to conduct safety audits of their foreign code-share partners as a condition of code-share authorization. GAO's objective was to assess the federal government's efforts to provide reasonable assurance of safety and security on foreign code-share flights. GAO reviewed (1) the extent to which DOT's code-share authorization process is designed to consider safety and security, (2) the Federal Aviation Administration's (FAA) management of the Code-Share Safety Program, and (3) the implementation of the program by airlines and the results. In considering U.S. airlines' requests to establish code-share arrangements with foreign carriers, DOT's Office of International Aviation reviews, among other things, any safety and security objections from FAA and the Transportation Security Administration (TSA). FAA assesses the safety of foreign civil aviation authorities and reviews reports of the safety audits that U.S. carriers have conducted of their foreign airline partners. From fiscal years 2000 through 2004, DOT (1) authorized U.S. airlines to establish or maintain code-share arrangements with foreign carriers 270 times and (2) did not suspend any arrangements because of known safety concerns. According to FAA, however, U.S. airlines occasionally have decided not to pursue code-share arrangements with foreign carriers because they expected FAA would object, and FAA sometimes puts its reviews of proposed code-share arrangements on hold if the agency has safety concerns. FAA and TSA did not object to any of the authorizations during that period for safety or security reasons. Although not involved in the code-share authorization process, the Department of Defense (DOD) reviews the safety of foreign airlines that transport DOD personnel. For their separate programs, FAA and DOD are reviewing many of the same safety audit reports on foreign carriers. The Code-Share Safety Program, which calls for U.S.
airlines to conduct periodic safety audits of their foreign code-share partners, incorporates selected government auditing standards involving independence, professional judgment, and competence. However, FAA's reviews of the safety audit reports lacked management controls for ensuring reviewers' qualifications, documenting the closure of safety audit findings, verifying corrective actions taken in response to findings, and documenting reviews. Eight U.S. airlines with foreign code-share partners have implemented the DOT program by conducting safety audits of their foreign partners. According to our review of a random sample of audit reports that FAA reviewed from fiscal years 2000 through 2004, the largest numbers of safety findings identified were in the categories of (1) flight operations and (2) maintenance and engineering. GAO estimates that for 68 percent of the findings, the documentation was insufficient to demonstrate that the findings were closed or resolved. Airlines are beginning to adopt a new safety audit program that requires the documentation of findings and corrective actions. |
The Money Laundering and Financial Crimes Strategy Act of 1998 (Strategy Act) required the President—acting through the Secretary of the Treasury and in consultation with the Attorney General and other relevant federal, state, and local law enforcement and regulatory officials—to develop and submit an annual National Money Laundering Strategy (NMLS) to the Congress by February 1 of each year from 1999 through 2003. The goal of the Strategy Act was to increase coordination and cooperation among the various law enforcement and regulatory agencies and to effectively distribute resources to combat money laundering and related financial crimes. The 1998 Strategy Act required that each NMLS define comprehensive, research-based goals, objectives, and priorities for reducing money laundering and related financial crimes in the United States. The annual NMLS generally has included multiple priorities to combat money laundering to guide federal agencies’ activities. Another provision of the Strategy Act authorized the Secretary of the Treasury to designate High Intensity Money Laundering and Related Financial Crime Areas (HIFCA), in which federal, state, and local law enforcement would work cooperatively to develop a focused and comprehensive approach to targeting money-laundering activity. As envisioned by the Strategy Act, HIFCAs were to represent a major NMLS initiative and were expected to have a flagship role in the U.S. government’s efforts to disrupt and dismantle large-scale money laundering operations. They were intended to improve the coordination and quality of federal money laundering investigations by concentrating the investigative expertise of federal, state, and local agencies in unified task forces, thereby leveraging resources and creating investigative synergies. The former U.S. Customs Service, which is now part of U.S. Immigration and Customs Enforcement (ICE), and the FBI both have a long history of investigating money laundering and other financial crimes.
In response to the terrorist attacks of September 11, Treasury and Justice both established multiagency task forces dedicated to combating terrorist financing. Treasury established Operation Green Quest, led by Customs, to augment existing counterterrorist efforts by targeting current terrorist funding sources and identifying possible future sources. In addition to targeting individuals and organizations, Operation Green Quest was designed to attack the financial systems that may be used by terrorists to raise and move funds, such as fraudulent charities and the shipment of bulk currency. In January 2003, Customs expanded Operation Green Quest by doubling the personnel commitment to a total of approximately 300 agents and analysts nationwide to work solely on terrorist financing matters. In March 2003, Operation Green Quest was transferred to ICE, within the Department of Homeland Security. On September 13, 2001, the FBI formed a multiagency task force—which is now known as the Terrorist Financing Operations Section (TFOS)—to combat terrorist financing. The mission of TFOS has evolved into a broad role to identify, investigate, prosecute, disrupt, and dismantle all terrorist- related financial and fundraising activities. The FBI also took action to expand the antiterrorist financing focus of its Joint Terrorism Task Forces (JTTF)—teams of local and state law enforcement officials, FBI agents, and other federal agents and personnel whose mission is to investigate and prevent acts of terrorism. In 2002, the FBI created a national JTTF in Washington, D.C., to collect terrorism information and intelligence and funnel it to the field JTTFs, various terrorism units within the FBI, and partner agencies. The attacks of September 11 emphasized the need for federal agencies to wage a coordinated campaign against sources of terrorist financing. 
Following September 11, representatives of the FBI and Operation Green Quest met on several occasions to attempt to delineate antiterrorist financing roles and responsibilities. However, such efforts were largely unsuccessful until May 2003, when the Attorney General and the Secretary of Homeland Security signed a Memorandum of Agreement that contained a number of provisions designed to resolve jurisdictional issues and enhance interagency coordination of terrorist financing investigations. According to the Agreement, the FBI is to lead terrorist financing investigations and operations, using the intergovernmental and intra-agency national JTTF at FBI headquarters and the JTTFs in the field. The Agreement also specified that, through TFOS, the FBI is to provide overall operational command to the national JTTF and the field JTTFs. Further, to increase information sharing and coordination of terrorist financing investigations, the Agreement required the FBI and ICE to (1) detail appropriate personnel to each other’s agency and (2) develop specific collaborative procedures to determine whether applicable ICE investigations or financial crimes leads may be related to terrorism or terrorist financing. Also, the Agreement required the FBI and ICE to produce a joint written report on the status of the implementation of the Agreement 4 months from its effective date. In September 2003, we reported that, as a mechanism for guiding the coordination of federal law enforcement agencies’ efforts to combat money laundering and related financial crimes, the NMLS has had mixed results but generally has not been as useful as envisioned by the Strategy Act.
For example, we reported that HIFCA task forces were expected to have a central role in coordinating law enforcement agencies’ efforts to combat money laundering but generally had not yet been structured and operating as intended and had not reached their expectations for leveraging investigative resources or creating investigative synergies. The NMLS called for each HIFCA to include participation from all relevant federal, state, and local agencies. However, in some cases, federal law enforcement agencies had not provided the levels of commitment and staffing to the task forces called for by the strategy. We found, for instance, that most of the HIFCAs did not have FBI or Drug Enforcement Administration (DEA) agents assigned full time to the task forces. FBI officials cited resource constraints as the primary reason why the bureau did not fully participate. A DEA official told us that, because of differences in agencies’ guidelines for conducting undercover money laundering investigations, DEA would not dedicate staff to HIFCA task force investigative units but would support intelligence-related activities. Also, we noted that four of the five operating HIFCAs had little or no participation from state and local law enforcement agencies. Various task force officials mentioned lack of funding to compensate or reimburse participating state and local law enforcement agencies as a barrier to their participation in HIFCA operations. While recognizing that law enforcement agencies have resource constraints and competing priorities, we noted that HIFCA task forces were expected to make more effective use of existing resources or of such additional resources as may be available. As called for in the 2002 NMLS, Treasury and Justice are in the process of reviewing the HIFCA task forces to enhance their potential and remove obstacles to their effective operation. 
The results of this review could provide useful input for an evaluation report on the HIFCA program, which the Strategy Act requires Treasury to submit to the Congress in 2004. We further reported that, while Treasury and Justice had made progress on some NMLS initiatives designed to enhance interagency coordination of money laundering investigations, most had not achieved the expectations called for in the annual strategies, including plans to (1) use a centralized system to coordinate investigations and (2) develop uniform guidelines for undercover investigations. Headquarters officials cited differences in the various agencies’ anti-money laundering priorities as a primary reason why initiatives had not achieved their expectations. In our September 2003 report, we noted that our work in reviewing national strategies for various crosscutting issues has identified several critical components needed for their development and implementation, including effective leadership, clear priorities, and accountability mechanisms. For a variety of reasons, these critical components generally have not been fully reflected in the development and implementation of the annual NMLS. For example, the joint Treasury-Justice leadership structure that was established to oversee NMLS-related activities generally has not resulted in (1) reaching agreement on the appropriate scope of the strategy; (2) ensuring that target dates for completing strategy initiatives were met; and (3) issuing the annual NMLS by February 1 of each year, as required by the Strategy Act. Also, although Treasury generally took the lead role in strategy-related activities, it had no incentives or authority to get other departments and agencies to provide necessary resources or compel their participation. And, the annual strategies have not identified and prioritized issues that required the most immediate attention. 
Each strategy contained more priorities than could be realistically achieved, the priorities have not been ranked in order of importance, and no priority has been explicitly linked to a threat and risk assessment. Further, although the 2001 and 2002 strategies contained initiatives to measure program performance, none had been used to ensure accountability for results. Officials attributed this to the difficulty in establishing such measures for combating money laundering. In addition, we noted that Treasury had not provided annual reports to the Congress on the effectiveness of policies to combat money laundering and related financial crimes, as required by the Strategy Act. As mentioned previously, unless reauthorized by the Congress, the requirement for an annual NMLS ended with the issuance of the 2003 strategy. To assist in congressional deliberations on whether there is a continuing need for an annual NMLS, we reviewed the development and implementation of the 1999 through 2002 strategies. 
Our September 2003 report recommended that—if the Congress reauthorizes the requirement for an annual NMLS—the Secretary of the Treasury, working with the Attorney General and the Secretary of Homeland Security, should take appropriate steps to strengthen the leadership structure responsible for strategy development and implementation by establishing a mechanism that would have the ability to marshal resources to ensure that the strategy’s vision is achieved, resolve disputes between agencies, and ensure accountability for strategy implementation; link the strategy to periodic assessments of threats and risks, which would provide a basis for ensuring that clear priorities are established and focused on the areas of greatest need; and establish accountability mechanisms, such as (1) requiring the principal agencies to develop outcome-oriented performance measures that must be linked to the NMLS’s goals and objectives and that also must be reflected in the agencies’ annual performance plans and (2) providing the Congress with periodic reports on the strategy’s results. In commenting on a draft of the September 2003 report, Treasury said that our recommendations are important, should the Congress reauthorize the legislation requiring future strategies; Justice said that our observations and conclusions will be helpful in assessing the role that the strategy process has played in the federal government’s efforts to combat money laundering; and Homeland Security said that it agreed with our recommendations. Our review of the development and implementation of the annual strategies did not cover the 2003 NMLS, which was issued in November 2003, about 2 months after our September 2003 report.
While we have not assessed the 2003 NMLS in detail, we note that it emphasized that “the broad fight against money laundering is integral to the war against terrorism” and that money laundering and terrorist financing “share many of the same methods to hide and move proceeds.” In this regard, one of the major goals of the 2003 strategy is to “cut off access to the international financial system by money launderers and terrorist financiers more effectively.” Under this goal, the strategy stated that the United States will continue to focus on specific financing mechanisms—including charities, bulk cash smuggling, trade-based schemes, and alternative remittance systems—that are particularly vulnerable or attractive to money launderers and terrorist financiers. As mentioned previously, the NMLS was adjusted in 2002 to reflect new federal priorities in the aftermath of the September 11 attacks, including a goal to combat terrorist financing. However, due to difficulties in reaching agreement over which agency should lead investigations, the 2002 NMLS did not address agency and task force roles and interagency coordination procedures for investigating terrorist financing. Law enforcement officials told us that the lack of clearly defined roles and coordination procedures contributed to duplication of efforts and disagreements over which agency should lead investigations. To help resolve these long-standing jurisdictional issues, in May 2003, the Attorney General and the Secretary of Homeland Security signed a Memorandum of Agreement regarding roles and responsibilities in investigating terrorist financing. In our February 2004 report, we noted that most of the key Memorandum of Agreement provisions had been implemented or were in the process of being implemented. 
For example, in accordance with the Agreement, the FBI and ICE have cross-detailed key management personnel at the headquarters level, with an ICE manager serving as Deputy Section Chief of TFOS and an FBI manager detailed to ICE’s financial crimes division. Also, the FBI and ICE have developed collaborative procedures to determine whether appropriate ICE money laundering investigations or financial crime leads may be related to terrorism or terrorist financing. Further, as an integral aspect of the collaborative procedures, ICE created a joint vetting unit, in which ICE and FBI personnel—who have full access to ICE and FBI databases—are to conduct reviews to determine whether a potential nexus to terrorism or terrorist financing exists in applicable ICE investigations or financial crimes leads. If so, the matter is to be referred to TFOS, where the FBI Section Chief is to provide the ICE Deputy Section Chief with information demonstrating the terrorism nexus, as well as the stage and development of the corresponding FBI investigation. Then, the Section Chief and the ICE Deputy Section Chief are to discuss the elements of the terrorism nexus, ICE’s equity or commitment of resources to date in the investigation, violations being pursued by ICE before the Memorandum of Agreement, and the direction of the investigation. After this collaborative consultation, the FBI and ICE are to decide (1) whether the ICE investigation will be conducted under the auspices of a JTTF and (2) agency roles in pursuing related investigations. Specific investigative strategies generally are to be developed at the field level by FBI, ICE, and U.S. Attorneys Office personnel. The Terrorist Financing Unit of the Counterterrorism Section in Justice’s Criminal Division is involved in coordinating and prosecuting matters and cases involving terrorist financing, which are investigated by both the FBI and ICE.
Another Agreement provision—requiring ICE to detail a significant number of appropriate personnel to the national JTTF and JTTFs in the field—is being handled on a location-specific, case-by-case basis. In response to our inquiries, FBI and ICE officials said that this provision was not intended to refer to a specific number of personnel and certainly was not intended to imply that all former Operation Green Quest agents were to be detailed to JTTFs. According to ICE officials, as of February 2004, a total of 277 ICE personnel (from various legacy agencies) were assigned full time to JTTFs—a total that consisted of 161 former Immigration and Naturalization Service agents, 59 Federal Air Marshals, 32 former Customs Service agents, and 25 Federal Protective Service agents. ICE officials said that this total does not include ICE agents who will be assigned to JTTFs in consonance with vetted cases being transitioned to JTTFs, nor does it include ICE investigators who participate part time on JTTFs. Another provision in the May 2003 Memorandum of Agreement required that the FBI and ICE jointly report to the Attorney General, the Secretary of Homeland Security, and the Assistant to the President for Homeland Security on the implementation status of the Agreement 4 months from its effective date. As of May 2, 2004, the FBI and ICE had not yet produced the required joint report on the implementation status. The Memorandum of Agreement, by granting the FBI the lead role in investigating terrorist financing, altered ICE’s role in investigating terrorism-related financial crimes. However, while the Agreement specified that the FBI has primary investigative jurisdiction over confirmed terrorism-related financial crimes, the Agreement does not preclude ICE from investigating suspicious financial activities that have a potential (unconfirmed) nexus to terrorism—which was the primary role of the former Operation Green Quest. 
Moreover, the Agreement generally has not affected ICE’s mission or role in investigating other financial crimes. Specifically, the Agreement did not affect ICE’s statutory authorities to conduct investigations of money laundering and other traditional financial crimes. ICE investigations can still cover the wide range of financial systems—including banking systems, money services businesses, bulk cash smuggling, trade-based money laundering systems, illicit insurance schemes, and illicit charity schemes—that could be exploited by money launderers and other criminals. According to ICE headquarters officials, ICE is investigating the same types of financial systems as before the Memorandum of Agreement. Further, our February 2004 report noted that—while the Memorandum of Agreement represents a partnering commitment by the FBI and ICE— continued progress in implementing the Agreement will depend largely on the ability of these law enforcement agencies to meet various operational and organizational challenges. For instance, the FBI and ICE face challenges in ensuring that the implementation of the Agreement does not create a disincentive for ICE agents to initiate or support terrorist financing investigations. That is, ICE agents may perceive the Agreement as minimizing their role in terrorist financing investigations. Additional challenges involve ensuring that the financial crimes expertise and other investigative competencies of the FBI and ICE are effectively utilized and that the full range of the agencies’ collective authorities—intelligence gathering and analysis as well as law enforcement actions, such as executing search warrants and seizing cash and other assets—are effectively coordinated. Inherently, efforts to meet these challenges will be an ongoing process. 
Our interviews with FBI and ICE officials at headquarters and three field locations indicated that long-standing jurisdictional and operational disputes regarding terrorist financing investigations may have strained interagency relationships to some degree and could pose an obstacle in fully integrating investigative efforts. From a strategic perspective, the annual NMLS has had mixed results in guiding the efforts of law enforcement in the fight against money laundering and, more recently, terrorist financing. Although expected to have a flagship role in the U.S. government’s efforts to disrupt and dismantle large-scale money laundering operations, HIFCA task forces generally are not yet structured and operating as intended. Treasury and Justice are in the process of reviewing the HIFCA task forces, which ultimately could result in program improvements. Also, most of the NMLS initiatives designed to enhance interagency coordination of money laundering investigations have not yet achieved their expectations. While the annual NMLS has fallen short of expectations, federal law enforcement agencies recognize that they must continue to develop and use interagency coordination mechanisms to leverage existing resources to investigate money laundering and terrorist financing. Through our work in reviewing national strategies, we identified critical components needed for successful strategy development and implementation, but, to date, these components have not been well reflected in the annual NMLS. The requirement for an annual NMLS ended with the issuance of the 2003 strategy. If the Congress reauthorizes the requirement for an annual NMLS, we continue to believe that incorporating these critical components into the strategy—a strengthened leadership structure, the identification of key priorities, and the establishment of accountability mechanisms—could help resolve or mitigate the deficiencies we identified. 
Also, regarding investigative efforts against sources of terrorist financing, the May 2003 Memorandum of Agreement signed by the Attorney General and the Secretary of Homeland Security represents a partnering commitment by two of the nation’s law enforcement agencies, the FBI and ICE. In the 12 months since the Agreement was signed, progress has been made in waging a coordinated campaign against sources of terrorist financing. Continued progress will depend largely on the agencies’ ability to establish and maintain effective interagency relationships and meet various other operational and organizational challenges. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. For further information about this testimony, please contact Richard M. Stana at (202) 512-8777. Other key contributors to this statement were Danny R. Burton and R. Eric Erdman. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Money laundering provides the fuel for terrorists, drug dealers, arms traffickers, and other criminals to operate and expand their activities. GAO focused on two issues. The first is whether the nation's annual National Money Laundering Strategy has served as a useful mechanism for guiding federal law enforcement efforts to combat money laundering and terrorist financing. Unless reauthorized by the Congress, the annual requirement ended with the 2003 strategy. 
The second issue is the implementation status of a May 2003 Memorandum of Agreement, signed by the Attorney General and the Secretary of Homeland Security, that was designed to enhance the coordination of terrorist financing investigations conducted by the Federal Bureau of Investigation (FBI) and the U.S. Immigration and Customs Enforcement (ICE). GAO's September 2003 report noted that the annual strategy generally has not served as a useful mechanism for guiding the coordination of federal law enforcement agencies' efforts to combat money laundering and terrorist financing. For example, although expected to have a central role in coordinating law enforcement efforts, interagency task forces created specifically to address money laundering and related financial crimes generally had not yet been structured and operating as intended and had not reached their expectations for leveraging investigative resources or creating investigative synergies. Also, while the Departments of the Treasury and Justice had made progress on some strategy initiatives designed to enhance interagency coordination of money laundering investigations, most initiatives had not met expectations. Moreover, even though adjusted in 2002 to reflect a new federal priority--combating terrorist financing--the strategy did not address agency and task force roles and interagency coordination procedures for investigating terrorist financing, which contributed to duplication of efforts and disagreements over which agency should lead investigations. GAO's February 2004 report noted that the FBI and ICE had implemented or taken concrete steps to implement most of the key provisions in the May 2003 Memorandum of Agreement on terrorist financing investigations. 
For instance, the agencies had developed collaborative procedures to determine whether applicable ICE investigations or financial crimes leads may be related to terrorism or terrorist financing--and, if so, determine whether these investigations or leads should thereafter be pursued under the auspices of the FBI. However, as of May 2, 2004, the FBI and ICE had not yet issued a joint report on the implementation status of the Agreement, which was required 4 months from its effective date. Also, GAO noted that the FBI and ICE have confronted and will continue to confront a number of operational and organizational challenges, such as ensuring that the financial crimes expertise and other investigative competencies of both agencies are appropriately and effectively utilized. |
The Park Service’s mission has dual objectives. On one hand, the agency is to provide for the public’s enjoyment of the resources that have been entrusted to its care. This objective involves providing for the use of the parks by supplying appropriate visitor services (such as campgrounds and visitor centers) and infrastructure (such as roads and water systems) to support these services. On the other hand, the Park Service is to protect the resources so that they will be unimpaired for the enjoyment of future generations. Balancing these dual objectives has long shaped the dialogue about how best to manage the national park system. In the past few years, the dialogue about how best to manage the park system has taken on a new dimension. While the Congress and the executive branch have been working under increasingly tight budget constraints, the national park system has continued to expand—35 parks have been added since 1985. In addition, the Park Service estimates that its maintenance backlog, including the costs of general maintenance and rehabilitation to existing facilities and roads, exceeds $4 billion. One of the ways the Park Service has dealt with these conditions is to cut back or curtail visitor services in many parks. These cutbacks and curtailments in services have led to concerns about how the agency is being managed—particularly about how priorities are set within the agency. Most of the funding for the Park Service is for park operating budgets. For fiscal year 1997, the Park Service was appropriated about $1.5 billion. Of this, about $1.2 billion was appropriated to cover the operation of the park system—including the headquarters and regional offices. About 80 percent of the operating funds go directly to the parks to cover the costs of their day-to-day operations. This operating budget is the primary funding source for any park. At the park level, it is generally referred to as the base budget. 
The process for formulating park operating budgets is incremental. This process begins with the prior year’s budget as a base and focuses priority setting on requests for increases to the prior year’s base budget. Requests for operating increases primarily take two forms: mandatory pay increases and specific increases for individual parks—some for new or higher levels of ongoing operating responsibilities, such as law enforcement, and others for one-time projects, such as the rehabilitation of a historic property. Headquarters takes the initiative in requesting the funding for all required pay increases on a servicewide basis. However, for park-specific increases, the parks compete against one another for limited funds through their regional and headquarters hierarchy. Thus, the formal priority-setting process focuses primarily on marginal increases to last year’s budget—not on the priorities of ongoing park activities. While headquarters plays a key role in formulating requests for increases to the Park Service’s budget, decisions about spending and operating priorities associated with a park’s base budget are delegated to the park managers. The superintendent—the chief park official—at each of the 374 park units reports to one of several regional directors, each of whom reports to headquarters. However, upon receiving their budget allocation for base operations, the superintendents exercise a great deal of discretion in setting operational priorities. Many of the park officials we spoke with stressed the importance of this decentralized, park-based decision-making structure, under which park managers plan and execute their budget with as little involvement from regional and headquarters managers as possible. Park Service officials at all levels within the agency maintained that park-level managers were in the best position to plan activities at their park and make decisions about priorities and spending on a day-to-day basis. 
Hence, regional and headquarters officials generally do not get involved in priority-setting and spending decisions for parks. Typically, these decisions involve trade-offs among four categories of spending: (1) visitor services (e.g., opening a campground), (2) resource management (e.g., monitoring the condition of threatened species or water quality), (3) maintenance needs (e.g., repairing a trail), and (4) park administration and support (e.g., updating computer systems or attending training). In fiscal year 1997, about 70 percent of the Park Service’s operating budget is allocated for personnel services—salaries and benefits of park employees. The remaining 30 percent is allocated for items such as utilities, contracted services, equipment, training, travel, and supplies. As a general rule, the higher the proportion of personnel to nonpersonnel costs, the less flexibility an agency has to reduce costs in the short term when budgets are tight. Further limiting the Park Service’s flexibility is the large proportion—93 percent—of staff who are permanent employees. Because so many staff are permanent, the parks cannot reduce costs by reducing the largest component of their operating costs—salaries and benefits—during off-peak seasons. At the four parks we visited, the percentage of the park budget dedicated to salaries and benefits ranged from about 75 percent at Yellowstone National Park to about 85 percent at Olympic National Park. Park personnel costs will increase annually with required pay and benefit increases and other administrative actions. To the extent that a park’s budget does not increase at the same rate as its personnel costs, the park must absorb some or all of the increase in salaries and benefits. For example, Independence’s budget increased from $10.42 million in fiscal year 1994 to $10.64 million in fiscal year 1996—an increase of $220,000. 
However, during this 2-year period, salaries and benefits increased by $376,000 and an administratively required salary enhancement program for park rangers cost an additional $455,000. As a result, during this period the increase in the park’s funding did not cover the increase in salaries and benefits, and the park had to absorb over $610,000 in cost increases. Similarly, at Yellowstone, from fiscal year 1993 through fiscal year 1996, the park’s funding increased by $2 million while mandatory salary and nonsalary components, such as utility costs, rose by about $4 million—requiring the park to absorb about $2 million in increased costs over 3 years. At Great Smoky Mountains, from 1994 through 1996, the operating cost increases for personnel alone were more than twice as great as the funding increases. Since park budgets consist primarily of salaries and benefits, absorbing costs can be very difficult without reducing personnel. Parks frequently try to reduce spending for training, travel, and some supplies, but these costs are only a minimal part of their budget. In some cases, parks have had to make further cuts to absorb increases, either by not hiring seasonal employees or by not filling the positions of permanent employees who resign or retire. In either case, having fewer workers means that some activities will not be performed. For example, in 1996, Great Smoky Mountains absorbed increases in costs by hiring fewer seasonal staff. As a result, park managers chose to close two backwoods campgrounds for that year because there were not enough maintenance staff to clean and maintain them. Yellowstone also absorbed increased costs in 1996 and had to cut back on a number of activities, including the operation of a campground and two museums. During the same year, Olympic eliminated six seasonal law enforcement ranger positions. According to park officials, this cutback delayed the response time to park incidents. 
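The cost-absorption arithmetic in the Independence and Yellowstone examples above can be expressed as a simple calculation. The helper function below is only an illustrative sketch; the dollar figures come from this report:

```python
def absorbed_cost(funding_increase, *cost_increases):
    """Amount a park must absorb when mandated cost growth
    outpaces its funding increase (a negative result would
    mean funding growth covered the cost growth)."""
    return sum(cost_increases) - funding_increase

# Independence NHP, fiscal years 1994-1996 (dollars, from the report):
# funding rose $220,000 while salaries and benefits rose $376,000 and
# a required ranger salary enhancement program added $455,000.
independence = absorbed_cost(220_000, 376_000, 455_000)
print(independence)  # 611000 -- the "over $610,000" the park absorbed

# Yellowstone, fiscal years 1993-1996: funding rose $2 million while
# mandatory salary and nonsalary components rose about $4 million.
yellowstone = absorbed_cost(2_000_000, 4_000_000)
print(yellowstone)  # 2000000 -- about $2 million absorbed over 3 years
```

Because roughly 70 to 85 percent of a park budget is salaries and benefits, an absorbed amount of this size generally cannot be covered by trimming travel, training, and supplies alone, which is why the parks turned to lapsed positions and service cutbacks.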
The officials also told us that reductions in resource protection patrols resulted in the accumulation of 50 to 100 tons of trash and litter that washed up on the Olympic coast during the winter months. Superintendents typically face numerous trade-offs in making spending decisions. For example, in 1996, Yellowstone faced several competing demands—several of which it was not able to fund. Providing the same levels of activities in 1996 as were provided in 1995 would have cost the park about $2 million more than it was provided. The additional costs were due to mandated increases for items such as employee background investigations, employee salaries and benefits, and increased water and sewage testing. To offset these increased costs, the park managers reduced spending for travel, training, and supplies; permitted several permanent and seasonal staff positions to lapse; and closed a campground and two nearby museums. Our past work has shown that such trade-offs occur frequently at many parks in the system. Although park managers need flexibility to effectively manage their park, accountability for the results achieved with the funds spent is also important. There is nothing inherently wrong with a decentralized management system or with delegating decisions about spending and operating priorities to park managers. However, the park managers we spoke with indicated that they rarely, if ever, discussed with regional or headquarters staff their park’s operating priorities or the results accomplished with the funds spent. Under these conditions, the current decentralized priority-setting and accountability systems lack a focus on the results that were achieved. Our prior work has shown that a good system of accountability would include elements such as (1) a process for establishing expectations for accomplishments, (2) a means of measuring progress against expectations, and (3) a means of holding managers responsible for achieving agreed-upon progress. 
Park Service officials told us that park superintendents set annual performance expectations with their regional director and are held accountable for meeting these expectations. However, park officials also told us these agreements generally focused on accomplishing tasks, such as completing a park’s general management plan, rather than on accomplishing measurable park goals, such as inventorying and evaluating the condition of cultural resources. Officials at the four parks we visited indicated that few, if any, reviews of or agreements on their annual operating priorities had taken place between regional or headquarters offices and the park. Officials at the four regional offices responsible for the four parks indicated that it was up to the parks to establish operating priorities and said that they did not get involved in setting park priorities. (These four regional offices are responsible for over 275 park units—or over three-fourths of the total number of parks.) Under this system, key components needed to hold superintendents accountable are missing. Without expectations about the goals that are to be achieved in the parks, a means for measuring progress toward these goals is not in place. As a result, the agency’s ability to determine or ensure that the desired results are achieved is diminished. The parks we visited had a variety of planning documents that described critical needs within each park. However, the documents did not establish expectations for addressing these needs or provide for measuring the progress achieved during the year. Furthermore, the needs described in the planning documents were generally not linked to the budget process or to currently available budgetary resources. As a result, critical issues that are expressed as priorities in planning documents may not be funded when spending decisions are made. 
In the current fiscal climate of tight budgets, it is particularly important for a decentralized agency like the Park Service to have a good system of accountability. If a park, regional office, and headquarters agree on expectations, goals, and results and measure the park’s progress against these expectations, then the agency will have a better system for holding park managers accountable for how park funds are spent. Furthermore, with such a system, the Park Service would be better able to understand and communicate what is being done and what is being accomplished with the agency’s operating funding on a year-to-year basis. Such a system of accountability would be consistent with the goals of GPRA. The Park Service has an opportunity to employ the basic tenets of GPRA to strengthen its system of accountability. GPRA is designed to hold federal agencies more accountable for their performance by requiring them to establish performance goals, measures, and reports that provide a system of accountability for results. It requires each federal agency to develop, no later than September 30, 1997, strategic plans that cover a period of at least 5 years. Beginning with fiscal year 1999, agencies are required to prepare annual performance plans with annual goals that are linked to the goals in the strategic plan. They must then measure their performance against the goals they have set and report publicly on how they are progressing against their expectations. The Park Service has prepared a draft strategic plan that covers the 5-year period from fiscal year 1998 through fiscal year 2002. Implementing GPRA involves three key steps: (1) setting expectations by developing strategic plans that define the mission, goals and desired outcomes for an agency; (2) measuring progress or performance against these expectations; and (3) using information on performance as a basis for deciding whether progress has been achieved. 
As strategic plans are developed, agencies are required to consult with the Congress and consider the views of other stakeholders. Accountability for results is especially important for an agency like the Park Service, which sets priorities and develops budgets at the park unit level. Under this decentralized management structure, individual park managers can make decisions about park operations that may or may not be consistent with the agency’s mission, priorities, or goals. By implementing GPRA, the Park Service can improve accountability because each unit of the national park system, each program, and the agency as a whole will be developing long-term and short-term plans laying out what is to be accomplished over prescribed periods of time. For example, according to Park Service officials, each of the 374 units in the national park system will be required to develop strategic and annual performance plans that state what each park is expected to accomplish. The performance of each park unit will then be measured against its annual expectations, and both the Congress and the agency can then use this information to assess that park’s progress towards meeting the established expectations. The performance of the agency’s programs and of the agency as a whole will also be assessed using this same kind of process. As this process is implemented, the agency’s priorities should become more clearly defined. By focusing on what is being accomplished and sharing this information with the Congress and other stakeholders, the Park Service can promote a better understanding of (1) the agency’s and each park’s priorities, (2) the links between the agency’s and each park’s priorities, (3) the results achieved with the funds provided, and (4) the shortfalls in performance. In short, greater accountability could be achieved because managers would be held more directly accountable for the results of their decisions. The Park Service is now in the process of implementing GPRA.
In October 1996, the agency issued the final draft of the National Park Service Strategic Plan. This plan includes the Park Service’s mission statement, overall goals, and 5-year goals expressed as measurable outcomes that link managers’ performance to such outcomes. Since then, the agency has developed and is now implementing a GPRA training program for its employees so that park-level staff can develop measurable goals that tie into the servicewide strategic plan and begin to measure their progress in achieving these goals. In the spring of 1997, the Park Service plans to issue the final version of its strategic plan, which will set forth its mission, long-term goals, and means of measuring progress towards achieving these goals. Furthermore, in September 1997, the individual parks are expected to establish the strategic and annual performance plans needed to implement the agency’s strategic plan. Successfully implementing GPRA can provide the Congress and the Park Service with a powerful vehicle for communicating and clarifying expectations about the agency’s mission and long-term goals. Therefore, consultations between the Congress and the Park Service on the agency’s strategic plan are critical. As we recently testified, successful consultations (1) include agency officials who have programmatic knowledge and authority to revise the plan, (2) occur after the parties have reached agreement on the depth and scope of the issues to be discussed, and (3) provide an iterative process for improving the strategic plan. Furthermore, because the Park Service is decentralized and provides broad discretion to park managers, it faces significant challenges in implementing a top-down accountability system such as that called for by GPRA. To fully integrate GPRA’s management approach, Park Service managers must begin to define in measurable terms how activities at their park contribute toward achieving the servicewide goals established in the Park Service’s strategic plan. 
In this regard, our prior work has shown that one of the key challenges facing the parks is the development of the baseline data that are needed to measure progress in achieving goals. Sustained congressional attention to federal agencies’ implementation of GPRA would underscore the importance that the Congress attaches to the success of this process. Both the Congress and all executive branch agencies have a large stake in making the legislation work. Successful implementation will provide the Congress and the Park Service with the management framework and much of the information needed to focus on what is being accomplished with the money provided to the agency, make the hard financial decisions dictated by the current fiscal environment, and improve the ability of the Park Service to deliver its services more effectively and efficiently. We attempted to determine the extent of the Park Service’s reductions in visitor services over the past 5 years. The extent of such reductions agencywide is unknown because the Park Service does not routinely track data on national trends in the level of visitor services or other activities provided in the parks. Our work showed that each of the four parks had reduced visitor services at various times over the past 5 years. Moreover, as we reported in 1995, reductions in visitor services have been occurring in many other parks since at least as far back as 1985. In 1995, we reported that there were significant cuts in visitor services at 11 of the 12 park units we reviewed. According to the Comptroller of the Park Service, the headquarters office does not routinely track cutbacks in visitor services because park superintendents are responsible for managing their park, including making decisions on visitor services, and therefore are in the best position to weigh the trade-offs in reducing operations at their park. 
Nevertheless, in 1996, in response to a congressional inquiry, the headquarters office attempted to obtain information from park units on such reductions and their effects on visitors and resources for fiscal year 1993 through fiscal year 1996. The Park Service’s records indicated that over 50 parks reported significant cuts in visitor services during fiscal year 1996. The Park Service attributed all of the identified cuts to funding shortages. Examples of cuts in visitor services include the elimination of lifeguard services at some park recreational areas, reduced operating hours or the closure of visitor centers, and the closure of some campgrounds. However, our review showed that some of the data obtained were not accurate and that another attempt by headquarters in January 1997 to obtain updated information on reductions in park operations for fiscal year 1997 produced incomplete results. In the absence of this kind of overall trend information on cutbacks in visitor services, we collected the information at each of the four parks we visited. Over the past 5 years, each of the four parks that we visited reduced visitor services. The extent of such reductions varied among the parks during fiscal years 1993 through 1997, although they were most extensive in fiscal year 1996. However, in considering the amount and scope of cuts in visitor services it is important to consider this information in the full context of overall park operations. Park managers made the cuts in visitor services as part of a broader effort to match park spending with available funds. In each of the four parks, the cutbacks in visitor services were a relatively small portion of the overall reductions in park operations. Most of the cutbacks occurred in areas such as park maintenance, resource management, and park administration. For example, as noted earlier in this report, in 1996 Yellowstone needed to absorb about $2 million in increased costs. 
Of this amount, $72,000, or about 3 percent, came from reductions in the operations of visitor service facilities, including a campground and two museums. The rest came from reductions in other park operations. Increased operating funds allowed the park to reopen these visitor service facilities in 1997. Although the proportions differ, similar scenarios played out at the other parks we visited. The following summarizes the cuts in visitor services imposed by the four parks during the most recent 5-year period: In 1996, Great Smoky Mountains National Park closed three campgrounds during the winter months and two smaller campgrounds for the whole year. In addition, the park closed one of its visitor centers and staffed two others with uniformed personnel for only 5 hours per day. In 1996, Yellowstone National Park cut several visitor services, closing a campground, visitor center, and two museums. In addition, the park did not fund 10 law enforcement positions and eliminated several guided hikes. Independence National Historical Park closed several historic buildings to visitors and reduced visiting hours at several other buildings for 3 of the 5 years reviewed. Olympic National Park made several cuts in visitor services during each of the past 5 years, including reducing visitor center hours, shortening campground seasons, not opening two entrance stations and backcountry trails, and providing fewer law enforcement patrols and interpretative programs. Appendix I provides more detailed information on the four parks’ cuts in visitor services. Overall, park managers have tried to minimize the impact of operational cutbacks on visitors. According to park managers and records at the four parks we reviewed, visitor services were generally the last areas to be cut. 
In all four parks, administrative costs for items such as training, travel, and supplies were reduced; maintenance was deferred; positions went unfilled; and other discretionary programs, such as resource management, were reduced before cuts were made in visitor services. Some park managers told us that the services that were cut were selected because their loss would affect the fewest visitors to the parks. For example, at Great Smoky Mountains, three major campgrounds were closed in 1996 during the winter months—a period of lower visitation. Also, other campgrounds were available both inside and outside the park. A visitation survey conducted at Great Smoky Mountains during the summer of 1996 showed that 90 percent of the visitors rated visitor services as good or very good. At Independence, historic buildings that normally received less visitation were closed or operating hours were reduced so that Independence Hall and the Liberty Bell—the two historic structures that received the most visitation—could operate for extended hours during the summer. At Olympic, park managers told us, the park deferred purchases of supplies and equipment, such as vehicles, radios, and computers, as well as employee training, before cutting visitor services. Similarly, Yellowstone cut supplies and equipment, travel, training, and other administrative activities before cutting visitor services. Spending on operations by the Park Service has increased in real terms by about 30 percent since 1985. This increase is comparable to the increases for the Fish and Wildlife Service (28 percent) and for federal domestic discretionary spending as a whole (27 percent) but is higher than those for the other federal land management agencies we examined. For example, operations spending by the Bureau of Land Management and the Corps of Engineers increased by 5 and 3 percent, respectively. In contrast, real spending for the Forest Service’s operations has decreased by 24 percent since 1985. 
Table I shows the changes in spending for these agencies’ operations from fiscal year 1985 through fiscal year 1997 (estimated). The increase in Park Service spending reflects, in part, an increase in the agency’s responsibilities. From 1985 through 1996, the number of park units increased from 339 to 374. In addition, the boundaries of some existing parks expanded, so that the total area managed increased from 79 million to 83 million acres. Other additions to the Park Service’s operating responsibilities include an increase in visitation, from an estimated 216 million to 266 million visitors per year, plus requirements for protecting newly designated endangered species and for complying with new regulatory mandates, such as the Americans with Disabilities Act of 1990, the Clean Air Act Amendments of 1990, and more stringent water quality standards. During the same period, from 1985 through 1996, the responsibilities of the other federal land management agencies we reviewed also grew (see app. II). The number of wildlife protection units managed by the Fish and Wildlife Service increased from 582 to 702, and the area managed by the agency increased from 90 million to 92 million acres. In addition, the number of visitors to Fish and Wildlife units increased from 24 million to 29 million. At the Bureau of Land Management, while the number of acres managed decreased from 337 million to 264 million, the estimated number of visitors increased from about 52 million to 59 million. The number of acres managed by the Corps of Engineers changed little. However, the number of visitors to the Corps’ recreational sites increased from 172 million to 212 million. The acreage managed by the Forest Service grew little, and the number of units managed by the Forest Service declined slightly. However, the estimated number of visitors increased dramatically, from 541 million in 1985 to 830 million in 1996.
We gathered this information to provide a gross indication of whether other federal land management agencies were growing as much as the Park Service. Accordingly, caution must be used in interpreting the data on visitation and acreage and in making comparisons across agencies. One official we spoke with suggested that visitation data from the 1980s tended to be inflated and counting techniques varied greatly across agencies and units within agencies. Also, the influence of visitation and acreage on operating costs may vary greatly from agency to agency and from unit to unit within an agency, depending on how the public land is used and what types of facilities are in place. Balancing the need to protect and preserve park resources for future generations while at the same time meeting the needs of hundreds of millions of park visitors is, at best, a difficult task. Achieving this balance is made even more difficult by the tight fiscal climate now facing the Park Service and other federal agencies. Managing the national park system under these circumstances requires making choices among competing operating priorities. Within the Park Service, these choices are delegated to the individual park managers and typically involve trade-offs in funding resource management activities, visitor services, or park maintenance. In a decentralized organization that gives managers a great deal of decision-making authority, having a system in place to hold them accountable for the results of their decisions is critical. However, today, the Park Service lacks a system that holds park managers accountable for the results of their decisions. Under GPRA, the Park Service has begun to establish servicewide goals for the park system. The next task will be for the Park Service to begin measuring the individual parks’ progress in achieving these goals. 
Implementing GPRA can both assist the Congress and the Park Service in reaching agreement on goals and expectations for the agency and help hold the individual parks accountable for achieving their goals. The transition to results-oriented management in the Park Service will be neither easy nor quick. But GPRA’s implementation has the potential for improving the agency’s performance—a particularly vital goal when resources are limited and public demands are high. We provided a draft of this report to the Park Service for review and comment. We met with Park Service officials—including the Associate Director for Operations and the Comptroller. The agency generally agreed with the conclusions and the principal findings of the report and provided several clarifying comments that we incorporated where appropriate. To respond to your request and agreements reached with your offices, we met with officials from the Park Service’s headquarters office and from Great Smoky Mountains National Park, Independence National Historical Park, Olympic National Park, and Yellowstone National Park. We also obtained and reviewed pertinent documentation from these officials. We conducted our review from January through March 1997 in accordance with generally accepted government auditing standards. Appendix III provides a more detailed discussion of our objectives, scope, and methodology. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees and Members of Congress; the Secretary of the Interior; the Director of the National Park Service; and other interested parties. We will make copies available to others upon request. Please call me at (202) 512-3841 if you have any questions on matters discussed in this report. Major contributors to this report are listed in appendix IV. 
[Appendix I table excerpt: Norris Geyser Basin campground closed (116 sites); law enforcement patrols and interpretative programs reduced.] Table notes: Some park-years showed no significant cuts in visitor services. However, during this 5-year period, all four parks reduced other personnel costs, cutting seasonal employees, furloughing permanent employees or cutting temporary employees, and not filling vacant positions. These personnel cuts could affect visitor services. Cutbacks for fiscal year 1997 are as of March 1997. Some data were not available. The objectives of our review were to (1) describe the process used by the Park Service to develop budgets and establish operating priorities; (2) determine the limitations, if any, of the agency’s priority-setting processes at a sample of parks; (3) determine what, if any, implications the Government Performance and Results Act (GPRA) has for the Park Service; (4) provide information on trends in cutbacks of visitor services at the parks; and (5) compare funding levels for park operations with those for other federal land management agency operations. To determine the process used by the Park Service to establish operational priorities and any limitations of the process at a sample of parks, we interviewed Park Service officials at headquarters, at the four regional offices that oversee the parks included in our sample, and at the four parks in our sample. We also discussed with Park Service officials how park priorities are used in developing budget requests and allocating appropriated funds. We reviewed Park Service headquarters and regional office directives, guidance, and practices for identifying operational priorities; Park Service budget documents; and park planning documents. We visited four parks: Great Smoky Mountains National Park, Independence National Historical Park, Olympic National Park, and Yellowstone National Park.
As agreed with your offices, we selected these four parks because they (1) include large natural and historical parks, (2) are located in different regions of the country, and (3) reported several cutbacks in visitor services. We also limited our review to four parks so that we could respond to your need for information by early April 1997. Although we cannot generalize the results of our work to all 374 park units, the parks selected are among the most visible and notable in the national park system. Hence, the information collected should provide a meaningful indication of how the park system establishes operational priorities. To respond to the third objective, we reviewed GAO documents on implementing GPRA and interviewed officials at Park Service headquarters, the four regional offices, and the four parks to discuss how these parks’ processes for establishing operational priorities relate to GPRA’s requirements and to obtain information on the status of the Park Service’s implementation of GPRA. We did not specifically review the Park Service’s processes for implementing GPRA. To obtain information on trends in cutbacks in visitor services, we held discussions with officials from Park Service headquarters, the four regional offices, and the four parks included in our review and obtained documentation related to this issue. As agreed with your offices, we requested trend information for the past 5 years. Also, because servicewide trend information was not available from Park Service headquarters, we collected data on cutbacks in visitor services from the four parks we visited. With respect to the last objective, we interviewed officials and obtained budget trend data from the Park Service (NPS), the U.S. Fish and Wildlife Service (FWS), the Bureau of Land Management (BLM), the U.S. Army Corps of Engineers (COE), and the Forest Service (FS). As agreed with your offices, we obtained budget data for fiscal years 1985 through 1997. 
The budget data consisted of gross obligations for the operations and maintenance accounts of each agency. We adjusted the obligations data for inflation by using the Gross Domestic Product implicit price deflator developed by the Department of Commerce’s Bureau of Economic Analysis. We then compared the inflation-adjusted change in the Park Service’s obligations over this period with the similarly adjusted changes in the obligations of the other federal land management agencies. We also obtained information that would provide an indication of the growth in the numbers of public visits, acres, and units managed by these agencies. Brent Hutchison and Paul Staley contributed to this report. 
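The inflation adjustment described above can be sketched in a few lines. This is an illustrative calculation only: the deflator values and obligation amounts below are hypothetical, not the actual BEA deflator series or agency figures.

```python
# Illustrative only: deflator values and obligations are hypothetical,
# not the actual BEA deflator series or Park Service figures.

def to_real_dollars(nominal, deflator, base_deflator):
    """Convert nominal obligations to constant (base-year) dollars."""
    return nominal * (base_deflator / deflator)

def real_pct_change(nominal_start, nominal_end, defl_start, defl_end):
    """Percent change after expressing both endpoints in end-year dollars."""
    real_start = to_real_dollars(nominal_start, defl_start, defl_end)
    return (nominal_end - real_start) / real_start * 100

# Example: nominal obligations grow from $1,000M to $1,600M while the
# deflator rises from 80 to 100 (a 25 percent price increase), leaving
# real growth well below the nominal 60 percent.
change = real_pct_change(1000.0, 1600.0, 80.0, 100.0)
```

Comparing agencies on this inflation-adjusted basis, as the review did, removes price-level growth that would otherwise inflate every agency's apparent budget increase.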
| Pursuant to a congressional request, GAO reviewed: (1) the process used by the National Park Service to develop budgets and establish operating priorities; (2) the limitations, if any, of the agency's priority-setting processes at a sample of parks; (3) what, if any, implications the Government Performance and Results Act (GPRA) has for the Park Service; (4) information on trends in cutbacks of visitor services at the parks; and (5) the funding levels for park operations compared with those for other federal land management agency operations. GAO noted that: (1) while headquarters plays a key role in formulating requests for increases to the Park Service's operating budget, decisions about spending and operating priorities associated with park operating funds are delegated to the individual park managers; as a result, the individual park managers have broad discretion in deciding how to spend park operating funds; these decisions have been difficult because, while park budgets have been rising, the costs of operating the parks have also been rising in response to factors such as required pay and benefit increases; as a result, spending decisions made by park managers frequently involve trade-offs among competing demands within the parks for activities such as resource management, visitor services, or maintenance; (2) the most significant limitation associated with the Park Service's decentralized priority-setting and accountability systems is that they lack a focus on the results achieved with the funds spent; according to the park managers GAO spoke with, regional or headquarters staff rarely, if ever, discussed with them operating priorities or the results accomplished with the funds provided; key components needed to hold park managers accountable are missing; no expectations have been established for the goals that are to be achieved in the parks, and there is no process for measuring progress toward these goals; (3) GPRA offers the Park Service an 
opportunity to improve its system of accountability; the Park Service is currently implementing GPRA and plans on issuing its strategic plan, which will extend through fiscal year 2002, in the spring of 1997; (4) information is not available from the Park Service to determine agencywide trends in cutbacks of visitor services; each of the four parks that GAO visited has reduced its visitor services to some degree over the past 5 years; however, it is important to note that the cuts in visitor services were relatively small compared with the reductions in other park activities, such as maintenance and administration; and (5) spending on operations by the Park Service has increased in real terms by about 30 percent since 1985; similarly, the operating budget of the U.S. Fish and Wildlife Service has grown by about 28 percent over the same period; the Bureau of Land Management's and the Army Corps of Engineers' operating budgets have increased by 5 percent and 3 percent, respectively, while the Forest Service's operating budget has decreased by 24 percent. |
The National Defense Authorization Act for Fiscal Year 2008 (NDAA) directed DOD to establish centers of excellence for traumatic brain injury (TBI) and post-traumatic stress disorder. Although the NDAA described responsibilities for the centers, it did not specify where the centers should be located within the DOD organization. Instead, it directed the Secretary of Defense to ensure that, to the maximum extent practicable, the centers collaborate with governmental, private, and nonprofit entities. Senior-level DOD officials convened representatives from the Army, Navy, Air Force, Marines, and Department of Veterans Affairs to determine how to establish the centers. Informally, this group was known as the “Red Cell,” and its primary mission was to address recommendations related to psychological health (PH) and TBI. Rather than establishing separate centers of excellence for traumatic brain injury and post-traumatic stress disorder, a combined center for both PH and TBI was created. According to one representative, the Red Cell also debated how funding would be divided between PH and TBI and across the military services. The military services, the TRICARE Management Activity (TMA), and the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (DCOE) receive PH and TBI funding through the Defense Health Program (DHP) appropriation account. Organizationally, the services are led by Secretaries who have a direct relationship with the Secretary of Defense. As shown in figure 1, DCOE reports directly to the Assistant Secretary of Defense for Health Affairs/Director of TMA within the Office of the Secretary of Defense. DCOE consists of a central office and six directorates. The central office conducts multiple functions, such as leadership and resource management, and is responsible for DCOE’s budget formulation process. The six directorates carry out a range of activities related to PH and TBI, including operating a call center, disseminating information on DOD training programs, developing clinical practice guidelines related to PH and TBI, and identifying PH and TBI research needs. 
The DCOE network also includes five component centers that provide an established body of knowledge and experience related to PH and TBI. The component centers are the Defense and Veterans Brain Injury Center (DVBIC), Deployment Health Clinical Center (DHCC), Center for the Study of Traumatic Stress (CSTS), Center for Deployment Psychology (CDP), and the National Center for Telehealth and Technology (T2). Over time, PH and TBI funding evolved from DHP amounts directed specifically for PH and TBI to funding support being incorporated into the broader DHP appropriation. In fiscal year 2007, Congress appropriated approximately $600 million specifically for TBI and post-traumatic stress disorder treatment. In fiscal year 2008, Congress specifically appropriated $75 million for PH and TBI activities. In fiscal year 2009, funding for PH and TBI was not appropriated in a specific amount; rather, funding was drawn from DHP’s general operation and maintenance funds—DOD had discretion over the amount and distribution of funds internally allotted. Beginning in fiscal year 2010, PH and TBI funding was included in the base budget request for the DHP, which established a longer-term funding stream for PH and TBI. As shown in figure 2, in fiscal year 2010 a total of $638 million in DHP operations and maintenance funding was allotted for PH and TBI across the military services, TMA Financial Operations Division (TMA FOD), and DCOE. The Army received the largest portion of funds, about $279 million or 44 percent, while DCOE received approximately $168 million or 26 percent. Of all PH and TBI funding allotted, $96 million or 15 percent was suballotted to component centers within the DCOE network. Budget formulation for DOD occurs as part of the Planning, Programming, Budgeting and Execution Process, which projects near-term defense spending. 
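The fiscal year 2010 allotment shares cited above can be cross-checked with simple arithmetic. The sketch below uses only the dollar figures stated in this report (in millions), rounding shares to whole percentages as the report does; the remainder category is derived here and is not itemized in this passage.

```python
# Figures in millions of dollars, from the report; the remainder is derived.
TOTAL = 638
allotments = {
    "Army": 279,
    "DCOE": 168,
    "Other services and TMA FOD": 638 - 279 - 168,  # derived remainder
}
shares = {name: round(amount / TOTAL * 100) for name, amount in allotments.items()}
suballotted_share = round(96 / TOTAL * 100)  # component centers within DCOE
```

The rounded shares reproduce the report's 44 percent (Army), 26 percent (DCOE), and 15 percent (suballotted to component centers).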
The system is intended to provide defense decision makers with the data they need to make trade-offs among potential alternatives; thus resulting in the best possible mix of forces, equipment, and support to accomplish DOD’s mission. Specifically, DOD budget formulation occurs in the programming phase of the Planning, Programming, Budgeting and Execution Process, and begins with the development of a program objective memorandum (POM). The POM reflects decisions about resource allocations and proposed budget estimates and is used to inform the development of the President’s Budget and DOD Congressional Justifications. Because DCOE is only one, relatively small entity receiving funds through the broader DHP appropriation, it is not visible in DOD budget presentation materials. The POM covers six fiscal years and is developed in even fiscal years, for example fiscal year 2008 and fiscal year 2010. DOD develops the POM approximately 18 months in advance of the first fiscal year the POM covers. DCOE had a limited role in budget decision making for the fiscal year 2012 POM process. Ultimately, senior DOD officials, including the Health Affairs Deputy Assistant Secretaries of Defense, decided to fund 1 of 18 PH and TBI requests, which did not include DCOE’s. For this POM, DCOE headquarters solicited and received budget requests from component centers. Ultimately, DCOE accepted and incorporated all component center requests into its budget request. However, in some instances DCOE officials said they requested additional justification from component centers. PH and TBI budget requests from across DOD, including DCOE, were collected for consideration in the fiscal year 2012 POM. A working group of PH and TBI subject matter experts within DOD reviewed and prioritized requests for funding above the fiscal year 2010 base budget from across the department. 
According to a DCOE official, DCOE’s interests were represented by TMA officials who contributed to the prioritization of these requests; however, the final decisions were not formally communicated to DCOE. DCOE had a limited role in budget formulation for the fiscal year 2010 POM because it was still in its first year of operation. According to a senior DOD official, no limits were imposed on PH and TBI budget requests and no trade-off decisions were made. Nevertheless, this year was significant because it was the first year that DCOE’s budget was considered in the DHP baseline budget request. According to DCOE officials, because DCOE had only recently been established, it had limited staff. In addition, component centers were still being realigned under DCOE and both the relationship between component centers and DCOE and the missions of two component centers, T2 and the National Intrepid Center of Excellence (NICOE), were unclear. For the fiscal year 2008 POM process, the newly established DCOE had no role in budget formulation. Instead, the Red Cell convened to determine how the centers of excellence would be implemented and provided recommendations on DCOE’s original budget, which the Senior Oversight Committee approved. Because the POM process occurred on a biennial basis in even fiscal years, DOD did not have a budget formulation process in fiscal years 2009 and 2011. For the fiscal year 2012 POM, DCOE provided limited narrative support for its budget justification. TMA requested that DCOE complete and submit a spreadsheet template with cost estimates and narrative for resource requests above the prior-year baseline. The narrative portion asked for four elements: (1) background, (2) requirements summary, (3) impact to other programs, and (4) the risk if not funded. DCOE and its component centers did not provide this template in a complete manner. Not all of the requested narrative elements were provided. 
For example, the impact to other programs was not discussed for half the requests DCOE submitted. In addition, the DCOE headquarters request was calculated with a 3.5 percent inflation factor versus the 1.7 percent prescribed in POM guidance, but DCOE did not explain why it needed to use a higher inflation rate. Two years earlier, for the 2010 POM, DCOE provided no narrative support for its budget justification. TMA requested that DCOE provide completed spreadsheets that did not include a narrative component. For this POM, DCOE differentiated the amounts it requested by PH or TBI strategic initiatives and by commodity, but did not provide narrative justifications for these amounts. Guidance contained in OMB Circular A-11 specifies that the basic requirements for a justification include a description of the means and strategies used to achieve performance goals. Means can include human resources, information technology, and operational processes. Strategies may include program, policy, management, regulatory, and legislative initiatives and approaches and should be consistent with the agency’s improvement plans. According to OMB, a thorough description of the means and strategies to be used will promote understanding of what is needed to achieve a certain performance level and increase the likelihood that the goal will be achieved. To develop a comprehensive departmentwide budget submission to OMB, a thorough description of means and strategies in justifications is needed at all levels within an agency. DCOE already collects information that could improve its budget justifications. DCOE requests that both directorates and component centers prepare “fact sheets,” which contain detailed information including mission, activities, relevant legislation, staffing, performance metrics, and resource requirements. Information like that in the fact sheets provides an expanded discussion of performance information. 
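As a sketch of how the justification problems described above might be caught mechanically, the following hypothetical completeness check flags missing narrative elements and an inflation factor above the prescribed 1.7 percent. The field names and structure are assumptions for illustration; the report describes no such system.

```python
# Hypothetical completeness check for a POM budget request template.
# The four narrative elements and the 1.7/3.5 percent rates come from
# the report; everything else here is an illustrative assumption.
REQUIRED_ELEMENTS = ("background", "requirements summary",
                     "impact to other programs", "risk if not funded")
PRESCRIBED_INFLATION = 0.017  # 1.7 percent, per POM guidance

def review_request(request):
    """Return a list of problems found in one budget request."""
    problems = ["missing narrative element: " + element
                for element in REQUIRED_ELEMENTS if not request.get(element)]
    if request.get("inflation_factor", PRESCRIBED_INFLATION) > PRESCRIBED_INFLATION:
        problems.append("inflation factor exceeds prescribed rate")
    return problems

# Modeled loosely on the findings above: two narrative elements missing
# and a 3.5 percent inflation factor used instead of 1.7 percent.
example = {"background": "...", "requirements summary": "...",
           "inflation_factor": 0.035}
issues = review_request(example)
```

A check of this kind would surface, before submission, the same gaps GAO found after the fact.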
DCOE and TMA could leverage this existing information to improve budget justifications and resulting decisions. Decision making for DCOE’s budget formulation could be facilitated by key information, such as funding and obligations data, additional non-DCOE funding received by its component centers, and performance information resulting from internal reviews. This information could also help DCOE justify and prioritize its budget requests. However, DOD required more than 3 months to query numerous sources and provide us with prior-year data on funding and obligations for DCOE and its component centers. The absence of readily available, comprehensive historical funding and obligations data indicates that TMA and DCOE did not have the benefit of these data to inform budget formulation. Furthermore, DCOE and TMA FOD do not have access to systems that track funds authorized for execution on behalf of the DCOE component centers because component center budget execution is conducted at multiple sites that maintain separate financial systems. According to TMA and DCOE officials, DCOE has limited responsibility for budget execution activities. TMA FOD and DCOE must request and compile obligations data for funds administered by budget execution sites. For example, as shown in figure 3, once DCOE requests that TMA FOD authorize funding for T2, the funds are provided to T2’s host entity, Madigan Army Medical Center. At this point, TMA and DCOE can no longer monitor the execution of T2’s funds through TMA’s financial reporting systems and must request that information. TMA FOD’s financial system contains data on spending it administers for DCOE headquarters and component centers. DCOE and TMA should use comprehensive historical funding and obligations data to inform budget formulation and justify requests. OMB Circular A-11 directs agencies to present prior-year resource requirements in budget justification materials. 
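Because component center budget execution is spread across sites with separate financial systems, compiling obligations data is a manual consolidation exercise. The sketch below illustrates that consolidation in miniature; the dollar figures are hypothetical, and only the site and center names come from the report.

```python
# Hypothetical figures; only the site and center names appear in the
# report, and no such consolidated system is described there.
site_reports = [
    {"site": "TMA FOD", "center": "DCOE headquarters", "obligations": 150.0},
    {"site": "Madigan Army Medical Center", "center": "T2", "obligations": 40.0},
]

def obligations_by_center(reports):
    """Roll up obligations reported by separate budget execution sites."""
    totals = {}
    for report in reports:
        totals[report["center"]] = (
            totals.get(report["center"], 0.0) + report["obligations"])
    return totals

totals = obligations_by_center(site_reports)
```

A routine roll-up of this kind is the sort of readily available historical data the report argues should inform budget formulation.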
Prior to our review, DCOE did not collect information on the sources and amounts of funds component centers received in addition to allotments from DCOE, and therefore did not have the benefit of these data to help inform budget decision making. In some cases, component centers receive significant amounts of non-DCOE funding. For example, Deployment Health Clinical Center received about $8.3 million in funding from DCOE in fiscal year 2010, while it was awarded about $3.3 million from external sources. Standards for internal control in the federal government state that information should be recorded and communicated to management and others within the entity who need it. Without information on non-DCOE funding, when DCOE and TMA make trade-off decisions, they cannot consider all the resources available to component centers. While DCOE has begun collecting information on component centers’ non-DCOE funding, it has not had an opportunity to use those data to inform budget formulation and requests because the fiscal year 2012 POM process already occurred. Additionally, DCOE could obtain more performance information to better prioritize and justify its budget requests. In the middle of fiscal year 2010, DCOE began to hold quarterly meetings to evaluate directorates’ performance and reallocate resources used for DCOE’s daily activities. However, component centers are not included in this process. A DCOE official said component centers are excluded because DOD is reviewing the governance structure of all DOD centers of excellence, and this could affect the organizational structure of DCOE. But if DCOE included the component centers in this process, it could collect information that links component center performance with resources and enhance future budget decision making. DCOE’s mission has not been clearly defined to Congress. For example, in one hearing of the House Committee on Armed Services, Members expressed differing visions of DCOE’s mission. 
One Member expressed frustration that DCOE had not become an “information clearinghouse” and the “preeminent catalogue of what research has been done,” as had been envisioned. A second Member described his vision of DCOE being an overarching body that “coordinates, inspects, and oversees the tremendous amount of good work being done across the nation.” Members also voiced concern about the amount of time needed to establish DCOE and achieve results. In four congressional subcommittee testimonies, DCOE’s first director and the Assistant Secretary of Defense for Health Affairs characterized DCOE as DOD’s “open front door for all concerns related to PH and TBI.” These statements suggest a divergent understanding of DCOE’s role and underscore the importance of clear communication on DCOE’s mission, funding, and activities. Because DCOE is a relatively small entity, it falls below the most detailed level presented in DOD DHP budget materials—the Budget Activity Group level—and therefore does not typically appear in them. DCOE has only appeared in DOD’s budget presentation materials for fiscal year 2010, when PH and TBI funding was first included in the DHP base budget request. In the request, DOD did not specify that DCOE’s individual budget request for 2010 was only about $168 million of the $800 million requested. Specifically, the request stated “$0.8B to fund operations of the Defense Center of Excellence (DCoE) for Psychological Health and Traumatic Brain Injury, and to ensure that critical wartime medical and health professionals are available to provide needed mental health services by improving hiring and retention bonuses and offering targeted special pay.” DOD provides supplemental reporting on PH and TBI expenditures through reports mandated in the National Defense Authorization Act for Fiscal Year 2008, as well as ad hoc reports at Congress’s request. 
While these reports present activities and accomplishments by strategic initiative, DOD is not required to separately report on DCOE in its annual reports. Thus, while PH and TBI information is reported to congressional decision makers, DCOE-specific funding and activities are not visible. The Government Performance and Results Act (GPRA) Modernization Act of 2010 further requires agencies to consult with the congressional committees that receive their plans and reports to determine whether they are useful to the committee. Table 1 summarizes selected mandated and ad hoc reports DOD provided to Congress. DCOE faces numerous challenges, such as recruiting staff and shaping relationships with its component centers and military services. Nonetheless, DCOE could take additional steps to make better-informed budget decisions and justify resource requests. DCOE lacks key information, such as comprehensive funding and obligations data for component centers, and does not make full use of performance data. Better leveraging of such information could enhance DCOE’s ability to influence component centers’ progress toward achieving positive outcomes for wounded service members. For DCOE to achieve its mission and goals it must have access to and consider information needed to prioritize its activities and communicate its role to stakeholders. As DOD reviews the governance structure of its centers of excellence, such as DCOE, it has an opportunity to ensure that these centers have the tools needed to promote success. To enhance visibility and improve accountability, we recommend that the Secretary of Defense direct the Director of TMA to work with the Director of DCOE on the following three actions: 1. develop and use additional narrative, such as that available in component center fact sheets, in budget justifications to explain the means and strategies that support the request. 2. 
establish a process to regularly collect and review data on component centers’ funding and obligations, including funding external to DCOE. 3. expand its review and analysis process to include component centers. We provided a draft of this report to the Secretary of Defense for official review and comment. The Assistant Secretary of Defense for Health Affairs and Director of TRICARE Management Activity provided us with written comments, which are summarized below and reprinted in appendix III. DOD also provided technical comments that were incorporated into the report as appropriate. DOD concurred with all of our recommendations. Specifically, DOD concurred with our recommendation that the Director of TRICARE Management Activity (TMA) work with the Director of the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (DCOE) to develop and use additional narrative, such as that available in component centers’ fact sheets, in budget justifications. DOD also concurred with our recommendation to establish a process to regularly collect and review data on component centers’ funding and obligations, including funding external to DCOE. However, DOD stated that one limitation in executing this recommendation is ensuring that entities external to TMA comply with the request to regularly report funding and obligations data. We agree that this limitation presents challenges for DCOE’s and TMA’s oversight of obligations and funding data. However, a complete understanding of this information is important to fully review the resources that affect DCOE’s operations. DOD stated that DCOE is appropriately informed of budget execution data through formal systems, as well as informal coordination and managerial reporting. 
In addition, TMA stated that it executes a majority of the total operations and maintenance funding that DCOE and its component centers receive and that TMA, DCOE, and the Services have instituted numerous internal controls to monitor planned and actual expenditures. Despite the level of oversight described, DOD was not readily able to provide us with disaggregated information on DCOE’s funding and obligations. Although TMA does execute and oversee the majority of operations and maintenance funding for DCOE and its component centers, additional funding remains outside of its oversight, including approximately 18 percent of operations and maintenance funding. The data provided for fiscal year 2010 remain incomplete, and the information provided has not been sufficient to confirm their accuracy or reliability. Furthermore, DOD was unable to describe the process used to identify and resolve errors in source data from multiple financial systems, and TMA stated that it could not confirm the accuracy of data from financial systems it does not administer. This raises questions about DCOE and TMA’s oversight and use of these data to inform budget formulation. Lastly, DOD agreed with the recommendation to expand its review and analysis process to include component centers, but stated that it did not plan to include two component centers, the Center for the Study of Traumatic Stress and the Center for Deployment Psychology, which are in the process of formally aligning under the Uniformed Services University of the Health Sciences. We are sending copies of this report to the Secretary of Defense and appropriate congressional committees. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Denise M. Fantone at (202) 512-6806 or fantoned@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. We reviewed the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (DCOE) budget formulation for fiscal years 2008 through 2012. To understand DCOE’s budget formulation process and the data used to inform budget requests, we reviewed documentation relevant to its budget formulation process and interviewed knowledgeable Department of Defense (DOD) officials. To understand DCOE’s structure, history, and funding, we gathered and analyzed information on the creation and organization of DCOE, such as the report on the outcomes of the Red Cell, and memorandums of agreement between DCOE and component centers. We also reviewed the legislative history of DCOE, DOD appropriations acts from fiscal years 2007, 2008, 2009, and 2010, and accompanying committee reports. Initially, we sought to obtain funding and obligations data from fiscal years 2007 through 2011; however, DOD was unable to provide these data in a timely manner, and ultimately provided data that we determined were not sufficiently reliable for presenting funding and obligations figures. As a result, we reduced the scope of our data request to include only fiscal year 2010. Through interviews and responses to written questions, DOD provided additional information about the process used to generate and validate these data. However, as of May 5, 2011, the data provided for fiscal year 2010 remain incomplete, and the information provided has not been sufficient to confirm the accuracy or reliability of all detailed funding and obligations data. Because such data are necessary to fully understand the budget process for psychological health (PH) and traumatic brain injury (TBI), we decided to present these data, but to note that we have not confirmed their accuracy. 
We reviewed DCOE’s mission, strategic goals, and performance measures. Also, we reviewed budget request and justification documents for DCOE and its component centers for fiscal years 2010 and 2012, and documents that support the development of budget requests, such as component center fact sheets. To understand how DCOE participates in DOD budget formulation processes, we reviewed DOD budget formulation guidance, including TRICARE Management Activity (TMA) and Program Objective Memorandum (POM) guidance for fiscal years 2010 and 2012 that specifically affects DCOE. The Defense Health Program appropriation includes three accounts: Operations and Maintenance; Procurement; and Research, Development, Test and Evaluation (RDT&E). We focused our review on the budget formulation process for Operations and Maintenance funding because DCOE and DCOE component centers do not receive any baseline funding for Procurement and RDT&E, which are obtained through separate budget processes. We interviewed officials at Health Affairs, Force Health Protection and Readiness, TMA, the Uniformed Services University of the Health Sciences (USUHS), DCOE, and DCOE’s component centers about the budget formulation process, and the information used in budget decision making. To determine what information is available to Congress on DCOE’s funding and activities, we reviewed the President’s budget requests and DOD’s justification documents for fiscal years 2010, 2011, and 2012. In addition, we reviewed reports mandated by the 2008 National Defense Authorization Act on PH conditions and TBI, and reports requested by the Senate Appropriations Committee on PH and TBI expenditures. To identify congressional direction on information requirements, we reviewed DOD appropriations acts from fiscal years 2007, 2008, 2009, and 2010, accompanying committee reports, and congressional hearing records. 
We conducted this performance audit from June 2010 through June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Six directorates carry out a range of activities related to psychological health (PH) conditions and traumatic brain injury (TBI). Strategic Communications Directorate—To strategically inform and disseminate to multiple audiences and stakeholders, providing relevant and timely information, tools, and resources for warriors, families, leaders, clinicians, and the community that empower them, support them, and strengthen their resilience, recovery, and reintegration. Psychological Health Clinical Standards of Care Directorate—To promote optimal clinical practice standards to maximize the psychological health of warriors and their families. Research Directorate—To improve PH and TBI outcomes through research; quality programs and evaluation; and surveillance for our service members and their families. Resilience and Prevention Directorate—To assist the military services and DOD to optimize resilience, psychological health, and readiness for service members, leaders, units, families, support personnel, and communities. Education Directorate—To assess training and educational needs in order to identify and promote effective instructional material for stakeholders, resulting in improved knowledge and practice of PH and TBI care. 
Traumatic Brain Injury Clinical Standards of Care Directorate—To develop state of the science clinical standards to maximize recovery and functioning and to provide guidance and support in the implementation of clinical tools for the benefit of all those who sustain traumatic brain injuries in the service of our country. The Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (DCOE) network also includes five component centers that provide an established body of knowledge and experience related to PH and TBI. Component centers include: Defense and Veterans Brain Injury Center (DVBIC)—With a focus on TBI, DVBIC was created as a collaboration between DOD and the Department of Veterans Affairs that serves military personnel, veterans, and their families by providing clinical care, conducting research, and providing education and training to DOD providers. Deployment Health Clinical Center (DHCC)—Focused on deployment-related health concerns, including PH, DHCC serves military personnel, veterans, and their families by providing outpatient care, conducting research, leading the implementation of a primary care screening program for post-traumatic stress disorder and depression, and providing information to military health system providers. Center for the Study of Traumatic Stress (CSTS)—By addressing a wide scope of trauma exposure that includes the psychiatric consequences of war, deployment, disaster, and terrorism, CSTS serves DOD, and collaborates with federal, state, and private organizations. Activities include conducting research, providing education and training to military health system providers, and providing consultation to government and other agencies on preparedness and response to traumatic events. 
Center for Deployment Psychology (CDP)—Covering both PH and TBI, CDP trains military and civilian psychologists and other mental health professionals to provide high-quality deployment-related behavioral health services to military personnel and their families.

National Center for Telehealth and Technology (T2)—Addressing both PH and TBI, T2 serves military personnel, veterans, and their families by acting as the central coordinating agency for DOD research, development, and implementation of technologies for providing enhanced diagnostic, treatment, and rehabilitative services.

The following GAO comments on the Department of Defense's letter dated June 3, 2011, supplement those that appear in the text of the report. 1. While DOD stated that DCOE is appropriately informed of budget execution data through formal systems, as well as informal coordination and managerial reporting, DOD was not readily able to provide us with basic information on funding and obligations. Furthermore, the data provided for fiscal year 2010 remain incomplete, and the information provided has not been sufficient to confirm its accuracy and reliability. This raises questions about DCOE's and TMA's oversight and use of these data to inform budget formulation. Accurate and reliable status of funding data should be used as the starting point to inform, justify, and prioritize future budget requests. Although DOD stated that funding data provided to us on February 15, 2011, should be reported on, we continue to believe that these data do not reflect specific psychological health and traumatic brain injury funding that DCOE provided to component centers. Service-level data provided on that date were not subsequently revised. However, data for DCOE and its component centers were revised multiple times after we received the initial data on February 15, 2011.
We continued to work with DCOE and TMA to address inconsistencies, incorporate new data, and establish a common understanding of budget terminology, such as allotments and obligations. Moreover, DOD provided numerous revisions to the data after February 15, 2011, and continued to do so even in comments on the draft of this report. While DOD believes that the data provided are reliable, DOD was unable to describe the process used to identify and resolve errors in source data from multiple financial systems, and TMA stated that it could not confirm the accuracy of data from financial systems it does not administer. In addition to the individual listed above, Carol M. Henn, Assistant Director; Erinn L. Sauer; Michael Aksman; Alexandra Edwards; Robert Gebhart; Jyoti Gupta; Chelsa Gurkin; Felicia Lopez; and Steven Putansu made major contributions to this report. | The National Defense Authorization Act for Fiscal Year 2008 established the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury (DCOE) in January 2008 to develop excellence in prevention, outreach, and care for service members with psychological health (PH) conditions and traumatic brain injury (TBI). DCOE consists of six directorates and five component centers that carry out a range of PH- and TBI-related functions. GAO was asked to report on (1) DCOE's budget formulation process and (2) the availability of information to Congress on DCOE. GAO reviewed budget guidance, budget requests, and performance data. GAO reviewed Department of Defense (DOD) reports submitted to Congress on PH and TBI and interviewed DOD officials. DCOE's role in the DOD budget formulation process is limited. For fiscal year 2012, DCOE's role in budget formulation was limited to consolidating component center budget requests and providing budget requests to the TRICARE Management Activity (TMA). Further, the budget requests DCOE provided to TMA did not have complete narrative justifications.
Office of Management and Budget Circular A-11 specifies that the basic requirements for a justification include a description of the means and strategies used to achieve performance goals. At the time of GAO's review, prior-year funding and obligations data and funding received by component centers from sources external to DCOE were not readily available. The absence of these data indicates that TMA and DCOE did not have the benefit of these data to inform budget formulation decisions. Also, quarterly reviews conducted by DCOE that collect data on performance and resources do not include component centers. Expansion of reviews and greater access to performance information could provide DCOE an opportunity to collect information that links component center performance with resources and better informs budget decision making. DCOE's mission and funding have not been clearly defined to Congress. At a congressional hearing, Members expressed differing visions of DCOE's mission and voiced concern about the amount of time needed to establish DCOE and achieve results. Moreover, in four congressional subcommittee testimonies, DCOE's first director and the Assistant Secretary of Defense for Health Affairs characterized DCOE as DOD's "open front door for all concerns related to PH and TBI." These statements suggest a divergent understanding of DCOE's role and underscore the importance of clear communication on DCOE's mission, funding, and activities. Because DCOE is a relatively small entity primarily funded through the larger Defense Health Program appropriation, it falls below the most detailed level that is presented in congressional budget presentation materials. In addition, at Congress's request DOD provides mandated and ad hoc reports on PH and TBI expenditures. While these reports present information on activities and accomplishments for PH and TBI, DOD does not--and is not required to--report separately on DCOE.
To enhance visibility and improve accountability, GAO recommends that the Secretary of Defense direct the Director of TMA to work with the Director of DCOE to develop and use additional narrative in budget justifications, regularly collect and review data on funding and obligations, and expand its review and analysis process. DOD concurred with GAO's recommendations. GAO understands that the expanded review and analysis process would not include realigned component centers. GAO agrees that ensuring entities external to TMA comply with regular collections of funding and obligations data could be a limitation. |
The purpose of DOD’s Pilot Mentor-Protégé Program, as established in statute, is to provide incentives to major DOD contractors to furnish eligible disadvantaged small business concerns with assistance designed to (1) enhance the capabilities of eligible disadvantaged small business concerns to perform as subcontractors and suppliers under DOD contracts and other contracts and subcontracts; and (2) increase the participation of such business concerns as subcontractors and suppliers under DOD contracts, other federal government contracts, and commercial contracts. DOD’s Office of Small Business Programs (OSBP) manages the program, including developing and overseeing the policies and procedures for the program. In addition, a number of DOD components administer individual mentor-protégé agreements. DOD components and the mentors and protégés that participate in the program are required to follow DOD’s regulations and policies and procedures. For example, DOD’s regulations state that the director of small business programs for each DOD component has the authority to approve firms as mentors and to approve mentor-protégé agreements, among other things. OSBP has developed tools to facilitate program administration, including a template to assist mentors and protégés in developing proposed agreements and a checklist to assist DOD components with ensuring that proposed agreements address required elements, which we discuss later in this report. However, DOD components and mentors and protégés are not required to use the tools. DOD’s Pilot Mentor-Protégé Program offers two types of agreements. Reimbursable agreements provide payments directly to the mentor for the cost of providing developmental assistance to the protégé. Credit agreements provide mentors with credit toward their subcontracting goals for providing developmental assistance to protégés. 
Mentors can provide protégés with various types of assistance under mentor-protégé agreements, including general business management, subcontract awards, progress payments, advance payments, and loans. Mentors may also obtain assistance for protégés from Small Business Development Centers, Procurement Technical Assistance Centers, historically black colleges and universities, and minority institutions of higher education (see fig. 1). DOD policies and procedures also require mentors to report on the progress made under active mentor-protégé agreements in semiannual reports. Mentors must also include in semiannual reports, among other things, (1) any assistance obtained by the mentor firm for the protégé firm from Small Business Development Centers, Procurement Technical Assistance Centers, historically black colleges and universities, and minority institutions of higher education for developmental assistance provided to protégés, (2) dollars credited (if any) toward applicable subcontracting goals as a result of developmental assistance provided to the protégé, and (3) the impact of the agreement in terms of capabilities enhanced, certifications received, or technology transferred. In addition, annual reports are to contain data on the protégé’s employment, revenue, and participation in DOD contracts. DOD procedures also require protégés to report on the progress made in each of the 2 years following the completion of their agreement. Further, DOD procedures require the Defense Contract Management Agency (DCMA), another DOD component that manages agreements for the department, to conduct a performance review annually of each mentor-protégé agreement. DOD’s process for establishing and approving mentor-protégé agreements generally consists of nine steps (as illustrated in fig. 2): 1. Mentor seeks approval to participate in program. 
A firm that is interested in participating in the DOD Pilot Mentor-Protégé Program must submit an application for approval as a mentor to the Director of OSBP. To be eligible as a mentor, a firm must not be affiliated with the protégé prior to the approval of the agreement and must demonstrate it is qualified to provide assistance that will contribute to the program's purpose, among other requirements.

2. Mentor establishes a counterpart. The mentor is solely responsible for selecting the protégé. To be eligible as a protégé, a firm must be, among other things, a women-owned small business, Historically Underutilized Business Zone (HUBZone) small business, service-disabled veteran-owned small business, or an eligible entity employing the severely disabled.

3. Mentor conducts a needs assessment. Once the mentor selects the protégé, the mentor conducts a preliminary assessment of the protégé's developmental needs. The mentor and protégé then mutually agree on the developmental assistance that the mentor is to provide to address those needs.

4. Parties determine the type of agreement. The mentor and protégé may apply for either a reimbursable mentor-protégé agreement or a credit agreement.

5. Parties develop the agreement. The mentor and protégé develop a proposed mentor-protégé agreement that must address a number of required elements, which we discuss in greater detail later in this report.

6. Mentor submits proposed agreement for review. The mentor submits the proposed agreement to the cognizant DOD component for review and approval.

7. DOD component approves or denies agreement. As previously noted, the director of small business programs for each DOD component has the authority to approve mentor-protégé agreements.

8. Parties start the agreement. The mentor and protégé begin the agreement, which generally cannot exceed 3 years.

9. Parties report on progress.
The mentor is required to submit a semiannual report on the progress made under an active mentor-protégé agreement throughout the term of an agreement. The protégé is required to submit an annual report on progress made during each fiscal year of the program term and for each of the 2 fiscal years following the expiration of the agreement. The NDAA for Fiscal Year 2016 made several changes to DOD's Pilot Mentor-Protégé Program. For example, with respect to the relationship between a mentor firm and protégé firm, the act strengthened affiliation rules by explicitly stating that the presence of certain family relationships between owners and managers in both the mentor and protégé firms constitutes affiliation and thereby is not allowed. In addition, the act amended the size requirement so that DOD's size standard for protégés is set at less than half of that used by SBA. See appendix III for additional information on selected changes to the program made by the NDAA for Fiscal Year 2016. According to SBA, the All Small Mentor-Protégé Program is designed to apply to all federal small business contracts, including those that small businesses currently participate in under other federal mentor-protégé programs. In addition, according to SBA, the rule establishing the regulations for the program is intended to be consistent with the regulations for SBA's 8(a) Business Development Mentor-Protégé Program. SBA's All Small Mentor-Protégé Program includes certain requirements for the mentor-protégé programs of other federal agencies, but, as required by federal law, the program exempts DOD's Pilot Mentor-Protégé Program from these requirements. For example, the final rule states that the head of a department or agency must submit a plan to SBA for any previously existing mentor-protégé program that the department or agency seeks to continue within 1 year of the effective date of the rule (or by August 24, 2017).
In addition, the final rule states that a federal department or agency may not start a new mentor-protégé program unless the head of the department or agency submits a plan to the SBA Administrator for the program and the SBA Administrator approves the plan. The incentives provided to mentors under SBA’s programs differ from those provided under DOD’s Pilot Mentor-Protégé Program, as we discuss in greater detail later in this report. DOD components have the authority to approve mentor-protégé agreements, and they are required to follow the regulations, policies, and procedures DOD has established for the program. However, DOD lacks controls needed to provide reasonable assurance that mentor-protégé agreements approved by DOD components contain all elements required by DOD policies and regulations. DOD’s regulations and OSBP procedures do not prescribe the process DOD components should follow to approve mentor-protégé agreements. Instead, DOD’s OSBP has allowed DOD components to develop their own detailed procedures. As shown in the following examples, these procedures differ in terms of some elements, such as criteria and time frames: Air Force. The Department of the Air Force (Air Force) issues an announcement each fiscal year that outlines its process for considering proposed mentor-protégé agreements. The fiscal year 2016 announcement describes a two-step solicitation process. For fiscal year 2016, the Air Force was first to solicit white papers from prospective mentors describing, among other things, the proposed developmental assistance the mentor planned to provide to the protégé and the anticipated benefits of the agreement to the Air Force and protégé. Second, the Air Force was to invite mentors with white papers found to be consistent with the program goals to submit a formal mentor-protégé agreement proposal. As part of the proposal, mentors were required to submit a mentor-protégé agreement using the template found on OSBP’s website. 
The announcement also stated that the Air Force planned to consider four factors when determining whether to recommend proposals for award: (1) relevance of proposed technology to current Air Force requirements, (2) overall technical approach to providing transfer of technology and developmental assistance to the protégé, (3) the mentor's capacity and capability to achieve the objectives of the mentor-protégé program, and (4) estimated cost of the proposed assistance. Army. The Department of the Army (Army) guidance states that proposed mentor-protégé agreements are to be approved and funded twice each fiscal year. The guidance states that the Army intends to approve and fund proposed agreements that have a strong technical component or focus on the innovative transfer of state-of-the-art technology that supports Army troops. The Army guidance further states that mentors are required to submit a mentor-protégé agreement using the template found on OSBP's website. The guidance notes that the Army is to consider the following factors in descending order of importance when evaluating proposed mentor-protégé agreements: (1) subcontracting and prime contractor opportunities, (2) technical approach for developmental assistance, (3) involvement of historically black colleges and universities and minority institutions, (4) relevance to Army programs, (5) corporate capability and commitment, (6) management plan, and (7) mentor and protégé past and present performance. Navy. The Department of the Navy (Navy) guidance states that the prospective mentor and protégé are required to schedule an informal introductory briefing with the Navy's director of small business programs to discuss the proposed concept and objective of the mentor-protégé agreement.
As part of the briefing, the mentor and protégé are required to discuss, among other things, the mentor's experience providing developmental assistance under other mentor-protégé agreements; the proposed benefits to the mentor, protégé, and the Navy under the proposed agreement; and the rationale for why the proposed agreement should be sponsored. The Navy is then to provide the mentor and protégé with feedback on whether to submit a proposed mentor-protégé agreement. The Navy's guidance also states that the Navy accepts mentor-protégé agreement proposals during three time frames each fiscal year (December 1-31, March 1-31, and August 1-31). As part of the proposal, mentors are required to submit a mentor-protégé agreement using the template found on OSBP's website. The guidance further states that the Navy is to consider the following factors in descending order of importance when evaluating proposed mentor-protégé agreements: (1) merit of the technology transfer to the protégé, (2) perceived benefit of the agreement to the Navy, (3) perceived benefit of the agreement to the protégé, (4) percentage of hours associated with technology transfer, (5) subcontracting opportunities available to the protégé, (6) utilization of historically black colleges and universities and minority institutions, Procurement Technical Assistance Centers, and Small Business Development Centers, and (7) proposed cost. Although DOD components have flexibility in considering a range of factors when approving mentor-protégé agreements, the agreements themselves are required to include certain elements that are defined in DOD's procedures, and these elements serve a variety of purposes. For example, an agreement must include the protégé's primary North American Industry Classification System (NAICS) code, which is used to determine the protégé's eligibility to participate in the program.
In addition, the agreement must describe the developmental assistance planned for the protégé, how the assistance is to address the protégé's identified needs and enhance its ability to perform successfully under contracts or subcontracts within DOD and other federal agencies, and factors to assess the protégé firm's developmental progress under the program, including specific milestones for providing assistance. The agreement must also specify a program participation term that does not exceed 3 years, which provides assurance that the mentor provides assistance to the protégé in a timely manner. Further, the agreement is required to be signed and dated by the mentor and protégé, which provides evidence that both parties have committed to its terms. Figure 3 presents a complete list of the elements that mentors and protégés are required to include in their agreements. Our review of a randomly selected probability sample of 44 of the 78 total active DOD mentor-protégé agreements in place as of June 2016 found that a number of these agreements were missing required elements. Specifically, based on our review, we estimate that 27 percent of the agreements did not address all required elements. With respect to specific elements, we estimate that 25 percent of the agreements did not include the signature of the mentor and protégé, 9 percent did not include the protégé's primary NAICS code, and 7 percent did not include an anticipated start and end date for the agreement. These missing elements suggest that the components' procedures for approving mentor-protégé agreements do not provide reasonable assurance that agreements are completed in accordance with DOD requirements. According to officials from OSBP, which manages DOD's Pilot Mentor-Protégé Program, the DOD component that approves a mentor-protégé agreement is responsible for providing assurance that the agreement includes the required elements in accordance with the program's policies.
However, DOD's procedures do not define how DOD components should review these proposals. OSBP officials told us that OSBP developed a checklist for DOD components to use to document the review of proposed mentor-protégé agreements. The checklist includes whether the mentor and protégé have signed the agreement, whether the mentor and protégé meet the program's eligibility requirements, and whether the agreement includes a description and milestones for the developmental assistance that is to be provided under the agreement, among other things. A senior OSBP official added that OSBP does not require DOD components to use the checklist, nor does OSBP periodically review the actions DOD components have taken to review mentor-protégé agreements for approval because OSBP does not want to micromanage the components. Instead, the official said that OSBP relies on DOD components to ensure that approved mentor-protégé agreements include all required elements, and in some cases, DOD components have developed their own checklist. Federal internal control standards state that management should implement control activities through policies, and that this principle may be applied by, among other things, periodically reviewing control activities for continued relevance and effectiveness in achieving the entity's objective or addressing related risks. As the manager of the Pilot Mentor-Protégé Program, OSBP is ultimately responsible for overseeing the program's policies and procedures. Because OSBP does not review the procedures established by DOD components for approving proposed mentor-protégé agreements or conduct its own monitoring of the components, it does not have assurance that the agreements approved by the components include all required elements.
As a result, OSBP cannot ensure that the program requirements are serving their intended purposes—for example, as previously discussed, industry codes are used to determine whether protégés are eligible to participate in the program, and the signatures of the mentor and protégé are required for the agreement to meet the program requirements. Although DOD has established some performance measures for its Pilot Mentor-Protégé Program, the department lacks performance goals and other measures needed to fully assess the program. DOD prepares an annual report for the program related to the progress of protégés under active mentor-protégé agreements and protégés that completed or otherwise terminated participation in the program during the preceding 2 fiscal years. The report includes specific performance measures that describe changes in protégés' annual revenue, number of employees, number of DOD prime contract awards, and number of DOD subcontract awards. DOD's annual reports show that protégés experienced increases in revenues, number of employees, and DOD prime contracts and subcontracts while participating in the program from fiscal years 2011 through 2015. In contrast, the reports show that protégés that left the program during this period experienced decreases in revenues and number of employees in the 2 years following program participation (see table 1). However, we found that DOD lacks performance goals and additional measures needed to effectively assess its Pilot Mentor-Protégé Program. Specifically, DOD has not established any measurable goals for the performance information it reports for the program. Performance measurement is the ongoing monitoring and reporting of program accomplishments, particularly progress toward pre-established goals.
We have previously identified performance measurement as a best practice that allows organizations to track progress in achieving their goals and gives managers crucial information to identify gaps in program performance and plan any needed improvements. In addition, according to federal internal control standards, managers need to compare actual performance against planned or expected results and analyze significant differences. While DOD collects and reports information on protégés’ annual revenue, employment levels, and prime contract and subcontract awards, it has not established goals for these measures. As a result, DOD is limited in its ability to use this information to assess program performance. In addition, DOD has not established measures for one part of the program’s statutory purpose. The information that DOD currently reports relates to the part of the program purpose to increase the participation of small businesses in DOD and other federal government contracts. As previously discussed, part of the program’s purpose is also to enhance the capabilities of eligible small businesses to perform as subcontractors and suppliers under DOD and other contracts and subcontracts, and the current program measures do not address this part of the program’s purpose. Mentors report information to DOD in semiannual reports on how the capabilities of protégés were enhanced, what certifications the protégés obtained, and what technology the mentors transferred to the protégés. However, DOD does not include this information in its annual report for the program, and it has not developed performance measures or goals related to this information. As a result, DOD is not using the information it collects to fully assess how well the program is enhancing the capabilities of eligible small businesses to perform under DOD and other contracts, and, therefore, not fully addressing the purpose of the program. 
OSBP officials told us that they have not considered setting targets each year for the program's current performance measures because these measures—revenues, employee levels, and prime contract and subcontract awards—do not reflect the primary focus of the program, which the officials noted is to assist small businesses in becoming part of the defense supply chain. The officials noted that DOD is working to develop measures that provide a better indication of program effectiveness. For example, the officials described a potential measure that would assess how the assistance mentors provide to protégés under agreements helps resolve operational challenges. However, DOD had yet to establish any such measures as of January 2017. Moreover, the officials estimated that it would take about 2 years for DOD to develop and establish baselines for these measures. Until DOD establishes performance goals and related performance measures consistent with the program's stated purpose, it may be more difficult for DOD to analyze the effectiveness of the Pilot Mentor-Protégé Program and to identify and prioritize potential improvements. As a result, Congress may not have the information it needs to make informed decisions on whether to reauthorize the pilot program, terminate it, or make it permanent. DOD's Pilot Mentor-Protégé Program and SBA's All Small Mentor-Protégé Program have a number of key differences, including their eligibility requirements and participation incentives. These differences are largely based on statutory provisions and program rules that may—but will not necessarily—affect future efforts to harmonize the programs. Implementation of SBA's All Small Mentor-Protégé Program is ongoing, but DOD and SBA officials said they plan to consider harmonizing their programs after SBA's program is fully implemented in July 2017.
Based on our review of statutory provisions and program rules, we found that DOD's Pilot Mentor-Protégé Program and SBA's All Small Mentor-Protégé Program differ in a number of ways. For example, DOD and SBA apply the SBA size standards differently when determining the eligibility of protégés under their respective programs. SBA's small business size standards establish the largest that a firm can be and still qualify as a small business for federal government programs. DOD's Pilot Mentor-Protégé Program limits the size of eligible protégés to less than one-half of SBA's small business size standard. In contrast, a firm can qualify as a protégé under SBA's All Small Mentor-Protégé Program if it does not exceed the SBA small business size standard for its primary NAICS code. Table 2 illustrates how DOD and SBA apply the SBA size standard differently under their respective programs for selected NAICS codes. For example, a business that manufactures printed circuit assemblies would be eligible as a protégé under DOD's Pilot Mentor-Protégé Program as long as it employed fewer than 375 employees; under SBA's All Small Mentor-Protégé Program, the same business would be eligible as long as it employed no more than 750 employees. Further, DOD's Pilot Mentor-Protégé Program permits mentors to enter into agreements with more than one protégé. Under SBA's All Small Mentor-Protégé Program, mentors generally are not permitted to have more than one protégé at a time. However, SBA may authorize a firm to mentor more than one protégé at a time if it can demonstrate that the additional mentor-protégé relationship will not adversely affect the development of either protégé firm (e.g., the second firm may not be a competitor of the first firm), but under no circumstances does SBA permit a mentor to have more than three protégés at one time.
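The size-standard comparison above reduces to a simple arithmetic check: DOD requires a protégé to employ fewer than half the SBA standard, while SBA requires only that the firm not exceed the standard itself. The sketch below is illustrative only—it is not code from either agency, and the 750-employee figure is the SBA employee-based size standard for printed circuit assembly manufacturing cited in the example:

```python
def eligible_as_protege(employees: int, sba_size_standard: int, program: str) -> bool:
    """Illustrative protégé size check under the two programs.

    DOD Pilot Mentor-Protégé Program: fewer than half the SBA size standard.
    SBA All Small Mentor-Protégé Program: no more than the SBA size standard.
    (Employee-based standards only; other eligibility criteria are ignored.)
    """
    if program == "DOD":
        return employees < sba_size_standard / 2
    if program == "SBA":
        return employees <= sba_size_standard
    raise ValueError(f"unknown program: {program}")


# Printed circuit assembly manufacturer (SBA employee standard: 750)
print(eligible_as_protege(374, 750, "DOD"))  # True: fewer than 375 employees
print(eligible_as_protege(375, 750, "DOD"))  # False: not fewer than 375
print(eligible_as_protege(750, 750, "SBA"))  # True: no more than 750
```

The same firm can thus be too large to be a DOD protégé while still qualifying under SBA's program, which is the asymmetry table 2 illustrates.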
In addition, DOD's Pilot Mentor-Protégé Program receives appropriated funds, while SBA's All Small Mentor-Protégé Program does not; as previously noted, DOD's Pilot Mentor-Protégé Program received $28.3 million for fiscal year 2016. DOD uses its appropriated funding to provide direct reimbursement to mentors for providing developmental assistance to protégés. Incentive mechanisms are another key difference between DOD's Pilot Mentor-Protégé Program and SBA's All Small Mentor-Protégé Program. Under DOD's program, mentors may receive direct reimbursement or credit toward subcontracting goals for providing developmental assistance to protégés. By statute, a mentor in DOD's Pilot Mentor-Protégé Program generally may not receive reimbursement for costs of assistance that exceed $1 million in a fiscal year. Under SBA's All Small Mentor-Protégé Program, a mentor and protégé may agree to seek any government prime contract or subcontract jointly as a small business, provided the protégé qualifies as "small" for the procurement, and receive exclusion from the affiliation rules for the duration of the mentor-protégé agreement after both meeting certain regulatory requirements and obtaining SBA approval. In addition, SBA's program permits federal agencies to provide incentives during the contract evaluation process to a mentor that will provide significant subcontracting work to its SBA-approved protégé firm, where appropriate. Although the statutory provisions and program rules that determine mentor incentives differ, DOD's Pilot Mentor-Protégé Program and SBA's All Small Mentor-Protégé Program face a similar inherent risk that mentors may receive benefits but may not necessarily provide developmental assistance to protégés as agreed to in their mentor-protégé agreements. Officials from both agencies said they take certain actions intended to mitigate this risk.
For example, DOD requires an annual review of each mentor-protégé agreement to verify the performance information provided in the annual reports. Similarly, SBA reviews the protégé’s report on the mentor-protégé relationship and may decide not to continue the agreement if it finds that the mentor has not provided the agreed-upon assistance or the assistance has not resulted in developmental gains for the protégé. Appendix IV provides more details on the characteristics of DOD’s Pilot Mentor-Protégé Program and SBA’s All Small Mentor-Protégé Program. As previously noted, SBA’s All Small Mentor-Protégé Program is designed to apply to nearly all federal small business contracts, including those that small businesses currently participate in as part of other federal mentor-protégé programs. Under the new program, federal agencies must obtain approval from the SBA Administrator in order to continue their existing mentor-protégé programs or to start new ones. By statute, federal departments and agencies with mentor-protégé programs in effect before January 2, 2013, are permitted to continue them until late July 2017 without SBA approval; after that date, SBA approval must be granted to continue a program. Although it is too soon to tell whether the other federal agencies that currently administer mentor-protégé programs will seek to continue their own programs, SBA’s experience in implementing the All Small Mentor-Protégé Program may provide useful insights for harmonizing other federal mentor-protégé programs—that is, identifying and assessing differences and eliminating or reducing inconsistencies, where appropriate. For example, implementing SBA’s program may involve actions to potentially harmonize characteristics of mentor-protégé programs that serve different socioeconomic categories of small businesses and multiple agencies with various procurement needs.
Although the statute creating SBA’s program exempted DOD from the program’s requirements, the legislation implementing DOD’s program and the final rule implementing SBA’s program state that both programs are designed to enhance the capabilities of protégé firms to perform under federal government and commercial contracts. This general similarity in program purpose could facilitate harmonization between the programs. DOD and SBA officials said that opportunities to harmonize the programs may exist; however, they said it is too soon to explore harmonization because SBA is in the early stages of implementing the All Small Mentor-Protégé Program. Officials said that they plan to consider opportunities to harmonize their programs once implementation of SBA’s new program is complete later in 2017. DOD’s Pilot Mentor-Protégé Program is intended to both (1) enhance the capabilities of eligible disadvantaged small business concerns to perform as subcontractors and suppliers under DOD contracts and other contracts and subcontracts and (2) increase the participation of such business concerns as subcontractors and suppliers under DOD contracts, other federal government contracts, and commercial contracts. DOD procedures require that mentor-protégé agreements contain certain elements to help provide assurance that the agreements support program purposes and that the participants meet program requirements, among other things. However, we found that program participants did not consistently include all of the required elements in these agreements and that DOD’s OSBP does not review the procedures that DOD components use to approve the agreements, as suggested by federal internal control standards. As a result, some agreements have been approved even though they were missing required elements, and DOD cannot ensure that these program requirements are serving their intended purposes.
In addition, DOD’s performance measures for the decades-old program lack performance goals and additional measures needed to effectively assess its Pilot Mentor-Protégé Program. Although DOD officials said they are considering changes to their performance measures, they had yet to make such changes as of January 2017. Until DOD establishes performance measures and related measurable goals consistent with the program’s stated purpose, it may be more difficult for DOD to fully assess the effectiveness of the Pilot Mentor-Protégé Program and to identify and prioritize potential improvements. In addition, Congress may not have the information it needs to make informed decisions on whether to reauthorize the pilot program, terminate it, or make it permanent. To provide reasonable assurance that DOD’s Pilot Mentor-Protégé Program meets its mission, we recommend that the Director, DOD’s Office of Small Business Programs, take the following two actions: 1. Conduct periodic reviews of the processes DOD components follow to approve agreements and take oversight actions, as appropriate. 2. Complete actions to develop performance goals and related measures that are consistent with the program’s stated purpose. We provided a draft of this report to DOD and SBA for comment. In its written comments, which are summarized and reprinted in appendix V, DOD concurred with both recommendations. SBA provided technical comments, which we incorporated as appropriate. DOD concurred with our recommendation to conduct periodic reviews of the processes DOD components follow to approve agreements and take oversight actions, as appropriate. DOD said it has drafted a standard operating procedure (SOP) that addresses GAO’s recommendations. In addition, DOD said that the SOP requires DOD component program managers to review mentor-protégé proposal packages prior to submitting them for review and concurrence. 
DOD also said the draft SOP includes a new checklist that identifies required elements, such as the North American Industry Classification System code, mentor approval letter, and fully executed copy of the mentor-protégé agreement. DOD also concurred with our recommendation to complete actions to develop performance goals and related measures that are consistent with the program’s stated purpose. DOD stated that it is currently evaluating ways to develop measures that provide a better indication of program effectiveness for active mentor-protégé agreements and the 2 years following program participation. DOD noted that establishing baseline performance goals for all protégés to meet would not be advantageous because of the differences among agreements and the small business needs of each respective DOD component; however, DOD stated that establishing a set of goals by tier group, for example, may better determine program performance, as it would level the playing field between large and small business concerns. Finally, DOD stated that it collects information from mentors on how they have enhanced the capabilities of protégés and plans to begin including these data in the annual report to Congress once performance measures and goals have been established. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, DOD’s Director, Office of Small Business Programs, and the SBA Administrator. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
This report examines (1) the Department of Defense’s (DOD) procedures for approving mentor-protégé agreements; (2) DOD’s performance measures for the program; and (3) differences between DOD’s Pilot Mentor-Protégé Program and the Small Business Administration’s (SBA) All Small Mentor-Protégé Program and the agencies’ efforts to harmonize the two programs. To address the first objective, we reviewed DOD’s current regulations and policies and procedures for approving mentor-protégé agreements, including changes mandated by the National Defense Authorization Act (NDAA) for Fiscal Year 2016. In addition, we reviewed guidance developed by three military services (the Departments of the Air Force, Army, and Navy) that administer and have the authority to approve mentor-protégé agreements. We also reviewed a randomly selected probability sample of 44 of the 78 active DOD mentor-protégé agreements in place as of June 2016 to assess how well DOD’s procedures provide reasonable assurance that mentor-protégé agreements meet program requirements. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. All percentage estimates in this report are presented along with their margins of error at the 95 percent confidence level. 
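The sampling math described above can be illustrated with a short sketch: a proportion estimated from a sample of 44 agreements drawn from a population of 78, with a 95 percent margin of error computed under a normal approximation and a finite population correction for sampling without replacement. This is a sketch under those assumptions; GAO's actual estimator may differ, and the 12-of-44 figure used below is purely hypothetical.

```python
import math


def margin_of_error(p_hat: float, n: int, population: int, z: float = 1.96) -> float:
    """Margin of error for an estimated proportion under simple random
    sampling without replacement: normal approximation (z = 1.96 for a
    95 percent confidence level) with a finite population correction."""
    fpc = (population - n) / (population - 1)  # finite population correction
    standard_error = math.sqrt(p_hat * (1 - p_hat) / n * fpc)
    return z * standard_error


# Hypothetical example: 12 of 44 sampled agreements share some attribute,
# with the sample drawn from the population of 78 active agreements.
p_hat = 12 / 44
print(f"estimate: {p_hat:.0%}, margin of error: +/-{margin_of_error(p_hat, 44, 78):.1%}")
```

Because the sample covers more than half of the small population, the finite population correction meaningfully tightens the interval relative to the infinite-population formula.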
We interviewed DOD officials from the Office of Small Business Programs (OSBP) to discuss the procedures for approving mentor-protégé agreements and the controls in place to ensure that DOD components followed the procedures. We assessed DOD’s practices against Standards for Internal Control in the Federal Government. We also reviewed the NDAA for Fiscal Year 2016 to identify changes that the act made to the DOD Pilot Mentor-Protégé Program, such as amending the size requirement so that a protégé is less than half of the SBA size standard for its primary North American Industry Classification System code. To address the second objective, we reviewed the measures and other information DOD reports on the performance of its Pilot Mentor-Protégé Program, including DOD’s current measures, which provide information related to the participation of protégés as subcontractors and suppliers under DOD and other contracts, and whether these measures address how the capabilities of protégé firms have been enhanced. This review included the annual reports prepared by the Defense Contract Management Agency (DCMA) for fiscal years 2011 through 2015 that provide aggregate data on protégé firms’ annual revenues, number of employees, and awards of prime contracts and subcontracts during their tenure in the program and for 2 years after they leave the program. We also reviewed the most recent semiannual report provided to DOD by the mentor and protégé for the randomly selected probability sample of mentor-protégé agreements we reviewed to examine the types of performance information included in these reports. We also interviewed DOD officials regarding their use of performance information. We compared DOD’s practices to those GAO previously identified as being associated with agencies that were successful in measuring their performance and federal internal control standards. 
To address the third objective, we reviewed the DOD Pilot Mentor-Protégé Program’s implementing statute and subsequent legislative changes to the program, including those made by the NDAA for Fiscal Year 2016. We also reviewed SBA’s final rule, which implemented provisions of the Small Business Jobs Act of 2010 and the National Defense Authorization Act for Fiscal Year 2013, which established the All Small Mentor-Protégé Program and the policies and procedures for SBA’s 8(a) Business Development Program. We identified selected characteristics of the programs, including the purpose, funding, number of agreements, eligibility requirements of mentors and protégés, authority to approve agreements, types of assistance mentors may provide to protégés, reporting requirements for mentors and protégés, and the mechanism used to encourage mentor participation in each program. We also identified differences between the programs for some of these characteristics and discussed them with DOD and SBA officials, including how differences between the programs could affect possible efforts to harmonize the programs. In addition, we analyzed a randomly selected probability sample of 44 of the 78 active DOD mentor-protégé agreements in place as of June 2016 to identify characteristics of protégé firms. These characteristics include the number of years protégé firms have been in business, the number of employees of the protégé firms, annual revenues of the protégé firms, and their previous contracting experience. We also identified the types of developmental assistance mentor firms agreed to provide to protégé firms based on information reported for the randomly selected probability sample of agreements. These results can be found in appendix II. We conducted this performance audit from January 2016 to April 2017 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Under the Department of Defense’s (DOD) Pilot Mentor-Protégé Program, mentor firms agree to provide eligible small business concerns (protégé firms) with technical, managerial, and other business development assistance under approved mentor-protégé agreements. Based on our review of a randomly selected probability sample of 44 of the 78 active DOD mentor-protégé agreements in place as of June 2016, we analyzed selected characteristics of protégé firms that enter the program and the types of developmental assistance that mentor firms agree to provide protégé firms. As shown in figure 4, we estimate that over half (57 percent) of protégé firms have been in business for 10 years or less, and an estimated 21 percent have been in business 5 years or less. We also estimate that more than 33 percent of protégé firms have been in business for more than 15 years. We also analyzed DOD mentor-protégé agreement documentation to determine the number of employees in protégé firms. We found that an estimated 77 percent of the protégé firms had 50 or fewer employees (see fig. 5). In addition, we examined DOD mentor-protégé agreement documentation to determine protégé firms’ annual revenue. We found that an estimated 57 percent of protégé firms had annual revenue of $5 million or less. More than three-quarters of protégé firms had annual revenue of $10 million or less, and an estimated 17 percent of protégé firms had annual revenues above $10 million (see fig. 6). We also analyzed DOD mentor-protégé agreement documentation to determine the protégé firms’ previous contracting experience. 
We found that an estimated 39 percent of protégé firms indicated having obtained a DOD prime contract in the 2 fiscal years before the proposed mentor-protégé agreement was submitted. We also found that an estimated 74 percent of protégé firms indicated having obtained DOD subcontracts in the 2 fiscal years before the proposed mentor-protégé agreement was submitted and that an estimated 26 percent indicated having received a federal agency (other than DOD) subcontract award from the mentor firm during that time. In addition, an estimated 53 percent of protégé firms indicated having received subcontract awards from their mentor firm in the 2 fiscal years before the proposed agreement was submitted. We also found that an estimated 13 percent of protégé firms indicated they did not have previous contracting experience (see fig. 7). Based on our analysis of DOD mentor-protégé agreement documentation, we found that mentor firms anticipated providing some types of assistance more frequently than others. For example, we found that all mentor firms anticipated providing general business assistance to their protégé firms. In addition, we found that an estimated 93 percent of mentor firms anticipated awarding subcontracts to their protégé firms, and an estimated 80 percent anticipated obtaining assistance for their protégé firms through historically black colleges and universities, minority institutions of higher education, Procurement Technical Assistance Centers, or Small Business Development Centers. In contrast, an estimated 2 percent of mentor firms anticipated extending loans to protégé firms. We also found that an estimated 20 percent anticipated providing progress payments and an estimated 18 percent anticipated providing advance payments to their protégé firms (see fig. 8). The National Defense Authorization Act (NDAA) for Fiscal Year 2016 made a number of changes to the Department of Defense’s (DOD) Pilot Mentor-Protégé Program.
In addition to the changes previously discussed, Section 861 of the NDAA for Fiscal Year 2016 also made the following changes and additions, among others:

- Extended the DOD mentor-protégé program for 3 more years, through fiscal year 2018, for the formation of new agreements, and extended the reimbursement of incurred costs and credit toward attaining subcontracting goals under existing agreements through fiscal year 2021.

- Imposed a limit on when protégé firms can enter into mentor-protégé agreements under the program, restricting entry to the 5-year period beginning on the date the protégé firm enters into its first mentor-protégé agreement.

- Changed program eligibility requirements to include that a protégé firm may, among other things, be a small business concern that is either a nontraditional defense contractor or currently provides goods or services in the private sector that are critical to enhancing the capabilities of the defense supplier base and fulfilling key DOD needs.

- Required mentor firms to report at least once a fiscal year any new awards of subcontracts, on a competitive or noncompetitive basis, to the protégé firm under DOD contracts or other contracts, including the value of such subcontracts.

- Required mentor firms to report at least once a fiscal year any assistance obtained on behalf of the protégé from Small Business Development Centers, certain entities providing procurement technical assistance, or historically black colleges or universities or minority institutions of higher education.

- Required the Office of Small Business Programs (OSBP) to review reports submitted by mentor firms on specific statutory topics and, if a mentor-protégé agreement is not furthering the purpose of the program, to decide not to approve any continuation of the agreement.
- Removed “business development” assistance from the permissible types of mentor firm personnel-provided assistance and removed cash in exchange for an ownership interest in the protégé firm from the types of assistance a mentor firm may provide to a protégé firm.

- Revised the definition of the term “disadvantaged small business concern,” limiting it to firms that are less than half the size standard corresponding to their primary North American Industry Classification System code.

- Changed affiliation rules by explicitly stating that the presence of certain family relationships between the owners and managers of the mentor and protégé firms constitutes affiliation and thereby is not allowed.

The three programs’ stated purposes are as follows. DOD Pilot Mentor-Protégé Program: enhance the capabilities of eligible small business concerns to perform as subcontractors and suppliers under DOD contracts and other contracts and subcontracts, and increase the participation of such business concerns as subcontractors and suppliers under DOD contracts, other federal government contracts, and commercial contracts. Small Business Administration (SBA) 8(a) Business Development (BD) Mentor-Protégé Program: enhance the capabilities of the protégé (an 8(a) BD Program participant), assist the protégé with meeting the goals established in its SBA-approved business plan, and improve the protégé’s ability to successfully compete for contracts. SBA All Small Mentor-Protégé Program: enhance the capabilities of protégé firms by requiring approved mentors to provide business development assistance to protégé firms and to improve the protégé firms’ ability to successfully compete for federal contracts.
Funding: $28.3 million (fiscal year 2016) for DOD’s program. Active agreements: 78 under DOD’s program (as of June 2016), 465 under SBA’s 8(a) BD program (as of November 2016), and 45 under SBA’s All Small program (as of November 2016).

DOD Pilot Mentor-Protégé Program: Protégé must be a small disadvantaged business, a business entity owned and controlled by an Indian tribe or a Native Hawaiian Organization, a women-owned business, a Historically Underutilized Business Zone (HUBZone) small business, a service-disabled veteran-owned small business, a small business that statutorily qualifies as a nontraditional defense contractor, or an eligible entity employing the severely disabled. Protégé must qualify as small by being less than half the size standard corresponding to its primary North American Industry Classification System (NAICS) code. Mentor may rely on written representation that the protégé qualifies as small when entering into a mentor-protégé agreement but must confirm a protégé’s status as a HUBZone small business concern.

SBA 8(a) BD Mentor-Protégé Program: Protégé must (1) qualify as small for the size standard corresponding to its primary North American Industry Classification System (NAICS) code or identify that it is seeking business development assistance with respect to a secondary NAICS code and qualify as small for the size standard corresponding to that NAICS code; and (2) demonstrate how the business development assistance to be received through its proposed mentor-protégé relationship would advance the goals and objectives set forth in its business plan.

SBA All Small Mentor-Protégé Program: Protégé must qualify as small for the size standard corresponding to its primary North American Industry Classification System (NAICS) code or identify that it is seeking business development assistance with respect to a secondary NAICS code and qualify as small for the size standard corresponding to that NAICS code. A firm may self-certify that it qualifies as small for its primary or identified secondary NAICS code.
When the firm is not small for the primary NAICS code and seeks to qualify for the secondary NAICS code, the firm must demonstrate how the mentor-protégé relationship is a logical business progression for the firm and will further develop or expand current capabilities. SBA will not approve a relationship in a secondary NAICS code in which the firm has no prior experience.

Under DOD’s program, a mentor, among other requirements, must (1) not be affiliated with the protégé firm prior to the approval of the agreement; (2) demonstrate that it is qualified to provide assistance that will contribute to the purpose of the program, is in good financial health and character, and does not appear on a federal list of debarred or suspended contractors; (3) be an entity other than a small business that is a DOD prime contractor with an active subcontracting plan (unless a waiver to the small business exception has been obtained from the Director, Small Business Programs, Office of the Undersecretary of Defense for Acquisition, Technology, and Logistics) or (4) be a graduated 8(a) firm that provides documentation of its ability to serve as a mentor; and (5) be approved to participate as a mentor.

Small Business Administration (SBA) 8(a) Business Development (BD) Mentor-Protégé Program: Mentor must (1) be capable of carrying out its responsibilities to assist the protégé firm under the proposed mentor-protégé agreement; (2) possess good character; (3) not appear on the federal list of debarred or suspended contractors; and (4) be able to impart value to a protégé due to lessons learned and practical experience gained because of the 8(a) BD program, or through its knowledge of general business operations and government contracting.
SBA All Small Mentor-Protégé Program: Mentor must demonstrate that it (1) is capable of carrying out its responsibilities to assist the protégé under the proposed mentor-protégé agreement; (2) possesses good character; (3) does not appear on the federal list of debarred or suspended contractors; and (4) can impart value to a protégé firm due to lessons learned and practical experience gained or through its knowledge of general business operations and government contracting.

Under DOD’s program, the Director, Small Business Programs, for each military department or defense agency generally approves mentor-protégé agreements. Under SBA’s 8(a) BD program, the Associate Administrator for Business Development or his/her designee must approve the mentor-protégé agreement. The agreement will not be approved if SBA determines that the assistance to be provided is not sufficient to promote any real developmental gains to the protégé, or if SBA determines that the agreement is merely a vehicle to enable the mentor to receive 8(a) contracts. Under SBA’s All Small program, the Associate Administrator for Business Development or his/her designee must approve the mentor-protégé agreement. The agreement will not be approved if SBA determines that the assistance to be provided is not sufficient to promote any real developmental gains to the protégé, or if SBA determines that the agreement is merely a vehicle to enable the mentor to receive small business contracts.

Under DOD’s program, a mentor may enter into agreements with more than one protégé. Small Business Administration (SBA) 8(a) Business Development (BD) Mentor-Protégé Program: Generally, a mentor will have no more than one protégé at a time. However, SBA may authorize a concern to mentor more than one protégé at a time where it can demonstrate that the additional mentor-protégé relationship will not adversely affect the development of either protégé firm (e.g., the second firm may not be a competitor of the first firm).
Under no circumstances will a mentor be permitted to have more than three protégés at one time between the SBA 8(a) Business Development Mentor-Protégé Program and the SBA All Small Mentor-Protégé Program. SBA may authorize a participant to be both a mentor and a protégé at the same time where the participant can demonstrate that the second relationship will not compete or otherwise conflict with the first mentor-protégé relationship. SBA All Small Mentor-Protégé Program: Generally, a mentor will have no more than one protégé at a time. However, SBA may authorize a concern to mentor more than one protégé at a time where it can demonstrate that the additional mentor-protégé relationship will not adversely affect the development of either protégé firm (e.g., the second firm may not be a competitor of the first firm). Under no circumstances will a mentor be permitted to have more than three protégés at one time between the SBA 8(a) Business Development Mentor-Protégé Program and the SBA All Small Mentor-Protégé Program. SBA may authorize a participant to be both a mentor and a protégé at the same time where the participant can demonstrate that the second relationship will not compete or otherwise conflict with the first mentor-protégé relationship. Protégé may only have one active DOD mentor-protégé agreement. Protégé firm may generally have only one mentor at a time. SBA may approve a second mentor for a particular protégé firm where the additional relationship will not compete/conflict with the existing mentor-protégé relationship and either (1) the added relationship pertains to an unrelated secondary NAICS code or (2) the protégé is seeking expertise the existing mentor does not possess. Protégé firm may generally have only one mentor at a time.
SBA may approve a second mentor for a particular protégé firm where the additional relationship will not compete/conflict with the existing mentor-protégé relationship and either (1) the added relationship pertains to an unrelated secondary NAICS code or (2) the protégé is seeking expertise the existing mentor does not possess. Under DOD’s program, a mentor may, among other actions, provide management and/or technical assistance, award subcontracts, make loans, make advance and/or progress payments under subcontracts, and obtain assistance from Small Business Development Centers, Procurement Technical Assistance Centers, historically black colleges and universities, or minority institutions of higher education. Small Business Administration (SBA) 8(a) Business Development (BD) Mentor-Protégé Program: Mentor may provide management and/or technical assistance, award subcontracts, provide trade education, make loans and/or equity investments, and cooperate on joint venture projects. Under SBA’s All Small program, a mentor may likewise provide management and/or technical assistance, award subcontracts, provide trade education, make loans and/or equity investments, and cooperate on joint venture projects. Under DOD’s program, mentors must report on the progress made under active mentor-protégé agreements semiannually, including (1) dollars obligated (for reimbursable agreements); (2) expenditures; (3) dollars credited toward applicable subcontracting goals, if any; (4) number and dollar value of subcontracts awarded to the protégé firm; (5) description of developmental assistance provided, including milestones achieved; and (6) impact of the agreement in terms of capabilities enhanced, certifications received, and/or technology enhanced. Protégés must report data annually on the progress made in employment, revenues, and participation in DOD contracts. The protégé report may be provided as part of the mentor report.
In its annual business plan update, the protégé must report to SBA for the protégé’s preceding program year (1) all technical and/or management assistance provided by the mentor to the protégé; (2) all loans to and/or equity investments made by the mentor in the protégé; (3) all subcontracts awarded to the protégé by the mentor, and the value of each subcontract; (4) all federal contracts awarded to the mentor/protégé relationship as a joint venture (designating each as an 8(a), small business set-aside, or unrestricted procurement), the value of each contract, and the percentage of the contract performed and the percentage of revenue accruing to each party to the joint venture; and (5) a narrative describing the success such assistance has had in addressing the developmental needs of the protégé and addressing any problems encountered. Under SBA’s All Small program, protégés must report to SBA annually, within 30 days of the agreement approval anniversary, (1) all technical and/or management assistance provided by the mentor to the protégé; (2) all loans to and/or equity investments made by the mentor in the protégé; (3) all subcontracts awarded to the protégé by the mentor and all subcontracts awarded to the mentor by the protégé, and the value of each subcontract; (4) all federal contracts awarded to the mentor-protégé relationship as a joint venture, the value of each contract, and the percentage of the contract performed and the percentage of revenue accruing to each party to the joint venture; and (5) a narrative describing the success such assistance has had in addressing the developmental needs of the protégé and addressing any problems encountered. Additionally, the protégé must report the mentoring services it receives by category and hours. Under DOD’s program, DOD may provide mentors with either cost reimbursement or credit against applicable subcontracting goals established under contracts with DOD or other federal agencies.
Small Business Administration (SBA) 8(a) Business Development (BD) Mentor-Protégé Program: A mentor and protégé may participate in a joint venture as a small business for any government prime contract or subcontract, including procurements with a dollar value less than half the size standard corresponding to the assigned NAICS code and 8(a) sole source contracts, provided the protégé qualifies as small for the procurement and meets additional provisions for 8(a) sole source contracts. Under SBA’s All Small program, a mentor and protégé may participate in a joint venture as a small business for any government prime contract or subcontract, provided the protégé qualifies as small for the procurement. Such a joint venture may seek any type of small business contract (i.e., small business set-aside, 8(a), HUBZone, service-disabled veteran-owned small business, or women-owned small business) for which the protégé firm qualifies and, within certain requirements, receives exclusion from the affiliation rules for the duration of the joint venture agreement. Procuring activities may provide incentives in the contract evaluation process to a mentor that will provide significant subcontracting work to its SBA-approved protégé firm, where appropriate. In addition to the contact named above, Marshall Hamlett (Assistant Director), Abiud Amaro Diaz, Isidro Gomez, Farrah Graham, Marc Molino, Jessica Sandler, and Jennifer Schwartz made key contributions to this report.

DOD's Pilot Mentor-Protégé Program was first authorized as a pilot program in 1990 and has been repeatedly renewed as a pilot program, most recently through September 30, 2018. For fiscal year 2016, total funding for this program was $28.3 million. The joint explanatory statement to accompany the National Defense Authorization Act for Fiscal Year 2016 includes a provision for GAO to report on DOD's pilot program.
This report examines, among other things, (1) DOD's procedures for approving mentor-protégé agreements and (2) DOD's performance measures for the program. GAO analyzed DOD guidance, reviewed a randomly selected probability sample of active DOD mentor-protégé agreements and estimated their completeness at a 95 percent confidence interval, reviewed DOD's annual program reports for fiscal years 2011 through 2015, and interviewed agency officials. The Department of Defense (DOD) relies on military services and agencies (DOD components) to approve the agreements that establish relationships between participants in its Pilot Mentor-Protégé Program. This program provides incentives for major defense contractors (mentors) to provide assistance to small disadvantaged firms (protégés) in an effort to enhance their capability to compete for federal and commercial contracts. However, DOD does not have reasonable assurance that approved agreements include all elements required by the program's regulations and policies. These elements include, among others, the protégé's industry code and the signature and date of the mentor and protégé. These elements serve a variety of purposes—for example, the industry code is used to determine the protégé's eligibility to participate in the program, and the signature and date of mentor and protégé are required in order for the agreement to be legally binding. Based on a review of a randomly selected probability sample of 44 of the 78 active mentor-protégé agreements in place as of June 2016, GAO estimates that 27 percent of agreements were missing required elements. For example, GAO estimates that 25 percent of agreements were not signed by both the mentor and protégé. Federal internal control standards state that management should implement control activities through policies and practices, including periodically reviewing control activities for continued effectiveness. 
DOD's Office of Small Business Programs (OSBP) manages the program and oversees program policies and procedures. However, OSBP does not review the DOD components' processes for approving mentor-protégé agreements and therefore has not taken appropriate oversight actions to provide reasonable assurance that agreements meet all requirements. As a result, the components have approved agreements that do not include required elements, and OSBP cannot ensure that the requirements are serving their various purposes. DOD's fiscal year 2011 through 2015 annual reports on its Pilot Mentor-Protégé Program include performance measures for several areas, but DOD lacks performance goals and other measures needed to effectively assess the program. Some of these measures show that during this period, protégés' revenue, number of employees, and DOD prime and subcontract awards increased while protégés participated in the program, but revenues and employment levels decreased in the 2 years after their participation ended. GAO found that DOD has not established any measurable goals for these measures. In addition, DOD collects information from mentors on how they have enhanced the capabilities of protégés, but DOD does not include this information in the program's annual report and has not developed performance measures or goals related to this information. GAO has previously identified performance measurement as a best practice that allows organizations to track progress and gives managers information to plan needed improvements. DOD officials told GAO they are working to develop measures that better indicate program outcomes, but as of January 2017 they had not established such measures. Without performance goals and related measures, DOD may be limited in its ability to analyze the effectiveness of the program, and Congress may not have information needed to inform future decisions regarding the program. 
GAO recommends that DOD (1) conduct periodic reviews of the components' processes for approving agreements and address identified deficiencies, as appropriate, and (2) develop performance goals and related measures that are consistent with the program's stated purpose. DOD concurred with GAO's recommendations. |
Figure 1 provides an overview of CBP organizational components that had prime contractor invoice review, approval, and payment responsibilities under the original SBInet program. CBP’s Contracting Division, which is part of the Office of Administration Procurement Directorate’s Enterprise Contracting Office, was assigned SBInet program prime contractor management and administrative responsibilities, including receiving and approving or rejecting invoices. The Office of Technology Innovation and Acquisition within CBP was assigned responsibility for managing key program and contractor oversight acquisition functions associated with SBInet, such as verifying and accepting contract deliverables. This office was also to work closely with CBP’s SBI Contracting Division. Contracting officers (CO) and contracting officer’s technical representatives (COTR) in the SBI Contracting Division were assigned responsibility for reviewing invoices and maintaining contract files. The SBI Business Operations Division was responsible for providing CO-approved invoices to CBP’s National Finance Center (NFC), which processed the invoices for payment. In carrying out its assigned prime contractor invoice-related responsibilities under the original SBInet program, CBP relied on the FAR and HSAR, as well as DHS, CBP, and SBI program policies and procedures. The overarching policy for federal contracting is the FAR. DHS issued a supplemental regulation, the HSAR, and an acquisition manual for DHS contracting, and in October 2008, CBP’s SBI Contracting Division issued standard operating procedures (SOP) setting out required review, approval, and processing steps for all SBI prime contractor invoice processing. Figure 2 provides an overview of CBP’s process for reviewing, approving, and paying prime contractor invoices under the original SBInet program. 
CBP’s process for reviewing, approving, and paying prime contractor invoices under the original SBInet program relies on both preventative and detective controls. Generally, preventative controls are more efficient and effective than detective controls. CBP’s preventative controls began with SBInet’s prime contractor submitting an invoice to the SBI Contracting Division’s CO, COTR, and CBP’s NFC for review. After reviewing the invoice, the COTR was responsible for recommending whether the CO should approve or reject the invoice. If the CO and COTR approved an invoice, the CO was to notify the SBI Business Operations Division within the Program Management Office to check for fund availability. If funds were available, CBP’s NFC was to be notified to process the invoice for payment. Further, while not required, CBP’s COs may request the Defense Contract Audit Agency (DCAA), a defense agency within the U.S. Department of Defense, to conduct an invoice review or rate verification for any invoice. In addition, CBP officials told us that the CO will always request DCAA to conduct a closeout audit (a detective control) for any task order, although this is not required. DHS’s Office of Procurement provides that the CO submit a memorandum to request a final audit for the entire IDIQ contract or any of its delivery order or task order components. Under the original SBInet program, CBP took actions intended to establish internal controls over contractor payments. CBP established SOPs setting out required contractor invoice review, approval, and processing steps for CBP’s COs and COTRs to follow. These procedures were based on requirements in the FAR. We identified the need to improve CBP’s controls in two important areas. 
Specifically, we identified the need to improve CBP’s preventative controls over payments to the SBInet program contractor with respect to (1) requiring invoices with sufficiently detailed data supporting billed costs to facilitate effective invoice review and (2) establishing specific, sufficiently detailed, risk-based invoice review procedures to enable full, effective, and documented reviews prior to making payments. Because CBP’s preventative controls were not fully effective, the agency will continue to (1) be impaired in its ability to provide assurance that the estimated $780 million already paid to the prime contractor under the original SBInet program was proper and allowable, in the correct amount, and only for goods and services provided and (2) rely heavily on detective controls (primarily contract closeout audits) for assurance concerning the propriety of SBInet program disbursements. Further, until CBP takes action to improve its preventative controls, it will continue to be impaired in its ability to effectively review the estimated $80 million obligated, but yet to be disbursed, to the prime contractor under the original SBInet program. In addition, our findings have implications as possible “lessons learned” for DHS to consider and address as appropriate in designing and implementing contract payment controls for its new technology portfolio approach. Standards for Internal Control in the Federal Government and related guidance provide that an entity’s internal controls should enable it to verify that ordered (invoiced) goods and services were proper and met the government’s specifications. CBP’s policies and related SOPs applicable to the original SBInet program required the prime contractor to submit invoices showing total costs incurred by cost element (i.e., direct labor, direct materials, major subcontracts, other direct costs, overtime premium, overhead, travel, and general and administrative expenses). 
However, CBP’s policies and procedures did not require the invoices to include any additional supporting detail. Not requiring such detail not only precluded us from testing whether invoiced costs complied with the SBInet contract and were properly supported, but, more important, resulted in numerous instances in which CBP’s COs and COTRs did not have the detailed support they needed to effectively review the SBInet contractors’ invoices. For example, in one instance a CO requested additional detailed information such as travel dates and travel destinations to review the reasonableness of a lump sum invoiced cost amount for travel. Figure 3 shows an example of a SBInet prime contractor invoice submitted and paid for costs incurred for the period from September 12 through 25, 2008. Figure 3 also highlights lump sum invoiced costs for the “Direct Labor” and “Travel” cost elements. In this example, the SBInet prime contractor billed, and CBP paid, a total of $3,705,718.70, including $1,518,873.38 for the period for direct labor without any supporting details such as the hours worked and labor rate category. Supporting details are necessary to allow reviewing officials to determine, for example, whether the appropriate rate was charged and to assess the reasonableness of the hours charged. Similarly, for the $108,148.57 billed for travel for the period, the contractor did not provide supporting details necessary for a reviewing official to assess the amount claimed, such as the purpose of the trip and travel destination. Supporting details help CBP’s COs and COTRs assess the propriety of invoice cost elements billed to the government, and effectively review the prime contractor’s invoices. Lacking sufficiently detailed data supporting the original SBInet contractor’s invoiced costs, we were unable to determine whether the 99 invoices we sampled for review were proper and in compliance with original SBInet program contract provisions. 
Our review identified numerous instances of CBP CO and COTR frustration when they were unable to obtain detail to support SBInet contractor lump sum invoiced costs, despite repeated requests. The SBInet prime contractor denied these requests on the basis that CBP’s policies did not require supporting detail. CBP paid the invoiced amounts in all cases. In November 2009, CBP requested, and the SBInet prime contractor began providing, some additional information with its invoices submitted under the original SBInet program task orders. For example, the prime contractor included additional information on work performed for the invoice period. However, the additional information the contractor provided did not include sufficient additional detail needed to support an effective review of invoiced costs, such as hours worked, labor rate category, purpose of travel, or travel destination. CBP could have relied on a provision of the contract to obtain additional support for lump sum invoiced costs. That is, as authorized by the contract’s Allowable Cost and Payment Clause (FAR 52.216-7 (a) (1)), CBP could have required the SBInet contractor to provide, in such form and with reasonable details, support for lump sum invoiced cost element claims. CBP and SBI Contracting Division management officials told us they were aware of their ability to obtain supplemental detailed supporting cost information under the FAR Allowable Cost and Payment clause. However, they also told us that CBP made a business decision for the overall SBI program (including the original SBInet program) not to request such detailed supporting data from its prime contractor, but rather to rely on other oversight mechanisms (such as closeout audits) to help identify any contractor billing issues. Closeout audits are less effective as a control to identify or correct any contractor payment issues because they may not be conducted until a number of years after completion of a contract. 
The contractor’s ability to repay any improper payments may deteriorate, responsible prime contractor officials may change, their memories may fade, or needed supporting data may be lost. As provided by Standards for Internal Control in the Federal Government, well-developed and consistently implemented policies and procedures are critical in providing reasonable assurance that management’s directives are carried out and program risks, such as the risk of improper payments, are minimized. CBP’s SOP applicable to COs’ and COTRs’ reviews of SBInet contractor invoices provided general guidance that COs and COTRs were to “evaluate invoices to certify receipt of the product or service in accordance with the terms of the contract or order, as well as the accuracy and validity of the elements in the invoice.” However, CBP’s procedures for reviewing prime contractor invoices submitted under the original SBInet program were not sufficiently detailed and appropriately risk-based to enable consistent, effective, and documented invoice reviews. CBP’s prime contractor invoice review procedures under the original SBInet program did not identify the specific review steps required for COs and COTRs to carry out and document effective, risk-based reviews that could reasonably ensure that the SBInet contractor’s invoices were in the correct amount and accurately reflected all and only allowable goods and services as provided for under the original SBInet program. For example, CBP’s SOP for reviewing payments to the prime contractor under the original SBInet program did not reflect such specific review steps as how to consider the relative risks of and review invoice cost elements (including major subcontracts, direct materials, direct labor, and other direct costs); what qualifies as sufficient supporting evidence for amounts invoiced by the prime contractor; and how to review invoice credit amounts and contract reserve adjustments. 
SBI’s prime contractor reported to CBP that it met two of its six small business subcontracting goals that were identified in the prime contractor’s subcontracting plan for the reporting period ended March 31, 2010. The prime contractor reported that it was unable to meet the remaining small business goals primarily as a result of non-small business contract awards necessitated by the Secure Fence Act of 2006, as amended. Specifically, to obtain the material needed to meet the 2006 statutory directive, in December 2007, the prime contractor entered into a large-scale steel purchase of approximately $242 million that it told us was only available from a large business. Consequently, the SBI prime contractor reported that this steel purchase reduced the subcontract award dollars available such that it was unable to meet all of its small business contract award goals for the SBI program. The SBI prime contractor reported on its performance against small business subcontracting goals that were identified in its subcontracting plan and aligned with the categories used by the Small Business Administration (SBA) for prime contracts. That is, consistent with SBA definitions, the SBI prime contractor established five socio-economic subcategories for its SBI program: (1) Small Disadvantaged Business, (2) Woman-Owned Small Business, (3) Historically Underutilized Business Zone (HUBZone), (4) Veteran-Owned Small Business, and (5) Service-Disabled Veteran-Owned Small Business. The SBI prime contractor’s goals for each subcategory ranged from 1 to 5 percent of the total SBI program contract dollars awarded. Additionally, the SBI prime contractor established an overall small business goal of awarding 40 percent of the total SBI program contract dollars to small businesses, which included, but was not limited to, the socio-economic small business goals. 
As shown in table 1, as of March 31, 2010, the SBI contractor reported that it met subcontracting participation goals for the HUBZone and Veteran-Owned small business categories. Further, the prime contractor reported that while it had awarded a total of over $262 million in SBI program funds to small businesses, its overall small business participation rate of approximately 33 percent from November 2005 through March 31, 2010, fell short of the 40 percent subcontracting goal. SBI prime contractor officials told us they relied on self-reported subcontractor data to report on the extent to which the contractor met established small business participation targets. DHS’s January 2011 decision to end the original SBInet program has implications for contractor payment controls under both the original SBInet program and its successor program. That is, our SBInet contractor payment findings apply to the remaining residual original SBI program funds, to SBInet contract closeout audits, and as “lessons learned” for the successor SBI technology program. In January 2011, the Secretary directed CBP to end SBInet as originally conceived as a single technology solution, and instead to implement a new border security technology portfolio approach utilizing existing technologies tailored to specific sectors of the Southwest border and their varying terrains and population densities. DHS announced that it intends to acquire all the technologies for the new approach through full and open competitions. The Secretary also directed that while the new technology approach should include elements of the former SBInet program where appropriate, DHS did not plan to use the current SBInet prime contract to procure any of the technology systems under the new plan. Therefore, our SBInet contractor payment findings are directly relevant to the remaining $80 million in original SBI program funds obligated but not yet disbursed. 
As discussed previously, CBP has an opportunity to improve the design of its preventative controls with respect to the remaining funds to be paid to the original SBInet program prime contractor. Further, our findings have implications for the contract closeout activities associated with the contract and task orders under the original SBInet program. Given our findings on the design of prime contractor payment controls under the original SBInet program, prompt action to complete closeout audits related to payments to the prime contractor under the original SBInet program contract and task orders is imperative. While CBP has the authority to do so, as of February 2011 it had not yet requested that DCAA conduct closeout audits on any of the original SBInet program contracts and related task orders. CBP actions to timely request and monitor effective completion of these audits are important because it may take several years for a contractor to close its books and additional time for DCAA to review the final rates applicable in each calendar year. Also, in requesting and monitoring contract closeout audits, it will be important that CBP also provide information on our findings so that DCAA can adjust its planned review and testing procedures accordingly. However, CBP has not yet established a monitoring mechanism to follow up on the status of any DCAA closeout audits. Internal control standards provide that any previously identified deficiencies related to an entity or program should be considered in planning future audits of the entity or program. They also state that an entity should establish performance monitoring activities. Such monitoring represents an essential internal control activity that can be used to help assess the effectiveness of CBP controls over contractor payments (including vulnerability to, and recovery of, any improper payments), and whether any additional follow-up actions are necessary. 
Finally, our findings concerning payments to the prime contractor under the recently ended SBInet program are useful as “lessons learned”: the weaknesses we identified in the design of CBP’s prime contractor payment policies and procedures should inform the design and implementation of appropriate controls over contractor payments under the new technology portfolio approach. Effective controls over contractor payments are essential to helping provide assurance that SBInet program funds were disbursed only for authorized goods and services, and in the correct amounts. Because preventative controls are generally more cost-efficient and effective than detective controls, timely actions to strengthen controls in this area are particularly important with respect to the remaining $80 million in original SBInet program funds not yet disbursed. Furthermore, given the nearly $800 million in contract payments CBP already disbursed, full and timely completion of detective controls, particularly closeout audits, will also be essential in providing reasonable assurance that SBInet program funds were disbursed only for authorized goods and services and in the correct amounts. As such, it will be important for CBP to take prompt actions to (1) request that DCAA design and conduct closeout audits recognizing the need to strengthen detective controls and (2) establish a mechanism to coordinate and track completed closeout audits to ensure that such audits are fully effective and completed in time to effectively address any errors or improper payments identified. 
Further, our findings concerning the design of CBP’s controls over payments to the prime contractor under the recently ended SBInet program serve as “lessons learned” to be considered in designing and implementing controls as part of the newly announced technology portfolio approach. We recommend that the Secretary of Homeland Security direct CBP’s SBI Contracting Division Director to take the following five actions. With respect to the remaining funds not yet disbursed under the original SBInet contract: Revise CBP’s SBI SOPs to require the SBInet contractor to submit data supporting invoiced costs to CBP in sufficient detail to facilitate effective CO and COTR invoice review. Revise CBP’s SBI SOPs to include specific, risk-based steps required for COs and COTRs to properly review and approve contractor invoices to ensure that they accurately reflect all program costs incurred, including specifying required documentation of such review and approval. With respect to closeout audits under the original SBI prime contract and any task orders that receive closeout audits under DHS’s SOPs: Request that DCAA perform closeout audits as expeditiously as possible, including providing information on the contractor payment control findings concerning the original SBInet program that we identified for consideration in determining the extent and nature of DCAA testing required as part of such audits. Establish procedures for coordinating with DCAA to monitor the status of closeout audits related to the original SBInet program. With respect to the new technology portfolio approach: Document the consideration and incorporation, as appropriate, of lessons learned based on our findings on the design of controls over payments to the original SBInet contractor in designing and implementing contract provisions and related policies and procedures for reviewing and approving prime contractor invoices. 
Such provisions should provide for obtaining sufficiently detailed data supporting invoiced costs to support effective invoice reviews and include the specific, appropriately risk-based steps required for COs and COTRs to carry out an effective contractor invoice review. In commenting on a draft of this report, DHS concurred with two of our recommendations and concurred in principle with the remaining three. DHS agreed that the government needs to perform adequate review of contractor requests for interim and final payments and that the government should maintain effective and repeatable processes for risk-based reviews to provide effective “preventative” management controls. With respect to the two recommendations with which it concurred, DHS cited actions under way to provide more focused, risk-based invoice review procedures and to incorporate lessons learned from past contractor invoice review experience into policies and processes for invoice review under the new technology portfolio approach. For the three recommendations with which it concurred in principle, DHS expressed concerns with respect to the cost-effectiveness or appropriateness of the recommended actions. Specifically, regarding our recommendation to revise CBP’s procedures to require the SBInet contractor to submit data supporting invoiced costs to CBP in sufficient detail to facilitate effective CO and COTR invoice review, CBP stated that it plans to enhance its current required review process by June 30, 2011, to provide copies of all contractor invoices directly to DCAA. However, DHS commented that requiring the prime contractor to submit substantial supporting documentation data where controls are already in place is not cost-effective. In this regard, we modified our recommendation by deleting examples of details to accompany invoices in order to allow CBP flexibility to decide specifically what detailed supporting data are needed. 
Nonetheless, as evidenced by numerous CO and COTR requests for more detailed data to support their invoice reviews discussed previously in our report, we continue to believe that the contractor should be required to provide information in sufficient detail to facilitate invoice reviews that can function as effective preventative controls in this area. DHS also concurred in principle with our recommendations to request and monitor the expeditious completion of DCAA closeout audits. DHS agreed that there is a need to close out contracts as soon as practical and plans to continue to discuss with DCAA the importance of completing annual incurred cost audits so that contracts can be closed. However, DHS commented that DCAA management, not DHS, ultimately determines the completion of these audits. As discussed in our draft report, the focus of our recommendations was on DHS assisting DCAA in efficiently and effectively carrying out its responsibilities. Specifically, we recommended that DHS provide DCAA with information on our findings concerning the SBInet contractor’s invoices to help focus its audit work and coordinate with DCAA to monitor the status of closeout audits. We continue to believe that DHS and CBP need to establish a mechanism to monitor completed closeout audits to ensure that such audits are fully effective and completed in time to effectively address any errors or improper payments identified. By strengthening their monitoring of the status of DCAA closeout audits, DHS and CBP officials could better help ensure that corrective actions and lessons learned are effectively implemented and adopted, as appropriate. We therefore believe that these two recommendations remain valid. We also made changes as appropriate throughout the draft in response to DHS technical comments. DHS comments are reprinted in appendix I. 
We are sending copies of this report to the Chairmen and Ranking Members of the Senate and House Committees on Appropriations and other Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. We will also send copies to the Secretary of Homeland Security, the Commissioner of U.S. Customs and Border Protection, and the Director of the Office of Management and Budget. The report also is available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-9095 or at raglands@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the contact named above, Chanetta Reed, Assistant Director; Kwabena Ansong; Heather Dunahoo; Nicholas Epifano; Aaron Livernois; and Stephen Lowery made key contributions to this report. | In 2005, the Department of Homeland Security (DHS) initiated a multibillion-dollar contract to secure part of the nation's borders, the Secure Border Initiative (SBI). At that time, SBI was to include a single solution technology component; SBInet. DHS assigned the U.S. Customs and Border Protection (CBP) responsibility for overseeing the SBI contract, including SBInet. In January 2011, DHS announced that it was ending SBInet, and replacing it with a new technology portfolio. GAO was asked to (1) assess CBP's controls over payments to the prime contractor under the original SBInet program, and (2) provide information about the SBI program prime contractor's reporting against small business subcontracting goals. GAO assessed CBP controls against federal standards for internal control and relevant federal regulatory provisions, and summarized data on contractor performance against small business contracting goals. 
GAO's review of CBP's controls over payments to the prime contractor under the original SBInet program identified the need to improve controls in two critical areas. Specifically, GAO found that CBP's design of controls for SBInet contractor payments did not (1) require invoices with sufficiently detailed data supporting billed costs to facilitate effective invoice reviews or (2) provide for sufficiently detailed, risk-based invoice review procedures to enable effective invoice reviews prior to making payments. Although CBP's established procedures were based on the Federal Acquisition Regulation (FAR), GAO identified numerous instances of CBP contracting officers lacking detailed support in the SBInet contractor invoices they received for review. Because CBP's preventative controls were not fully effective, the agency will continue to (1) be impaired in providing assurance that the reported $780 million it already paid to the contractor under the original SBInet program was allowable under the contract, in the correct amount, and only for goods and services provided, and (2) rely heavily on detective controls (such as timely, effective contract closeout audits) for all SBInet funds disbursed. Further, timely action to improve CBP's preventative controls is critical for the estimated $80 million in original SBInet program funds yet to be disbursed. Also, in light of the recent DHS announcement that it is replacing the originally conceived SBInet program with a new technology portfolio-based approach, GAO's findings concerning weaknesses in CBP's design of controls over payments to the prime contractor under the recently ended SBInet program can serve as "lessons learned" to be considered in designing and implementing controls as part of the newly announced portfolio-based approach to providing technological support to border security. 
With respect to performance against small business contracting goals, the prime contractor reported that it met two of the six small business subcontracting goals for the overall SBI program. Specifically, it reported that it met subcontracting participation goals for the Historically Underutilized Business Zone and Veteran-Owned small business categories, but was unable to meet the other four small business goals because a large steel purchase significantly reduced the subcontract dollars available for small businesses to participate in the SBI contract. GAO makes five recommendations to improve CBP controls over prime contractor payments under SBInet and the successor technology portfolio, including actions to strengthen invoice review procedures, provide more detailed support for billed costs, and better focus closeout audits. DHS concurred in principle with all recommendations, but for some, DHS also commented on the cost-effectiveness or others' role in implementation. GAO addresses these comments in the report.
The 2000 and 2002 wildland fire seasons proved to be two of the worst in over 50 years. During the 2000 fire season, almost 123,000 fires burned more than 8.4 million acres and cost the federal government over $1.3 billion. In 2002, almost 89,000 fires burned about 7 million acres, an area larger than the states of Maryland and Rhode Island combined. For decades, the federal wildland fire community pursued a policy of suppressing all fires as soon as possible. Over the years, suppressing fire in areas where it naturally occurred has caused an increase in the volume of brush, small trees, and other vegetation. The increase in such “forest fuels,” combined with a severe drought in much of the nation over the past few years, has increased the severity of wildland fires. The result in some instances has been catastrophic. In 2002, the Rodeo-Chediski fire in Arizona, the Hayman fire in Colorado, and the Biscuit fire in Oregon and California became the largest fires in those states in more than a century. To deal with this threat, the administration asked the Forest Service and Interior to recommend how best to respond and how to reduce the impacts of such fires in the future. The resulting report and the associated implementation documents became known as the National Fire Plan. This blueprint recommended that the Congress substantially increase funding for several key activities, such as suppressing wildland fires and reducing the buildup of unwanted hazardous forest fuels. Of the federal agencies involved with helping to reduce the threat posed by wildland fires, the Forest Service is by far the most significant in terms of the broad range of forest activities that it is responsible for and the public attention it receives. Compared with the other federal land management agencies in fiscal years 2001 and 2002, the Forest Service received more than half of all funding provided for forest fuels reduction activities. 
For these fiscal years, the Congress provided the Forest Service with $414 million for reducing hazardous fuels—the other land management agencies received $381 million combined. The Forest Service is responsible for managing over 192 million acres of public lands—nearly 9 percent of the nation’s total surface area and about 30 percent of all federal lands in the United States. In carrying out its responsibilities, the Forest Service traditionally has administered its programs through nine regional offices, 155 national forests, 20 grasslands, and over 600 ranger districts (each forest has several districts). Figure 1 shows a map of the national forests and Forest Service regions. The National Environmental Policy Act requires the Forest Service, and all other federal agencies, to assess and report on the likely environmental impacts of any proposed land management activities that may significantly affect environmental quality. For example, certain proposed Forest Service activities, such as fuels reduction projects, timber sales, and grazing allotments, may require such environmental analysis and reporting. More specifically, if a proposed activity is expected to significantly impact the environment, the Forest Service is required to prepare an environmental impact statement. If, however, a proposed activity is unlikely to have a significant effect on the environment, the Forest Service is not required to prepare an environmental impact statement—such activities are classified as categorical exclusions. When the Forest Service is not sure whether an activity will have a significant impact on the environment, the agency prepares an intermediate-level analysis called an environmental assessment. If an environmental assessment determines that the activity will significantly affect the environment, the Forest Service prepares an environmental impact statement. (See fig. 2.)
Under certain circumstances, the public has a right to administratively appeal Forest Service decisions. These appeals must be evaluated by the Forest Service within prescribed time frames and could result in decisions being reversed and the associated land management activities being substantially revised or even cancelled. Generally, the public can appeal decisions associated with environmental impact statements or environmental assessments. Decisions associated with categorical exclusions are generally not appealable. Further, as a general rule, once the administrative appeals process is complete, the public can litigate any decision, including categorical exclusions, in federal court. Controversy has surrounded this issue for some time. On the one hand, critics have asserted that administrative appeals and litigation are stopping or unnecessarily slowing the Forest Service’s decision-making processes and its efforts to reduce forest fuels on federal lands. They expressed the view that many appeals are “frivolous” and brought for the purpose of frustrating, rather than improving, land management actions, and that they greatly increase the costs of managing the national forests. Supporters of the current process, on the other hand, have responded that appeals have not been excessive or unwarranted, that few appeals are frivolous, and that the current process for handling appeals is adequate. Supporters further assert that the Congress intended the federal land management process to include administrative reviews of agency decisions to (1) ensure public participation in the decision-making process and (2) ensure that agency managers adequately consider the various factors and policies impacting the environmental health of the nation’s lands. Recent administrative rule changes and legislative proposals modify or would modify the current appeals process and exempt certain projects from the process.
In August 2002, the administration announced the Healthy Forest Initiative, which has been controversial as well, with some regarding it as an effort to reduce unnecessary red tape and needless delays and others considering it a tool to increase logging activity. The initiative is intended to help reduce the threat of catastrophic wildfires and improve the health of the national forests by, among other things, streamlining the planning and appeals processes. In particular, recent administrative rule changes modify the appeal procedures and establish new categorical exclusions for certain fuels reduction projects. The Congress is also considering legislation to, among other things, exempt certain fuels reduction activities from the existing appeal requirements. The proposed legislation would require the Secretary of Agriculture to issue regulations establishing a separate administrative process to address disputes concerning these projects. The debate surrounding the Healthy Forest Initiative centers on the extent and frequency of appeals and litigation of fuels reduction activities. However, because the Forest Service does not have a national database to track both its decisions involving forest fuels reduction activities and the extent to which they were appealed or litigated, we were asked to develop this information. The information in this report provides these data for fiscal years 2001 and 2002. For fiscal years 2001 and 2002, the national forest managers reported that there were 818 decisions involving forest fuels reduction activities. These decisions affected almost 4.8 million acres of national forest land. Most of these decisions were excluded from detailed environmental impact analysis because the Forest Service determined that they had little or no significant impact on the land.
Of the 818 decisions involving forest fuels reduction activities, the forest managers reported that 52 of the decisions (about 6 percent) were expected to have significant environmental impacts, thus requiring the preparation of environmental impact statements. About 280 of the decisions (about 34 percent) initially had the potential for some environmental impact and required the preparation of environmental assessments. All of the remaining decisions (486 or about 59 percent) involved activities that had no or only minor environmental impacts and, as such, were categorically excluded from documentation in an environmental assessment or an environmental impact statement. In reporting these data, it is important to emphasize that the Forest Service does not have a uniform definition of a fuels reduction activity. The lack of a uniform definition is an important limitation because it could affect the consistency of the data reported to us by the national forests in terms of which activities are identified as fuels reduction projects. Accordingly, if the supporting Forest Service decision documents explicitly stated that the purpose of the activities was fuels reduction, we accepted the decision. However, if the decision documents did not include an explicit discussion of fuels reduction, we did not accept the decision. Many activities have the practical effect of reducing forest fuels, but the purpose may be for something other than fuels reduction. For example, a tree thinning activity may reduce fuels, but the stated purpose of the project may be to treat an insect infestation. If so, fuels reduction would not be a designated purpose of the activity, and the decision was not included in our analysis. In addition, a commercial timber harvest will reduce fuels by removing trees, but the stated purpose may be commodity production. If so, the decision was not included in our analysis. 
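As a quick plausibility check, the breakdown reported above (52 environmental impact statements, 280 environmental assessments, and 486 categorical exclusions) can be tallied with a short script; the figures come from this report, and the code itself is only illustrative:

```python
# Breakdown of the 818 fuels reduction decisions by level of required
# environmental analysis (counts as reported for fiscal years 2001-2002).
decisions = {
    "environmental impact statement": 52,   # significant impacts expected
    "environmental assessment": 280,        # impacts initially uncertain
    "categorical exclusion": 486,           # no or only minor impacts
}

total = sum(decisions.values())
assert total == 818  # matches the report's total

for level, count in decisions.items():
    print(f"{level}: {count} ({round(100 * count / total)}%)")
```

Rounding the shares reproduces the "about 6, 34, and 59 percent" figures cited in the text.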
If the commercial timber sale or thinning activities included a stated purpose of reducing fuels, the decision was included in our analysis.

Amount of Acreage Affected

The forest fuels reduction decisions for fiscal years 2001 and 2002 covered almost 4.8 million acres of national forest land. Of the 4.8 million acres, the forest managers reported that 0.3 million acres (about 7 percent) involved activities that were expected to have significant environmental impacts, thus requiring the preparation of environmental impact statements. About 1.5 million acres (about 31 percent) involved activities that initially had the potential for some environmental impact and required the preparation of environmental assessments. All of the remaining acreage (3.0 million or about 62 percent) involved activities that had no or only minor environmental impacts and, as such, were categorically excluded from preparation of a detailed environmental impact analysis. There are a few limitations to the acreage data. The 4.8 million acres does not correspond to the number of acres actually treated in fiscal years 2001 and 2002. Once a decision is made and documented, there are many reasons that activities covered by a decision may be delayed or not implemented, including funding availability, personnel availability, weather conditions, and administrative appeals or litigation. In addition, the national forests may have submitted more than one decision with activities on the same area of land. Therefore, the 4.8 million acres may include overlapping acreage. Further, the national forest managers reported decisions involving personal firewood activities, including one large project from the Tonto National Forest in Arizona that could potentially skew the acreage data. Under the personal firewood program, forest managers designate areas where the public can obtain a wood cutting permit and gather firewood for personal use.
Forest managers can identify all of the acreage available for firewood removal under this program as fuels reduction activities. However, it is possible that the public may collect only firewood that is easily accessible, such as near roads and trails, rather than covering the entire designated area. One decision from the Tonto National Forest in Arizona designates 1 million acres as eligible for firewood removal. These 1 million acres are 21 percent of the total acreage reported as treated or planned to be treated for fuels reduction activities for all national forests. According to Forest Service officials, it is unlikely that the public will remove fuels from all 1 million acres. Table 1 shows the number of decisions with forest fuels reduction activities, the amount of acreage affected, and their environmental impact significance. Of the 818 decisions involving forest fuels reduction activities, 24 percent were appealed. However, more than half were not subject to appeal because they were categorically excluded from documentation in an environmental impact statement or environmental assessment. Overall, of the 818 total decisions, 332 were appealable because they had environmental impacts that were either uncertain or significant and required the preparation of an environmental assessment or environmental impact statement. Of these, 194 (58 percent) were appealed. These appealed decisions affected about 950,000 acres. In addition, 25 decisions (about 3 percent of all decisions) were litigated. The litigated decisions affected about 111,000 acres. In fiscal years 2001 and 2002, 486 (59 percent) of all decisions involving fuels reduction activities were not subject to appeal. The remaining 332 decisions involved forest fuels reduction activities that were generally more controversial because they were expected to have significant environmental impacts or initially had the potential for significant environmental impacts.
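The two appeal rates quoted above (58 percent of appealable decisions, 24 percent of all decisions) are mutually consistent, as a one-line calculation shows; the counts are taken from this report, and the script is only an illustrative check:

```python
# Appeal rates for fiscal years 2001-2002, using the counts in this report.
TOTAL_DECISIONS = 818   # all decisions with fuels reduction activities
APPEALABLE = 332        # decisions requiring an EA or an EIS
APPEALED = 194          # appealable decisions that were actually appealed

rate_of_appealable = round(100 * APPEALED / APPEALABLE)  # share of appealable decisions
rate_of_all = round(100 * APPEALED / TOTAL_DECISIONS)    # share of all decisions

print(rate_of_appealable)  # 58 -- "58 percent" of appealable decisions
print(rate_of_all)         # 24 -- "24 percent" of all decisions
```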
Of the 332 appealable decisions, 194 were appealed, affecting over 950,000 acres. Table 2 summarizes the number of decisions appealed by decision type and the number of acres affected. In reviewing the appeals data in table 2, it is important to point out that many types of land management activities may be analyzed and included as part of one decision. A single decision may include activities such as timber sales, road construction, grazing permits, and habitat improvement in addition to fuels reduction activities. As a result, when an appeal is pursued, it may or may not be based on concerns about fuels reduction activities. Under the Forest Service appeal regulations, the entire decision is appealed, not the individual activities. Therefore, the public may object to only one activity in a decision, but all land management activities covered by the decision will be affected by an appeal. For example, a single decision may contain activities involving commercial thinning, prescribed burning, stream improvements, road construction, and a trail closure. An appellant may object to the road construction activity but not the forest thinning activities. However, all of the activities covered by a decision will be affected until the appeal is resolved. There is no limit to the number of appeals that can be filed on an individual decision. In total, appellants filed 285 appeals on the 197 appealed decisions. One hundred and thirty-four decisions had 1 appeal, 48 decisions had 2 appeals, 10 decisions had 3 appeals, 3 decisions had 4 appeals, 1 decision had 5 appeals, and 1 decision had 8 appeals. Appendix III provides information on appeal rates for each Forest Service region. All decisions can be litigated. In fiscal years 2001 and 2002, 25 decisions (about 3 percent) were litigated. These litigated decisions affected about 111,000 acres (about 2 percent). Not surprisingly, decisions with significant environmental impacts were litigated more often.
Of the 52 decisions where the Forest Service was required to prepare environmental impact statements, 15 (29 percent) were litigated. Table 3 provides a summary of the decisions litigated and the acres affected by the litigation. Appendix III provides information on the number of litigated decisions, by Forest Service region. Of the 197 appealed decisions the Forest Service reviewed, 144 (about 73 percent) were allowed to be implemented without any changes. However, the Forest Service did not allow 38 decisions (about 19 percent) to be implemented. The Forest Service required the remaining 15 decisions (about 8 percent) to be changed prior to implementation. Of the 25 litigated decisions, 19 have been resolved and 6 were still ongoing at the time of our review. Most of the appellants and plaintiffs were interest groups. Generally, appealed decisions have one of three outcomes. First, the Forest Service can allow a decision to be implemented without any changes. Second, the Forest Service can allow a decision to be implemented, but only if certain, specified changes are made. Third, the Forest Service can prevent a decision from being implemented. There are a variety of factors that can affect the disposition of an appeal and lead to these outcomes. Each of these factors is specified in Forest Service regulations. Some of these factors are procedural and have little or nothing to do with the merit of an appeal, and some are based on the merit of the appeal. Table 4 provides a brief summary of the three basic decision outcomes and an explanation of the factors that can lead to various appeal outcomes. Figure 3 shows the disposition of each of the 197 appealed decisions for fiscal years 2001 and 2002: 144 decisions proceeded without changes, 15 proceeded with changes, and 38 did not proceed. Appendix IV provides a summary of the appeal outcomes, by region.
Under certain circumstances, members of the public, including private individuals and interest groups, can appeal decisions of Forest Service officers. A decision can be appealed multiple times and multiple appellants can be parties to an appeal. For example, the Little Blacktail Ecosystem Restoration Project Record of Decision issued in the Kaniksu National Forest in Idaho had three appeals; the Ecology Center, Lands Council, Kootenai Environmental Alliance, and Friends of the Pond joined in one appeal; the Alliance for the Wild Rockies filed another appeal; and a private individual filed the third appeal. In these instances, each interest group and the private individual counted as appellants—6 total appellants—even though they were appealing 1 decision and had filed 3 appeals. Due to these situations, there were 285 appeals on the 197 appealed decisions. The 285 appeals had 559 appellants. The 559 appellants included 482 appearances by 85 different interest groups, mostly environmental groups, and 77 appearances by 53 private individuals. Table 10 of appendix V lists each interest group that appeared as an appellant in fiscal years 2001 and 2002 and the number of times they appeared. Of the interest groups, 7 appeared as appellants 20 or more times. These groups include the Alliance for the Wild Rockies, Ecology Center, Forest Conservation Council, Lands Council, National Forest Protection Alliance, Oregon Natural Resources Council, and Sierra Club. Following a final decision by the Forest Service on an appeal, members of the public can file a lawsuit and seek a review of the decision from a federal district court. Plaintiffs are usually the same parties who previously appealed the decisions with the Forest Service. It may take weeks to years to resolve a case once a decision is litigated. Of the 25 litigated decisions, 6 were continuing at the time of our analysis.
For the remaining 19 cases, lawsuits for 5 decisions were dismissed because the plaintiffs and the Forest Service agreed to settle their claims. District courts reached an outcome on the 14 remaining decisions—9 decisions were decided favorably to the plaintiffs, and 5 decisions were decided favorably to the Forest Service. Both plaintiffs and the Forest Service have the option of appealing the decisions of the district court to the relevant federal court of appeals. We did not collect information on whether the decisions were appealed to a higher court. Appendix V provides information on the outcomes of litigated decisions, by region. Multiple plaintiffs can be parties to a lawsuit. Of the 25 litigated decisions, 26 different interest groups and one private individual were plaintiffs. The interest groups were primarily environmental groups. Five groups were plaintiffs in 4 or more decisions: the Ecology Center, Sierra Club, Oregon Natural Resources Council, Hell’s Canyon Preservation Council, and Native Ecosystems Council. Most of the appeals that occurred in fiscal years 2001 and 2002 were processed within the prescribed time frames. Specifically, of the 285 appeals that were filed, about 79 percent were processed within the prescribed 90 days. The applicable laws and regulations establish procedures for public notice of a decision and the time frames for appeal. Once the public is given notice of a decision, appellants have 45 days to file an appeal. If an appeal is filed, the Forest Service has 45 days from the close of the appeal period to determine the outcome of the appeal. In total, the Forest Service has up to 90 days to resolve an appeal once the agency notifies the public of a decision. While the agency is determining the disposition of an appeal, a Forest Service official is required to contact an appellant and offer to meet informally to dispose of the appeal. 
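The 90-day window described above is simply two consecutive 45-day periods; ordinary date arithmetic makes the deadlines concrete (the notice date below is hypothetical), and the same reported counts reproduce the on-time processing rate discussed in the text:

```python
from datetime import date, timedelta

# Hypothetical date on which the public is notified of a decision.
notice = date(2001, 10, 1)
appeal_deadline = notice + timedelta(days=45)                # last day to file an appeal
disposition_deadline = appeal_deadline + timedelta(days=45)  # Forest Service ruling due
assert (disposition_deadline - notice).days == 90            # the 90-day total

# On-time processing, as reported: 226 of 285 appeals resolved within 90 days.
on_time, total_appeals = 226, 285
print(round(100 * on_time / total_appeals))  # 79 -- "about 79 percent"
```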
Figure 4 provides a flowchart showing the appeals process that applied during fiscal years 2001 and 2002. Of the 285 appeals filed in fiscal years 2001 and 2002, 226 (79 percent) were processed within 90 days of the date that the decisions were made and published. In contrast, 59 appeals (about 21 percent) were not processed within 90 days. For those appeals that were not processed within the 90-day limit, the appeal processing times ranged from 91 to 240 days, with a median processing time of 119 days. The Forest Service offered several reasons for not processing the 59 appeals within the 45-day formal disposition period. These reasons included inadequate staffing, the unavailability of staff around the holiday season, and appeal backlog. We did not verify or analyze the support for the reasons that the Forest Service provided. Further, to fully understand the appeals process, it is important to understand that under certain circumstances, appellants may have more than one opportunity to appeal a decision. Once a decision is reversed or withdrawn by the Forest Service as a result of an appeal, the agency can revise and reissue the decision. This is usually done to accommodate concerns that have been raised during an initial appeal. Moreover, the Forest Service also has the option of not reissuing the decision. In our analysis, 32 decisions had been reissued. Of those reissuances, 30 were appealed again and 2 were implemented without appeal. Once a decision is reissued, the permitted processing times for handling appeals begin again. Appendix VII provides a summary of the appeals processing times for each Forest Service region. Reducing the buildup of vegetation that fuels severe fires requires vegetation management, or fuels reduction. There are four basic fuels treatment methods. These are prescribed burning, mechanical thinning, the application of chemicals/herbicides, and grazing.
Prescribed burning is the most frequently used method to reduce the accumulation of dangerous fuels on forested acres. Decisions involving the two main types of fuels treatment methods, prescribed burning and mechanical treatment, were appealed at about the same rate. A prescribed fire is one that is intentionally ignited to meet specific land management objectives. In addition to reducing the risk of wildfires, prescribed fires also are used to prepare areas for reforestation or to improve wildlife habitat. How and when a prescribed fire can be successfully conducted is influenced by many conditions, such as the type and moisture levels of vegetation, topography, temperature, wind speed, and humidity. All of these factors are to be considered and documented by fire management personnel prior to initiating a prescribed burn. Figures 5 and 6 show examples of a prescribed burn. Prescribed burning was the most frequently used fuels treatment method during fiscal years 2001 and 2002—in terms of both the number of decisions that included prescribed burning activities and the number of acres affected. Of the 818 decisions with fuels reduction activities, 570 (about 70 percent) included prescribed burns. Of the total 4.8 million acres covered by all decisions, 3.2 million acres (about 67 percent) had been or were to be treated using this method. There is a range of mechanical treatments that can be used to reduce forest fuels. Harvesting timber and removing smaller noncommercial trees and brush can accomplish fuels reduction. In addition, thinning stands of trees to reduce competition for light, moisture, and nutrients may improve forest health. Mechanical thinning is typically done using power equipment, such as bulldozers, chain saws, chippers, and mulchers. Figures 7 and 8 show examples of mechanical thinning projects. Mechanical thinning is the second most utilized method for reducing forest fuels. 
Of the 818 decisions with fuels reduction activities, 491 (about 60 percent) included mechanical treatment methods. These treatments involved 0.8 million acres—about 17 percent of all the acreage treated or planned for treatment in fiscal years 2001 and 2002. Chemical treatments are herbicides used to control and remove the hazardous buildup of forest vegetation. Herbicides are usually applied as liquids mixed with water or oil and then sprayed on the soil surface to be absorbed by the plant roots. Generally, there are four methods of applying herbicides: (1) aerial application, using helicopters or other aircraft; (2) mechanical equipment, using truck-mounted or truck-towed wand or broom sprayers; (3) backpack equipment, generally a pressurized container with an agitation device; and (4) hand application by injection, daubing cut surfaces, or application of granular formulations to the soil. Grazing animals, such as cattle and goats, can also be used to reduce the buildup of hazardous forest fuels. However, grazing is less utilized because it is increasingly competing with other uses of public lands, such as recreation, wildlife habitat, riparian management, endangered species management, mining, hunting, cultural resource protection, wilderness, and a wide variety of other uses. Chemical treatments and grazing are the least utilized treatment methods. Of the 818 fuels reduction decisions reported, 3 (less than 1 percent) included chemical/herbicide treatments, and 2 (less than 1 percent) included grazing. These two types of treatment methods affected about 700 acres—less than 1 percent of the total acres treated or planned for treatment in fiscal years 2001 and 2002. In addition to the four basic hazardous fuels treatment methods, there are other methods that are sometimes used. These other methods include activities such as cutting underbrush by hand or the public’s removal of firewood by hand. 
One hundred and twelve (14 percent) of all fuels reduction decisions in fiscal years 2001 and 2002 included these other kinds of treatments. However, while the use of the other methods was relatively infrequent, the amount of acreage affected was considerable—mostly due to the 1 million-acre personal firewood removal program from the Tonto National Forest in Arizona. There are two important points that need to be highlighted regarding this firewood removal program. First, while the project covers 1 million acres, it does not necessarily mean that firewood will be removed from all of these acres. It simply means that these acres are available for the removal of firewood. Accordingly, the extent of fuels reduction on these acres is not clear, and the acreage reported for the project may significantly overstate the area actually treated. Second, even though officials at the Tonto National Forest reported this as part of the forest fuels reduction program, Forest Service headquarters officials questioned the merit of including it in our report because they believed it skewed the data by increasing the amount of acreage having fuels reduction activities. In the final analysis, we reported this project as a fuels reduction activity because the Tonto forest officials identified it as such in their decision documents. Table 5 summarizes the fuels reduction methods used by the Forest Service in fiscal years 2001 and 2002. The columns in table 5 do not add to the total number of decisions (818) or the total amount of acreage affected (4.8 million). This occurs for two reasons. First, a decision can include prescribed burning on some acreage and another treatment method on other acreage. Second, the same acreage can be treated by more than one method. For example, an area can be thinned using prescribed burning and then be further thinned using mechanical means.
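The non-additivity of table 5's columns can be illustrated with a tiny, entirely hypothetical example: once a decision may carry more than one treatment method, per-method counts overlap and exceed the number of distinct decisions.

```python
# Hypothetical decisions, each tagged with the treatment methods it includes.
decisions = {
    "decision A": {"prescribed burning"},
    "decision B": {"prescribed burning", "mechanical"},  # two methods, one decision
    "decision C": {"mechanical"},
}

# Count decisions per method; decision B is counted under both methods.
per_method = {}
for methods in decisions.values():
    for method in methods:
        per_method[method] = per_method.get(method, 0) + 1

print(per_method)                # each method counts 2 decisions
print(sum(per_method.values()))  # 4 -- the column total exceeds...
print(len(decisions))            # ...the 3 distinct decisions
```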
Forest managers reported that 280 decisions with fuels reduction activities included acres treated or planned for treatment by both prescribed burning and mechanical methods. Appeal rates for the two main types of treatments, prescribed burning and mechanical, were about the same. Appealable decisions with a mechanical treatment component were appealed about 64 percent of the time. Appealable decisions with prescribed burning activities were appealed at about the same rate—63 percent of the time. Similarly, 34 percent of all decisions with mechanical treatment methods were appealed, and 29 percent of all decisions with prescribed burning activities were appealed. Table 6 provides a summary of the appeal rates for decisions with the different treatment methods. Analyzed by the amount of acreage affected, the data in table 7 show that decisions with prescribed burning accounted for the most appealed acreage. Appendix VIII provides data on treatment methods and appeal rates, by Forest Service region. Typically, the Forest Service contracts with other organizations to carry out fuels reduction activities in the national forests. In doing this, the agency generally uses three types of contracting mechanisms—timber sale contracts, service contracts, and stewardship contracts. A decision can use more than one type of contract to carry out fuels reduction activities. The Forest Service awards timber sale contracts to individuals or companies to harvest and remove trees from federal lands under its jurisdiction. Service contracts are awarded to contractors by the Forest Service to perform specific tasks to reduce forest fuels, such as thinning trees or clearing underbrush. Stewardship contracts are used by the Forest Service to conduct on-the-ground restoration and enhancement of landscapes with public and private entities. Service contracts are the most frequent contracting method used.
Decisions using timber sale contracts and stewardship contracts are the most frequently appealed. Forest Service timber sale contracts set forth specific terms and provisions of a sale, including the estimated volume of timber to be removed, the time period of the removal, the price to be paid to the government, and the environmental protection measures to be taken. Of the 818 total fuels reduction decisions, 278 (34 percent) involved timber sale contracts. The Forest Service also uses traditional service contracts to reduce the accumulation of fuel loads. Typically, a service contract identifies the tasks to be performed, such as removing and treating the unmarketable, cut materials. The cut materials affect the fuel loads and can be left as is, piled and burned, lopped and scattered to accelerate rotting, or removed from the site. Of the 818 total fuels reduction decisions, 356 (44 percent) involved service contracts. Stewardship contracts use a combination of service contracts and timber sale contracts to care for national forest system land. In 1998, the Forest Service was given stewardship contracting authority so that the agency could work with private and public entities to achieve federal management goals. For example, this authority provided the Forest Service with the ability to trade goods for services (such as timber in exchange for road maintenance). A stewardship contract might include prescribed burning to improve wildlife habitat or reduce forest fuels in conjunction with the sale of forest products off the same piece of land. Of the 818 total fuels reduction decisions, 41 (5 percent) involved stewardship contracts. Figure 9 shows the frequency of service, timber sale, and stewardship contracts used in decisions with fuels reduction activities. The decision counts in figure 9 do not sum to 818 because other means are also used to implement fuels reduction activities.
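The contract-type shares reported above can be reproduced directly; the counts come from this report, and because a decision can use more than one contract type, the shares deliberately do not sum to 100 percent:

```python
# Contracting mechanisms in the 818 fuels reduction decisions, as reported.
# A decision can use more than one contract type, so counts overlap.
contracts = {"timber sale": 278, "service": 356, "stewardship": 41}
TOTAL_DECISIONS = 818

for kind, count in contracts.items():
    share = round(100 * count / TOTAL_DECISIONS)
    print(f"{kind} contracts: {count} decisions ({share}%)")
```

Rounding reproduces the 34, 44, and 5 percent figures cited in the text.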
Forest Service personnel are frequently used to perform the needed work. Typically, Forest Service personnel are used in conjunction with different contract types. Of the 818 decisions, 673 (82 percent) involved some work by Forest Service personnel. Further, other means, such as contracts that utilize prison labor and contracts that collaborate with other federal agencies like the Bureau of Land Management, are also used to help reduce forest fuels. Eighty-three (10 percent) of all 818 decisions with fuels reduction activities used these other mechanisms. Decisions that are implemented through the use of timber sale contracts and stewardship contracts were the most frequently appealed. Because of the controversy that surrounds timber harvesting activities and their impact on the environment, it is not surprising that contracts for this type of activity would be scrutinized and challenged by the forest interest groups or other stakeholders. Appendix IX summarizes the contracting methods used and appeal rates, by Forest Service region. Two areas of particular interest on national forest land where fuels reduction activities can occur are in the wildland-urban interface and inventoried roadless areas. The wildland-urban interface areas are those areas where federal lands surround or are adjacent to human development and communities. In contrast, inventoried roadless areas are undeveloped areas with no or few roads. Fuels reduction activities occur more on wildland-urban interface areas than in inventoried roadless areas. Of the 818 decisions involving fuels reduction activities, 462 decisions had activities in the wildland-urban interface and 76 decisions had activities in inventoried roadless areas. Decisions with fuels reduction activities in the inventoried roadless areas are appealed more frequently. The Forest Service broadly defines the wildland-urban interface as areas where humans and their development meet or intermix with wildland forest fuels. 
There are three categories of communities that meet its definition: (1) an interface community exists where structures directly abut wildland fuels; (2) an intermix community exists where structures are scattered throughout a wildland area; and (3) an occluded community exists, often within a city, where structures abut an island of wildland fuels, such as a park or open space. Figure 10 shows an example of a community in the wildland-urban interface. Individual forest managers may or may not use the definition of wildland- urban interface that the Forest Service provides. According to the information provided by the national forests in response to our survey, most forest managers reported that they used the Forest Service’s definition or they developed their own definition. Other managers reported that they either did not have a definition or did not know if they had a definition. The inconsistent application of these definitions by forest managers should be considered when using the information reported about whether fuels reduction activities were in the wildland-urban interface. An August 2003 GAO report highlighted the fact that agencies need to define which lands are part of the wildland-urban interface. Without doing so, the Forest Service will be constrained in its ability to prioritize locations for fuels reduction treatments and to allocate funding accordingly. We recommended in the August report that the Forest Service develop a consistent, specific definition of the wildland-urban interface so that detailed, comparable nationwide data could be collected to identify the amount and location of lands in the wildland-urban interface. Development of a consistent definition will facilitate the prioritization of fuels reduction treatments. Of the 818 decisions with fuels reduction activities, the national forest managers reported 462 decisions (57 percent) had fuels reduction activities in the wildland-urban interface. 
Of these 462 decisions, 169 were appealable—that is, they were decisions analyzed in conjunction with environmental assessments or environmental impact statements. Of the 169 appealable decisions, 89 were appealed—that is, 53 percent of appealable decisions and 19 percent of all decisions with fuels reduction activities in the wildland-urban interface. The 462 decisions covered 1.5 million acres—that is, 31 percent of the total acreage (4.8 million) for all reported fuels reduction activities. Inventoried roadless areas, as the name implies, are undeveloped areas generally without roads, which the Forest Service has specifically defined. The intent of the roadless designation is to conserve these natural areas by limiting road building and logging activities. Figure 11 shows an example of an inventoried roadless area on national forest land. In contrast to the wildland-urban interface areas, roadless areas have specific boundaries, which make it much easier for forest managers to report on decisions with treatments in these areas. Of the 818 decisions, the national forests reported 76 decisions—about 9 percent of all decisions—with fuels reduction activities in roadless areas. Of these 76 decisions, 41 were appealable and 26 were appealed—that is, 34 percent of all decisions with treatments in roadless areas and 63 percent of appealable decisions. The 76 decisions covered 240,000 acres—about 5 percent of all acreage treated or planned for treatment in fiscal years 2001 and 2002. Appendix X provides information on the number of decisions involving fuels reduction activities in the wildland-urban interface and inventoried roadless areas and the frequency of appeals for each Forest Service region. We provided a draft of this report to the Forest Service for review and comment. The agency generally agreed with the information presented in the report. However, the agency did offer a few comments that it believed would help clarify some of this information.
Specifically, the Forest Service believes that we should not have included information on a 1 million acre personal use firewood program at one forest because, in its opinion, doing so unnecessarily skews the data by increasing the amount of acreage with fuels reduction activities. We did not change the report to omit this information because, as the Forest Service agrees, it was reported and documented as a fuels reduction project by the agency. Nonetheless, to ensure clarity, we highlighted in the report the unique nature of the project, where appropriate. The agency suggested that we highlight the fact that a single decision can be appealed multiple times, and that the Forest Service’s workload increases accordingly. In its comments, the agency stated that we should provide additional information on that point in the body of the report to emphasize the impact of multiple appeals on the workload of the agency. We believe this point was already addressed in the body, where we noted that there were 285 appeals on the 197 appealed decisions. In addition, we also provided a breakdown of the number of appeals per decision. Nonetheless, we did add language to the Results in Brief section of the report and the Highlights section, noting that decisions can be appealed multiple times. The Forest Service also commented that because appeal rates vary widely throughout the nation, we should add language in the narrative regarding local perceptions of appeal rates and how they can differ from the national data. The agency noted that when local groups or individuals state that many projects are held up by appeals, they are more likely referring to their experience at the local level. We believe the information needed to discern regional differences was already presented in the report; therefore, we did not make changes to the report. The Forest Service’s written comments are presented in appendix XII.
As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Agriculture, the Chief of the Forest Service, and other interested parties. We will make copies available to others upon request. This report will also be available on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report were Cliff Fowler, Curtis Groves, Richard Johnson, Roy Judy, Nicole Shivers, Patrick Sigl, and Shana Wallace. The Forest Service does not maintain its own database on the number of decisions or appeals throughout the national forest system. Accordingly, to address each of the objectives, we had to develop a national database. To do this, we used a Web-based survey of all 155 national forests. The survey focused on all Forest Service decisions with a forest fuels reduction component that were issued in fiscal years 2001 and 2002, including those that were categorically excluded from preparation of an environmental impact statement. The specific information we needed to satisfy our objectives was located at several organizational levels—headquarters, regional offices, individual forests, and district offices within each forest. For instance, information on the individual decisions, particularly the environmental impact statements and environmental assessments, was located at the forest level. Information on categorical exclusions was primarily located only at the district offices. Our survey was addressed to forest supervisors. We asked forest supervisors to gather the necessary information from the other organizational units within the Forest Service, as needed, to complete the survey.
We also asked each forest supervisor for a contact person at the forest who was familiar with the National Environmental Policy Act process requirements, since it guides land management decision-making and planning activities. This contact person served as our focal point at each forest and was responsible for providing us with survey responses and addressing the follow-up questions and documents that we requested. We developed a data collection instrument to obtain the relevant information. Appendix XI contains a copy of the instrument used to gather these data. To help us understand the decision-making and appeals and litigation processes and to help us formulate the questions for our survey instrument, we met with Forest Service personnel at headquarters in Washington, D.C.; the region 5 office in Vallejo, California; the Stanislaus and Tahoe National Forests in California; and the George Washington and Jefferson National Forests in Virginia. Once we developed the questions, we pretested the instrument at the Kootenai National Forest in Montana, the Payette and Boise National Forests in Idaho, and the Monongahela National Forest in West Virginia. We gave the forests 3 weeks to respond to the survey and granted extensions as needed. We obtained a 100 percent response rate from the forest managers. We verified the accuracy of about 10 percent of the survey responses submitted. We used a random number to identify the first decision to be verified and then selected every 10th decision submitted by the forests. After selecting a decision, we obtained the supporting decision documents, National Environmental Policy Act documents, and appeals information from the forests and verified the information submitted for the randomly selected decisions. Using this approach, we verified 85 total decisions. Any discrepancies between the survey responses and our data verification were discussed and resolved with the responsible forest official. 
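The verification approach described above—a random starting decision followed by every 10th submitted decision—is a systematic sample with a random start. A minimal sketch follows; the decision IDs are hypothetical, and the 850-item count is chosen only so the sample size matches the 85 decisions actually verified.

```python
import random

def systematic_sample(items, interval=10, seed=None):
    # Pick a random starting index in [0, interval), then take every
    # interval-th item thereafter, mirroring the report's method of a
    # randomly chosen first decision and every 10th decision after it.
    rng = random.Random(seed)
    start = rng.randrange(interval)
    return items[start::interval]

# Hypothetical decision IDs; 850 submissions sampled at 1 in 10 yield 85.
decisions = [f"decision-{i:04d}" for i in range(850)]
sample = systematic_sample(decisions, interval=10, seed=2003)
```

Because every position in the list has the same chance of selection, a 1-in-10 systematic sample gives each submitted decision roughly a 10 percent probability of being verified, whatever the random start.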
Through our data verification process, we determined that the data submitted were generally reliable. In addition to our verification of the information supporting the 85 randomly selected decisions, we also reviewed the data to determine whether there were any aberrations in the submitted data (e.g., illogical dates or inconsistent responses). We contacted the appropriate forest officials and corrected many aberrations in the data. As a result of our review and verification, we identified 42 decisions that were eliminated from the information provided by the forest managers. These decisions were eliminated for a variety of reasons. For example, the decisions (1) were not issued within fiscal years 2001 and 2002 or (2) lacked clear documentation that the activities had a fuels reduction purpose. There are some limitations to the data we gathered. As with any survey, the information obtained from the national forests was self-reported, and we were not able to independently ensure that all decisions were reported. In particular, we had no way to determine if forests were underreporting their activities. To get some indication of whether this might be occurring, we contacted eight environmental groups to review the decisions submitted by selected forests in order to determine if there was any indication that the forests were underreporting decisions. These groups did not identify any instances of underreporting. We conducted our work from September 2002 through September 2003 in accordance with generally accepted government auditing standards. The Forest Service consists of nine regions. Figure 12 highlights the areas covered by each region. The Southern Region (region 8) had the largest number of decisions with fuels reduction activities (180 decisions) with the largest planned acreage—2.1 million acres. 
The Alaska Region (region 10) listed the least number of decisions with fuels activities (2) and the least amount of acreage—1,408 acres. Table 12 provides a summary of the number of decisions and acres planned in each Forest Service region. Figure 13 summarizes the appeals and litigation information by each Forest Service region. The Northern Region (region 1) had the highest appeal rates for both all decisions and appealable decisions—48 percent of all decisions appealed and 90 percent of appealable decisions appealed. The Alaska Region (region 10) had no decisions appealed. The Southern Region (region 8) had the lowest appeal rates for regions with recorded appeals—7 percent of all decisions and 36 percent of appealable decisions. The Northern Region (region 1) had the highest number of litigated decisions with 8. The Southwestern (region 3), Southern (region 8), and Alaska (region 10) Regions did not report any litigated decisions with fuels reduction activities. Figure 14 summarizes the appeal outcomes for decisions with fuels reduction activities by Forest Service region. All of the decisions in the Southern Region (region 8) were permitted to proceed without changes. The Eastern Region (region 9) had the lowest percentage of decisions that were allowed to proceed without changes—50 percent. The Southwestern Region (region 3) had the highest percentage of decisions that were not allowed to proceed due to appeals—38 percent. Table 9 summarizes the number of litigated decisions and the outcomes for each Forest Service region. The Northern Region (region 1) had 8 litigated decisions, 3 of which were settled or continuing. Of those decided, 3 were in favor of plaintiffs and 2 were in favor of the Forest Service. The Pacific Northwest Region (region 6) had all 5 of its litigated decisions resolved—4 in favor of plaintiffs and 1 in favor of the Forest Service.
Three regions—Southwestern (region 3), Southern (region 8), and Alaska (region 10)—had no decisions litigated. Table 10 provides a list of appellants by Forest Service region. Interest groups were most active in the Forest Service’s Northern (region 1), Pacific Southwest (region 5), and Pacific Northwest (region 6) Regions. Private individuals were most active in the Rocky Mountain (region 2) and Pacific Southwest (region 5) Regions. Interest groups were the least active in the Alaska (region 10), Southern (region 8), and Southwestern (region 3) Regions. Table 11 provides a list of litigants by Forest Service region. Interest groups were most active in the Forest Service’s Northern (region 1), Intermountain (region 4), Pacific Southwest (region 5), and Pacific Northwest (region 6) Regions. The Southwestern (region 3), Southern (region 8), and Alaska (region 10) Regions did not have any decisions litigated. Figure 15 summarizes the processing time frames for appeals of decisions for each Forest Service region. The Rocky Mountain Region (region 2) had the highest rate of appeals processed within the 90-day prescribed time frame, at 100 percent. The Pacific Northwest Region (region 6) had a rate below 50 percent, processing 17 of 49 appeals (about 35 percent) within the prescribed time frame. Figure 16 summarizes the fuels reduction methods used and how frequently decisions with those methods were appealed by Forest Service region. The Southern Region (region 8) had the most decisions (166) with prescribed burn activities. The Pacific Southwest Region (region 5) had the most decisions (126) with mechanical treatments. The Northern Region (region 1) experienced the highest appeal rates for decisions with prescribed burning and mechanical treatment activities—95 percent of appealable decisions and 55 percent of all decisions for prescribed burning; and 93 percent of appealable decisions and 63 percent of all decisions for mechanical treatment.
Figure 17 shows a summary of the types of contracts used for implementing fuels reduction activities and how frequently decisions involving the contract types were appealed by region. The Pacific Northwest Region (region 6) had the most decisions (75) that included service contracts. The Pacific Southwest Region (region 5) issued the most decisions (65) with timber sale contracts. The Northern Region (region 1) had the most decisions (14) with stewardship contracts. The Intermountain (region 4), Pacific Southwest (region 5), and Eastern (region 9) Regions had all of their decisions with stewardship contracts appealed—totaling 4 decisions for all three regions. Figure 18 summarizes the number of decisions with fuels reduction activities in the wildland-urban interface (WUI) and frequency of appeals by region. The Southern Region (region 8) had the most decisions (125) in the WUI. The Northern Region (region 1) had the most decisions (23) that were appealed. The highest appeal rate for all decisions (50 percent) was in the Eastern Region (region 9). The highest rate for appealable decisions (88 percent) was in the Northern Region (region 1). Figure 19 summarizes the number of decisions with fuels reduction activities in inventoried roadless areas (IRA) and frequency of appeals by region. The Northern Region (region 1) had the most decisions (21) in IRAs. The Intermountain Region (region 4) had the most appealed decisions (8). The highest appeal rate for all decisions (50 percent) was in the Eastern Region (region 9). The highest appeal rate for appealable decisions (100 percent) was in the Eastern Region (region 9). The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people.
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence, known as “Today’s Reports,” and posts it on its Web site. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.

The federal fire community’s decades-old policy of suppressing wildland fires as soon as possible has caused a dangerous increase in vegetation density in our nation’s forests. This density increase, combined with severe drought over much of the United States, has created a significant threat of catastrophic wildfires. In response to this threat, the Forest Service performs activities to reduce the buildup of brush, small trees, and other vegetation on national forest land. With the increased threat of catastrophic wildland fires, there have been concerns about delays in implementing activities to reduce these “forest fuels.” Essentially, these concerns focus on the extent to which public appeals and litigation of Forest Service decisions to implement forest fuels reduction activities unnecessarily delay efforts to reduce fuels.
The Forest Service does not keep a national database on the number of forest fuels reduction activities that are appealed or litigated. Accordingly, GAO was asked to develop this information for fiscal years 2001 and 2002. Among other things, GAO was asked to determine (1) the number of decisions involving fuels reduction activities and the number of acres affected, (2) the number of decisions that were appealed and/or litigated and the number of acres affected, (3) the outcomes of appealed and/or litigated decisions, and (4) the number of appeals that were processed within prescribed time frames. In a GAO survey of all national forests, forest managers reported that in fiscal years 2001 and 2002, 818 decisions involved fuels reduction activities covering 4.8 million acres. Of the 818 decisions involving fuels reduction activities, about 24 percent were appealed--affecting 954,000 acres. However, of the 818 decisions, more than half, 486 decisions, could not be appealed because they involved activities with little or no environmental impact. Of the 332 appealable decisions, 194 (about 58 percent) were appealed. There can be multiple appeals per decision. In addition, 25 decisions (3 percent) affecting about 111,000 acres were litigated. For 73 percent of the appealed decisions, the Forest Service allowed the fuels reduction activities to be implemented without changes; 8 percent required some changes before being implemented; and about 19 percent could not be implemented. Of the 25 litigated decisions, 19 have been resolved. About 79 percent of appeals were processed within the prescribed 90-day time frame. Of the remaining 21 percent, the processing times ranged from 91 days to 240 days. The Forest Service, in commenting on a draft of this report, generally agreed with the report's contents. Their specific comments and our evaluation of them are provided in the report. |
The nation’s nuclear weapons stockpile remains a cornerstone of U.S. national security policy. As a result of changes in arms control, arms reduction, and nonproliferation policies, the President and the Congress in 1993 directed that a science-based Stockpile Stewardship Program be developed to maintain the stockpile without nuclear testing. After the establishment of that program, DOE, in January 1996, created the Stockpile Life Extension Program. The purpose of this program is to develop a standardized approach for planning nuclear weapons refurbishment activities to enable the nuclear weapons complex to extend the operational lives of the weapons in the stockpile well beyond their original design lives. Within NNSA, the Office of Defense Programs is responsible for the stockpile. This responsibility encompasses many different tasks, including the manufacturing, maintenance, refurbishment, surveillance, and dismantlement of weapons in the stockpile; activities associated with the research, design, development, simulation, modeling, and nonnuclear testing of nuclear weapons; and the planning, assessment, and certification of the weapons’ safety and reliability. A national complex of nuclear weapons design laboratories and production facilities supports the Office of Defense Programs’ mission. This complex consists of three national laboratories that design nuclear weapons: Lawrence Livermore National Laboratory in California, Los Alamos National Laboratory in New Mexico, and Sandia National Laboratories in New Mexico and California. The complex also includes the Nevada test site and four production sites: the Pantex plant in Texas, the Y-12 plant in Tennessee, the Kansas City plant in Missouri, and the Savannah River site in South Carolina. NNSA refurbishes nuclear weapons according to a process called Phase 6.X, which was jointly developed with the Department of Defense. This process consists of the following elements: Phase 6.1, concept assessment. 
This phase consists of studies to provide planning guidance and to develop information so that a decision can be made on whether or not to proceed to a phase 6.2. Phase 6.2, feasibility study. This phase consists of developing design options and studying their feasibility. Phase 6.2A, design definition and cost study. This phase consists of completing definition of selected design option(s) from phase 6.2 through cost analysis. Phase 6.3, development engineering. This phase consists of conducting experiments, tests, and analyses to validate the design option and assess its potential for production. Phase 6.4, production engineering. This phase consists of making a strong commitment of resources to the production facilities to prepare for stockpile production. Phase 6.5, first production. This phase consists of producing a limited number of refurbished weapons and then disassembling and examining some of them for final qualification of the production process. Phase 6.6, full-scale production. This phase consists of ramping up to full-production rates at required levels. As of May 1, 2003, according to NNSA officials, four nuclear weapons were undergoing phase 6.X refurbishment activities. The W-80 warhead, the B-61 bomb, and the W-76 warhead are all in phase 6.3, development engineering, while the W-87 warhead is in phase 6.6, full-scale production. Prior to its budget submission for fiscal year 2001, the Office of Defense Programs divided the operating portion of the Weapons Activities account into two broad program activities—stockpile stewardship and stockpile management. Stockpile stewardship was defined as the set of activities needed to provide the physical and intellectual infrastructure required to meet the scientific and technical requirements of the (overall) Stockpile Stewardship Program. Stockpile management activities included DOE’s historical responsibilities for surveillance, maintenance, refurbishment, and dismantlement of the enduring stockpile. 
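The Phase 6.X refurbishment process described above is a strictly ordered sequence of phases. A small sketch encoding it as a lookup follows; the structure and helper function are illustrative only, not an NNSA data model.

```python
# Phase 6.X refurbishment sequence, as described above (names abridged).
PHASES_6X = [
    ("6.1", "concept assessment"),
    ("6.2", "feasibility study"),
    ("6.2A", "design definition and cost study"),
    ("6.3", "development engineering"),
    ("6.4", "production engineering"),
    ("6.5", "first production"),
    ("6.6", "full-scale production"),
]

def next_phase(current: str):
    """Return the phase label following `current`, or None after 6.6."""
    labels = [label for label, _ in PHASES_6X]
    i = labels.index(current)
    return labels[i + 1] if i + 1 < len(labels) else None

# As of May 1, 2003: the W-80, B-61, and W-76 were in phase 6.3,
# while the W-87 was in phase 6.6.
```

For example, a weapon completing development engineering (6.3) would move next to production engineering (6.4), and a weapon in full-scale production (6.6) has no further phase.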
However, each category was dominated by a single large activity known as core stewardship and core management, which made it difficult to determine precisely where funds were being spent. For example, in the Office of Defense Programs’ budget submission for fiscal year 2000, core stewardship accounted for 48 percent of the stockpile stewardship activity’s budget request, while core management accounted for 73 percent of the stockpile management activity’s budget request. The lack of clarity associated with this broad structure caused concern both at DOE and in the Congress. In February 1999, the Deputy Assistant Secretary for Research, Development, and Simulation, who manages the stockpile stewardship activity, began to develop a new program activity structure to improve the planning process for his program and more closely integrate the program with the needs of the stockpile. The new structure was built around three new program activities—Campaigns, Directed Stockpile Work, and Readiness in Technical Base and Facilities. Campaigns are technically challenging, multiyear, multifunctional efforts conducted across the Office of Defense Programs’ laboratories, production plants, and the Nevada test site. They are designed to develop and maintain the critical capabilities needed to enable continued certification of the stockpile into the foreseeable future, without underground testing. Campaigns have milestones and specific end-dates or goals, effectively focusing research and development activities on clearly defined deliverables. Directed Stockpile Work includes the activities that directly support specific weapons in the stockpile. These activities include the current maintenance and day-to-day care of the stockpile, as well as planned life extensions. 
Readiness in Technical Base and Facilities includes the physical infrastructure and operational readiness required to conduct Campaign and Directed Stockpile Work activities at the production plants, laboratories, and the Nevada test site. This includes ensuring that the infrastructure and facilities are operational, safe, secure, compliant, and ready to operate. Within each of these three activities is a set of more detailed subactivities. For example, within the Campaigns activity are individual campaigns to study, among other things, the primary in a nuclear weapon or to develop a new capability to produce nuclear weapons pits. Similarly, the Directed Stockpile Work activity includes subactivities to conduct surveillance or produce components that need regular replacement within nuclear weapons. Finally, the Readiness in Technical Base and Facilities activity includes subactivities to capture the costs for the operation of its facilities. In submitting its new program activity structure to the Office of the Chief Financial Officer for review and approval for use in the budget submission for fiscal year 2001, the Office of Defense Programs believed that the new structure would, among other things, better reflect its current and future missions; focus budget justification on major program thrusts; and improve the linkage between planning, budgeting, and performance evaluation. Budget requests developed since fiscal year 2001 have been presented using the Campaigns, Directed Stockpile Work, and Readiness in Technical Base and Facilities activity structure. Within the Office of Defense Programs, two organizations share the responsibility for overall weapons refurbishment management. Those organizations are the Office of the Assistant Deputy Administrator for Research, Development, and Simulation and the Office of the Assistant Deputy Administrator for Military Application and Stockpile Operations. 
The first office directs funding to the laboratories for research and development, while the second office directs funding for engineering development and production to the laboratories and production sites. According to NNSA’s Life Extension Program Management Plan, both organizations also share responsibilities. Both oversee life extension program execution; ensure that the life extension program baseline, if successfully accomplished, will meet customer requirements; and provide life extension program information to higher levels for review. The management plan also stipulates that each life extension shall have one program manager and one deputy program manager, with one being assigned from each of the two aforementioned organizations, and that these two individuals will share program management responsibilities. While NNSA’s fiscal year 2003 budget request did not provide a clear picture of all activity necessary to complete the Stockpile Life Extension Program, NNSA has begun to take action to produce a more comprehensive and reliable picture of the program for fiscal year 2004 and beyond. With respect to fiscal year 2003, NNSA did not develop a comprehensive Stockpile Life Extension Program budget because historically it has developed its budget by broad function—such as research and development—rather than by individual weapon system or program activity such as the Stockpile Life Extension Program. NNSA provided the Congress with supplementary information in its fiscal year 2003 budget request that attempted to capture the budget for the Stockpile Life Extension Program; however, this information was not comprehensive because it did not include the budget for activities necessary to successfully complete the life extension efforts. For example, the budget for high explosives work needed to support three life extension efforts was shown in a different portion of NNSA’s budget request. 
Recently NNSA has decided, after forming a task force to study the issue, to budget and manage by weapon system beginning with its fiscal year 2004 budget request, with this transition officially taking place with congressional approval of the fiscal year 2005 budget request. As a result, NNSA’s fiscal year 2004 budget request was more comprehensive because it attributed a larger portion of the Defense Programs’ budget to the life extension program. NNSA’s fiscal year 2003 and 2004 budget requests were also not reliable because the data used to develop them had not been formally reviewed—through a process known as validation—as required by DOE directive. Instead, NNSA relied on more informal and less consistent analyses. NNSA officials have stated that a formal budget validation process would be reintroduced for the fiscal year 2005 budget cycle. NNSA’s congressional budget request for fiscal year 2003 did not contain a comprehensive, reliable budget for the Stockpile Life Extension Program or the individual weapon systems undergoing refurbishment. NNSA developed its budget by broad function—such as Campaigns, Directed Stockpile Work, and Readiness in Technical Base and Facilities—rather than by individual weapon system or program activity such as the Stockpile Life Extension Program. While the Congress has accepted previous NNSA budget submissions as structured, it also has requested detailed information on NNSA’s stockpile life extension efforts. Specifically, the fiscal year 2002 Energy and Water Development Appropriations Act conference report directed NNSA to include detailed information by weapon system in the budget justification documents for its fiscal year 2003 and subsequent presidential budget requests to Congress. 
The conference report also indicated that the budget should clearly show the unique and the fully loaded cost of each weapon activity, including the costs associated with refurbishments, conceptual study, and/or the development of new weapons. NNSA responded to the congressional requirement by providing an unclassified table in an annex to its fiscal year 2003 budget that contained data on the budget request for the four individual life extensions. These data, however, did not include funding for work outside the Directed Stockpile Work program activity that is required to carry out the life extensions. For example:

- The narrative associated with the High Explosives Manufacturing and Weapons Assembly/Disassembly Readiness Campaign indicates that $5.4 million, or an 80 percent funding increase, was needed in fiscal year 2003 to support the B-61, W-76, and W-80 refurbishments. The narrative did not provide a breakdown by individual refurbishment. However, NNSA's implementation plan for this campaign indicated that nearly $50 million would be needed to support the three refurbishments over fiscal years 2002 through 2006.

- The narrative associated with an expansion project at the Kansas City plant within the Readiness in Technical Base and Facilities program activity indicated that $2.3 million was needed in fiscal year 2003 and $27.9 million was needed in the outyears to support the B-61, W-76, and W-87 refurbishments. The narrative also indicated that this expansion was required in order to meet first production unit schedules associated with the refurbishments.
In addition, a significant portion of the funding in the annex table was not assigned to any specific refurbishment but rather was included under a budget line item termed “multiple system.” NNSA officials told us they did not ask field locations to break down the multiple system funding by individual refurbishment because this funding was for “general capability” activities that would continue to be required even if a weapon system were cut. Further, they said that there was currently no good allocation scheme, so a breakdown by weapon system would be inaccurate and, therefore, serve no useful purpose. However, NNSA officials provided us no information indicating that NNSA had ever studied possible allocation schemes or showing that allocation was not feasible. Moreover, according to the DOE’s chief financial officer, NNSA can and should break out the multiple system funding by weapon system. This official indicated that doing so would put the budget in line with presidential guidance and Office of Management and Budget objectives that advocate presenting a budget by product rather than by process. In commenting on our report, NNSA stated that DOE’s chief financial officer had no basis for making any assertions about whether NNSA should break out the multiple system funding by weapon system. However, the chief financial officer has responsibility for ensuring the effective management and financial integrity of DOE’s programs. More broadly, because NNSA provided the Congress with a table by weapon system in a budget annex and in Nuclear Weapon Acquisition Reports, the agency questioned the need for further identification of the Stockpile Life Extension Program in the fiscal year 2003 budget. 
Agency officials, including the Deputy Administrator for Defense Programs, told us that NNSA was reluctant to budget by weapon system because it would like to retain the “flexibility” the current budget structure affords the agency in responding to unanticipated demands and shifting priorities in the Stockpile Stewardship Program. Officials expressed concern that dissemination of more detailed Stockpile Life Extension Program information would encourage the Congress to cut the most expensive weapon system or systems. Furthermore, they asserted that eliminating a weapon system would not save all of the funds associated with that weapon system, because a certain portion would be fixed costs that would have to be transferred to the remaining users. During the course of our work, however, NNSA has begun to take action to produce a more comprehensive budget for the Stockpile Life Extension Program. Specifically, NNSA decided, after forming a task force to study the issue, to begin budgeting and managing by weapon system in the fiscal year 2004 budget. Starting with that budget, the agency supplied to the Congress a classified annex that allocated more of the costs that were in the multiple system line item to individual weapon systems. In addition, NNSA officials said that more than $100 million that had been included in the Readiness in Technical Base and Facilities activity was moved to the Directed Stockpile Work activity. However, for fiscal year 2004, no refurbishment-related work in the Campaigns activity has been moved. NNSA officials said that during the fiscal year 2005 budget cycle the agency will review the Readiness Campaigns activity to determine which portion of that activity could also be attributed to weapon systems. 
NNSA officials indicated the agency decided not to implement all budget changes in fiscal year 2004 in order to ensure that classification concerns are resolved, contractors have time to modify their accounting systems as needed, and NNSA has time to fully understand the costs and characteristics of managing, budgeting, and reporting by weapon system. NNSA’s budget requests for fiscal years 2003 and 2004 were not reliable because the data used to develop the budgets have not been formally reviewed—through a process known as validation—as required by DOE directive. Instead, NNSA has relied on a review that has become more informal and less consistent. Specifically, DOE Order 130.1, on budget formulation, requires budget requests to be based on estimates that have been thoroughly reviewed and deemed reasonable by the cognizant field office and headquarters program organization. The order further requires field offices to conduct validation reviews and submit documentation and to report any findings and actions to headquarters. A proper validation, as described by DOE’s Budget Formulation Handbook, requires the field office to review budget data submissions in detail, sampling 20 percent of the submissions every year such that 100 percent would be evaluated every 5 years. NNSA officials indicated that no formal validation has been done with respect to refurbishment research and development funding. With respect to refurbishment production funding, NNSA officials described their validation review as a “reasonableness” test regarding the budget’s support of a program’s needs based on a historical understanding of appropriate labor, materials, and overhead pricing estimates. NNSA officials acknowledged that, in recent years, the agency has not fulfilled the budget validation requirement as specified in DOE Order 130.1, and that the validation review that has been used has become increasingly less formal and less consistent. 
Prior to this reduction in the quality of the review process, the DOE Albuquerque Operations Office performed formal validation reviews at production plant locations through fiscal year 1996. Since then, the Albuquerque office has relied on a pilot project by which the four contractors directly under its jurisdiction—Sandia National Laboratories, Los Alamos National Laboratory, Kansas City plant, and the Pantex plant—submitted self-assessments for Albuquerque’s review. For the fiscal year 2003 and 2004 budgets, however, NNSA officials said headquarters no longer requested field validation as the agency commenced implementation of a new planning, programming, budgeting, and evaluation process. One NNSA field office, we found, still chose to perform validation reviews of the contractors under its jurisdiction. Specifically, the Oakland office performed a validation review of the Lawrence Livermore National Laboratory. However, other locations, such as the Kansas City plant, the Y-12 plant, and the Savannah River site did not have their budgets reviewed by any NNSA field office. We also were informed by NNSA officials that NNSA headquarters staff did not review the validation reports that were done, as required by DOE Order 130.1, before transmitting the fiscal year 2003 and 2004 budgets to DOE’s budget office, which then submitted them to the Office of Management and Budget. NNSA’s director of the Office of Planning, Programming, Budgeting, and Evaluation said that her office plans to introduce a formal validation process for the fiscal year 2005 budget cycle, adding that such a process was not used for the fiscal year 2004 budget cycle because of time constraints. NNSA documentation regarding the validation process to be used specifies that validation teams will be led by field federal staff elements working with headquarters program managers; the Office of Planning, Programming, Budgeting, and Evaluation staff; and others. 
However, NNSA documentation is silent on how the validation process will be conducted. Therefore, it is unclear if the validation process will be performed thoroughly and consistently across the weapons complex and if the process will be formally documented, as required by DOE Order 130.1. Once a budget is established, having reliable information on the cost of federal programs is crucial to the effective management of government operations. Such information is important to the Congress and to federal managers as they make decisions about allocating federal resources, authorizing and modifying programs, and evaluating program performance. The Statement of Federal Financial Accounting Standards (SFFAS) Number 4, “Managerial Cost Accounting Standards,” establishes the framework under which such cost information is gathered. In particular, the standard states that federal agencies should accumulate and report the costs of their activities on a regular basis for management information purposes. The standard sees measuring costs as an integral part of measuring the agency’s performance in terms of efficiency and cost-effectiveness. The standard suggests that such management information can be collected through the agency’s cost accounting system or through specialized approaches—known as cost-finding techniques. Regardless of the approach used, SFFAS Number 4 states that agencies should report the full costs of the outputs they produce. However, under Federal Acquisition Regulations and SFFAS Number 4, NNSA’s contractors do have the flexibility to develop the managerial cost accounting methods that are best suited to their operating environments. NNSA does not have a system for accumulating and tracking stockpile life extension program costs. Similar to its approach in the budget arena, NNSA currently does not collect cost information for the stockpile life extension program through the agency’s accounting system.
This is because NNSA has defined its programs and activities, and thus the cost information it collects, at a higher level than the stockpile life extension program. Specifically, DOE collects cost information to support its Defense mission area. The Defense mission area includes the types of broad activities mentioned earlier, such as Campaigns, Directed Stockpile Work, and Readiness in Technical Base and Facilities. Moreover, DOE’s current accounting system does not provide an adequate link between cost and performance measures. Officials in DOE’s Office of the Chief Financial Officer recognize these shortcomings and are considering replacing the agency’s existing system with a system that can provide managers with cost information that is better aligned with performance measures. In addition, NNSA does not accumulate life extension program cost information in the agency’s accounting system because NNSA does not require its contractors to collect information on the full cost of each life extension by weapon system. Full costs include the costs directly associated with the production of the item in question—known as direct costs—as well as other costs—known as indirect costs, such as overhead—that are only indirectly associated with production. SFFAS Number 4 states that entities should report the full cost of outputs in their general-purpose financial reports. General-purpose financial reports are reports intended to meet the common needs of diverse users who typically do not have the ability to specify the basis, form, and content of the reports they receive. Direct costs are captured within NNSA’s Directed Stockpile Work activity and include such things as research and development or maintenance. However, NNSA’s Directed Stockpile Work activity also includes indirect costs that benefit more than one weapon system or life extension. Examples of indirect costs within Directed Stockpile Work include evaluation and production support costs.
Indirect costs are also found within Campaigns and Readiness in Technical Base and Facilities activities. Specifically, as noted earlier, NNSA’s budget justification identifies certain Campaign activities, which represent an indirect cost, that support individual life extensions. A portion of both of these sources of indirect costs could be allocated to individual weapon systems; however, NNSA does not currently require such an allocation by its contractors. It is important to recognize that under SFFAS Number 4, NNSA’s contractors do have the flexibility to develop the cost accounting methodologies that are best suited to their operating environments. The contractors involved in the life extension program are structured differently and have different functions. For example, Lawrence Livermore National Laboratory is run by the University of California and conducts mostly research that may or may not produce a tangible product. In contrast, the production plants are run by private corporations which produce parts, as is the case at the Kansas City or Y-12 plants, or assemble the parts into a completed weapon, as is done at the Pantex plant. As a result, even if NNSA required contractors to report the full cost of individual refurbishments, some differences in the data, which reflect the contractors’ different organizations and operations, would still exist. While the agency’s accounting system does not accumulate and report costs for the Stockpile Life Extension Program or its individual refurbishments, NNSA has developed several mechanisms to assist the Congress and program managers who oversee the life extension effort. Specifically:

- In previous years, NNSA has requested that its contractors provide supplemental data on actual costs by weapon system. These data have been used to respond to congressional information requests. However, similar to the way NNSA addresses its budget request, NNSA has not required its contractors to allocate the supplemental cost information in the multiple system category to individual refurbishments. In addition, also similar to the way it approached its budget presentation, NNSA has not required its contractors to include the costs for supporting activities, such as Campaigns and Readiness in Technical Base and Facilities, in the reports.

- Some life extension program managers require their contractors to provide them with status reports on the individual refurbishments they are overseeing. However, these reports are prepared inconsistently or are incomplete. For example, while the W-76 program manager requires monthly reports, the B-61 program manager requires only quarterly reports. In contrast, the W-80 and W-87 program managers do not require any routine cost reporting. NNSA is trying to develop a consistent method for its life extension program managers to request cost information; however, NNSA officials have stated that NNSA has to first define what its needs are. Similar to the supplemental cost data described above, these status reports do not contain all of the costs for supporting activities, such as Campaigns and Readiness in Technical Base and Facilities.

- Finally, as part of the production process, NNSA’s contractors prepare a report known as the Bill of Materials. The Bill of Materials accumulates the materials, labor, and manufacturing costs of the production of a weapon, starting with an individual part and culminating in the final assembly of a complete weapon. NNSA uses the resulting Master Bill of Materials to record—capitalize—the production costs of each weapon system in its accounting system. However, the costs accumulated by the Bill of Materials include only production costs and do not include costs such as related research and development costs or costs associated with Campaigns and Readiness in Technical Base and Facilities.
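As noted above, NNSA officials said no good scheme existed for allocating "multiple system" funding to individual weapon systems, while DOE's chief financial officer maintained such a breakout could be done. Purely as an illustration of one common cost-accounting approach (not NNSA's method; all system names and dollar figures below are invented), a shared indirect pool can be allocated to each weapon system in proportion to its direct costs:

```python
def full_costs(direct_costs, indirect_pool):
    """Return full cost per system: direct cost plus a proportional share of the indirect pool."""
    total_direct = sum(direct_costs.values())
    return {
        system: direct + indirect_pool * (direct / total_direct)
        for system, direct in direct_costs.items()
    }

# Invented direct costs (in $ millions) for three refurbishments, plus a
# hypothetical $60M shared indirect ("multiple system") pool.
direct = {"W-76": 120.0, "B-61": 80.0, "W-80": 100.0}
for system, cost in full_costs(direct, indirect_pool=60.0).items():
    print(f"{system}: ${cost:.1f}M full cost")
```

Under a proportional scheme every indirect dollar is attributed to some weapon system, so the reported full costs sum to total direct plus indirect spending; other allocation bases (labor hours, facility usage) would shift the shares but preserve that property.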
Finally, despite the importance of reliable and timely cost information for both the Congress and program managers, similar to the situation we found with the budget, life extension program costs are not independently validated either as a whole or by individual weapon system. Specifically, neither the DOE Inspector General nor DOE’s external auditors specifically audit the costs of the life extension program. While both parties have reviewed parts of the life extension program—for example, the Inspector General recently reviewed the adequacy of the design and implementation of the cost and schedule controls over the W-80 refurbishment—their work has not been specifically intended to provide assurance that all life extension program costs are appropriately identified and attributed to the life extension program as a whole or to the individual refurbishments. The management of critical programs and projects has been a long-standing problem for DOE and NNSA’s Office of Defense Programs. According to NNSA’s fiscal year 2001 report to the Congress on construction project accomplishments, management costs on DOE projects are nearly double those of other organizations, and DOE projects take approximately 3 years longer to accomplish than similar projects performed elsewhere. As a result, NNSA has repeatedly attempted to improve program and project management. For instance, in September 2000, the Office of Defense Programs initiated an improvement campaign to develop solutions to its project management problems and to enact procedural and structural changes to the Defense Programs’ project management system. Later, in August 2002, the Office of Defense Programs established a project/program management reengineering team. As the basis for assembling that team, its charter noted that NNSA does not manage all projects and programs effectively and efficiently.
However, despite these NNSA attempts at improvement, management problems associated with the stockpile life extension program persist. Front-end planning is, in many ways, the most critical phase of an activity and the one that often gets least attention. The front-end planning process defines the activity. The decisions made in this phase constrain and support all the actions downstream and often determine the ultimate success or failure of the activity. NNSA, we found, does not have an adequate planning process to guide the individual life extensions and the overall program. Specifically, NNSA has not (1) established the relative priority of the Stockpile Life Extension Program against other defense program priorities, (2) consistently established the relative priority among the individual refurbishments, (3) developed a formalized list of resource and schedule conflicts between the individual refurbishments in order to systematically resolve those conflicts, and (4) finalized the individual refurbishment project plans on a timely basis. Priority ranking is an important decision-making tool at DOE. It is the principal means for establishing total organizational funding and for making tradeoffs between organizations. DOE uses such a ranking at the corporate level to make departmental budget decisions. To perform that ranking, DOE formally requires each of its organizational elements to annually submit to the DOE Office of Budget reports that provide a budget year priority ranking and a ranking rationale narrative. In discussing this matter with an NNSA budget official, we found that NNSA had not submitted these priority-ranking reports for fiscal years 2002, 2003, and 2004, and this official was also unable to explain why. 
NNSA officials, in commenting on our report, indicated that NNSA is not required to follow the DOE requirement regarding priority budget ranking; however, these officials could not provide us with any policy letter supporting their position that NNSA has been officially exempted from this requirement. Prioritization is also an important part of NNSA’s strategic planning process. According to that process, priorities must be identified in an integrated plan developed by each major NNSA office. This integrated plan links sub-office program plans, such as the plan for refurbishing the B-61, to NNSA’s strategic plan. With respect to the Office of Defense Programs, however, we found that this office has not finalized an integrated plan. According to an NNSA official, Defense Programs developed a draft plan in January 2002 but has not completed that plan and has instead devoted itself to working on the sub-office program plans. Absent a finalized integrated plan, it is unclear how sub-office program plans could be developed and properly linked to NNSA’s strategic plan. According to the director of Defense Programs’ Office of Planning, Budget, and Integration, prioritizing Defense Programs activities is essential. This is because the priorities of Defense Programs, its contractors, and the Department of Defense, which is Defense Programs’ customer for life extension refurbishments, may not necessarily be the same. In this official’s view, the issue of setting priorities needs to be addressed. This official indicated that the Office of Defense Programs developed a draft list of activities in August 2001, but did not prioritize those activities. Included among those activities were efforts to continue stockpile surveillance activities and to complete planned refurbishments on schedule. For fiscal years 2003 and 2004, according to this official, Defense Programs published budget-related guidance regarding priorities, but he did not believe the guidance was specific enough. 
This official added that, for fiscal year 2005, the guidance would have sufficient detail. While prioritizing work among Office of Defense Programs activities such as stockpile surveillance and refurbishment is important, it is also important to prioritize work within those activities. In the competition for budget funds, the Office of Defense Programs must continually ask which of the three refurbishments undergoing research and development work is a higher priority and should be given funding preference. However, NNSA has not taken a consistent position on prioritizing the life extensions. For instance, in October 2002, NNSA indicated by memorandum that, because of the continuing resolution for fiscal year 2003, the priority order for the three refurbishments would be the W-76, followed by the B-61, followed by the W-80. In November 2002, however, NNSA indicated by memorandum that the three refurbishments had the same priority. In neither memorandum did NNSA identify the criteria or reasons for these two contradictory decisions. According to NNSA officials, no priority criteria have been developed, and each of the three refurbishments is equal in priority. This lack of a definitive decision on the priority of the three refurbishments has caused confusion. For example, the Los Alamos National Laboratory decided in early calendar year 2002 to unilaterally transfer funds from the W-76 refurbishment to the B-61 because Los Alamos believed that the B-61 work was more important. As a result of that decision, the W-76 had to slip a research reactor test from fiscal year 2002 to fiscal year 2003. Although this test was not on the critical path for completing the W-76 refurbishment, NNSA had identified the reactor test as a fiscal year 2002 metric for measuring the refurbishment’s progress. In February 2002, NNSA questioned Los Alamos regarding its decision. In its March 2002 reply, Los Alamos indicated that it had found a mechanism to fully fund the W-76 refurbishment. 
However, because the reactor test had been cancelled, Los Alamos indicated that it was no longer possible to complete the test in fiscal year 2002, as planned. Therefore, Los Alamos stated that its goal was to begin this test in the first part of fiscal year 2003. In another case, the Y-12 plant decided to suspend or not initiate four projects at the beginning of fiscal year 2003 in support of the W-76 refurbishment because Y-12 believed that these projects were a lower priority than other work to be conducted. In a November 2002 memorandum, NNSA questioned this decision. NNSA indicated that these projects were integrated with another project, which was needed to ensure a complete special material manufacturing process capability in time to support the W-76 refurbishment. Accordingly, NNSA stated that it was providing $2.9 million in unallocated funds so that work on the projects could resume as soon as possible to support the refurbishment schedule. While these examples represent only two documented funding conflicts, according to each of the refurbishment program managers, additional resource and schedule conflicts exist among the three refurbishments. Specifically, the refurbishment program managers agreed that conflicts, or areas of competition, existed on many fronts, including budget resources, facilities, and testing. For example, the three refurbishments compete for certain testing facilities at Los Alamos National Laboratory and at the Sandia National Laboratories, and for the use of certain hardware at the Y-12 plant. Additional conflicts are also present that may affect only two of the three refurbishments. Those identified included such activities as campaign support, research, and development at the Los Alamos National Laboratory, and use of hardware production at the Y-12 plant. 
The Deputy Assistant Administrator for Military Application and Stockpile Operations confirmed that the areas of competition identified by the individual refurbishment program managers represented a fair portrayal of the conflicts that exist between the refurbishments. He indicated that while no formalized list of resource and schedule conflicts exists, the subject of refurbishment conflicts is routinely discussed at each refurbishment program review meeting. These meetings are held monthly to discuss one of the refurbishments on a rotating basis. Finally, fundamental to the success of any project is documented planning in the form of a project execution plan. With regard to the Stockpile Life Extension Program, NNSA has had difficulty preparing project plans on a timely basis. In its report on the lessons learned from the W-87 refurbishment, NNSA noted that one cause of the W-87’s problems was that the project plan was prepared too late in the development cycle and was not used as a tool to identify problems and take appropriate actions. As to the W-76, W-80, and B-61 refurbishments, we found that NNSA had not completed a project plan on time and with sufficient details, as stipulated in NNSA guidance for properly managing the refurbishments. According to NNSA’s June 2001 Life Extension Program Management Plan, a final project plan is to be completed at the end of Phase 6.2A activities (design definition and cost study). The Life Extension Program Management Plan offers numerous guidelines detailing the elements that should be included in the project plan. Those elements include, among others, team structure and the roles of each team and individual members; an integrated program schedule identifying all tasks to be accomplished for the success of the project; life cycle costs; and a documentation of the facility requirements needed to support all portions of the refurbishment.
This management plan was issued as guidance, rather than as a formally approved requirements document, pending the resolution of role and responsibility issues within NNSA. Of the three refurbishments, only the B-61 has completed its project plan on schedule. According to NNSA documentation, the B-61 reached the end of phase 6.2A in October 2002. We confirmed that a project plan had been completed at that time, but the project plan did not include all life cycle costs, such as Campaign costs and Readiness in Technical Base and Facilities costs. In this regard, DOE’s project management manual defines life cycle costs as being the sum total of the direct, indirect, recurring, nonrecurring, and other related costs incurred or estimated to be incurred in the design, development, production, operation, maintenance, support, and final disposition of a project. Conversely, an assessment of the W-76 refurbishment indicates that the project plan for that refurbishment is 3 years late and also does not include all life cycle costs. According to NNSA documentation, the W-76 reached the end of phase 6.2A in March 2000. As of July 2003, a final project plan had not yet been completed. The W-76 project manager told us that he has been using a working draft of a project plan dated August 2001. He indicated that he did not finalize the project plan because the Life Extension Program Management Plan published in June 2001 had yet to be issued as a formal requirement. With the reissuance of the management plan as a requirement in January 2003, an NNSA official said that a finalized project plan should be completed by the end of fiscal year 2003. Likewise, an assessment of the W-80 refurbishment indicates that the project plan for that refurbishment is more than 2 years late and also does not include all life cycle costs. According to NNSA documentation, the W-80 reached the end of phase 6.2A in October 2000. As of July 2003, a complete project plan had not been prepared. 
According to the W-80 program manager, the refurbishment does not yet have an integrated project schedule as described in the Life Extension Program Management Plan. The W-80 program manager said that a finalized project plan with this integrated schedule, which shows all tasks associated with the refurbishment as well as all linkages, should be completed by mid-to-late summer 2003. The W-80 program manager added that this integrated schedule was not completed earlier because of personnel changes on this refurbishment. DOE’s portfolio of projects demands a sophisticated and adaptive management structure that can manage project risks systematically; control cost, schedule, and scope baselines; develop personnel and other resources; and transfer new technologies and practices efficiently from one project to another, even across program lines. With respect to the Stockpile Life Extension Program, NNSA does not have an adequate management structure which ensures rigor and discipline, fixes roles, responsibilities, and authority for each life extension, or develops key personnel. Specifically, NNSA has not (1) defined the life extensions as projects and managed them accordingly, (2) clearly defined the roles and responsibilities of those officials associated with the Stockpile Life Extension Program, (3) provided program managers with sufficient authority to carry out the refurbishments, or (4) given program and deputy program managers proper project/program management training. DOE projects commonly overrun their budgets and schedules, leading to pressures for cutbacks that have resulted in facilities that do not function as intended, projects that are abandoned before they are completed, or facilities that have been delayed so long that, upon completion, they no longer serve any purpose. The fundamental deficiency underlying these problems has been a DOE organization and culture that has failed to embrace the principles of good project management.
The same can be said for NNSA’s view of the individual life extension refurbishments. Specifically, NNSA has not established that the individual refurbishments are projects and managed them accordingly. According to the DOE directive, a project is a unique effort that, among other things, supports a program mission and has defined start and end points. Examples of projects given in the DOE directive include planning and execution of construction, renovation, and modification; environmental restoration; decontamination and decommissioning efforts; information technology; and large capital equipment or technology development activities. To the extent that an effort is a project, the DOE directive dictates that the project must follow a structured acquisition process that employs a cascaded set of requirements, direction, guidance, and practices. This information helps ensure that the project is completed on schedule, within budget, and is fully capable of meeting mission performance and environmental, safety, and health standards. According to the Deputy Assistant Administrator for Military Application and Stockpile Operations, the individual life extension refurbishments are projects but have not been officially declared so. This official indicated that the primary reason for the lack of such a declaration is an organizational culture, including those working at NNSA laboratories, which often does not grasp the benefits of good project management. This official also said that the organization is moving in the direction of embracing project management but is doing so at an extremely slow pace. If NNSA declared the individual life extension refurbishments to be projects, many useful project management tools would become available to the NNSA program managers who are overseeing the refurbishments.
Those tools include, for example, conducting an independent cost estimate, which is a “bottom-up” documented, independent cost estimate that has the express purpose of serving as an analytical tool to validate, cross-check, or analyze cost estimates developed by the sponsors of the project. Another tool is the use of earned value reporting, which is a method for measuring project performance. Earned value compares the amount of work that was planned at a particular cost with what was actually accomplished within that cost to determine if the project will be completed within cost and schedule estimates. A further tool is the reporting of project status on all projects costing over $20 million to senior DOE and NNSA management using DOE’s Project Analysis Reporting System. NNSA refurbishment program managers with whom we spoke indicated that management of the refurbishments would be improved if tools such as independent cost estimates and earned value reporting were used.

With respect to roles and responsibilities, clearly defining a project’s organizational structure up front is critical to the project’s success. In a traditional project management environment, the project manager is the key player in getting the project completed successfully. But other members of the organization also play important roles, and those roles must be clearly understood to avoid redundancy, miscommunication, and disharmony. With respect to the Stockpile Life Extension Program, NNSA has yet to clearly define the roles and responsibilities of all parties associated with the program. NNSA’s Life Extension Program Management Plan dated June 2001 was the controlling document for defining refurbishment roles and responsibilities from its issuance through calendar year 2002. Our review of that plan, however, found a lack of clarity regarding who should be doing what. For instance, the plan is unclear on which NNSA office is responsible for each phase of the 6.X process.
Illustrating that point, refurbishment program managers with whom we spoke generally said there is confusion as to which NNSA office—either the Office of Research, Development, and Simulation or the Office of Military Application and Stockpile Operations—has the primary responsibility when the refurbishment moves to phase 6.3 (development engineering) of the 6.X process. In addition, according to the plan, the program manager and deputy program manager have identical responsibilities. The plan states that the program manager and deputy program manager shall discuss significant aspects of the refurbishment with each other and should reach consensus concerning important aspects of the scope, schedule, and cost. The plan further states that absent consensus on an issue, the program manager may decide; however, any unresolved conflicts between the two may be referred to senior management for resolution. Further, the plan is silent on the roles and responsibilities of the NNSA program and deputy program managers versus the project manager at a laboratory or at a production plant site. What actions the laboratory or plant project managers can take on their own, without NNSA review and concurrence, are not specified in the plan. Instead, the plan simply states that laboratory and plant project managers provide overall management of life extension refurbishment activities at their facilities.

In January 2003, NNSA reissued the Life Extension Program Management Plan after making only minor changes to the document. The reissued management plan indicates that the program manager’s role will transition from the NNSA Office of Research, Development, and Simulation to the NNSA Office of Military Application and Stockpile Operations during phase 6.3. However, the reissued plan does not specify when, during phase 6.3, this transition will occur.
In addition, the reissued plan does not further clarify the roles and responsibilities between the program and deputy program managers and the project manager at a laboratory or at a production plant site.

In addition to clear roles and responsibilities, project managers must have the authority to see the project through. Regarding project management, authority is defined as the power given to a person in an organization to use resources to reach an objective and to exercise discipline. NNSA’s lessons learned report on the W-87 refurbishment noted that there was an air of confusion in resolving issues at the Kansas City plant because project leaders were not formally assigned and provided with the tools (authority, visibility, and ownership) necessary to properly manage the effort. Our report on the W-87 refurbishment prepared in calendar year 2000 found similar problems regarding the lack of authority. With respect to the Stockpile Life Extension Program, NNSA has still not given the program managers the authority to properly manage the refurbishments. Five of the six program or deputy program managers associated with the B-61, W-76, and W-80 refurbishments believed they had not been given the authority to properly carry out the refurbishments. For instance, one program manager said he has neither the control nor the authority associated with his refurbishment. He added that the program managers ought to be given the authority so that the laboratories report directly to them. As the situation currently stands, the laboratories will go over the heads of the program manager to senior NNSA management to get things done the laboratories’ way. According to a deputy program manager on another refurbishment, the program managers do not have enough authority and should have control of the refurbishments’ budgets.
He elaborated by explaining how one laboratory unilaterally decided to take funds away from one refurbishment and give them to another without consulting with any of the program managers. In this deputy program manager’s view, if funds need to be transferred from one refurbishment to another, then the laboratories should be required to get the concurrence of NNSA management. A program manager on another refurbishment stated that he does not have sufficient authority because he lacks control of the budget. He indicated that funds for his refurbishment are allocated to the various laboratory and plant sites, but he is not included in the review and concurrence loop if the sites want to transfer funds from one activity to another. The Assistant Deputy Administrator for Military Application and Stockpile Operations said he recognized the program manager’s concerns and has advocated giving the program managers greater authority. He also indicated that greater authority might eventually be granted. However, he explained that at the moment, the Office of Defense Programs is focused on a recently completed NNSA reorganization. After that matter is sufficiently addressed, greater authority for the program managers may result.

Turning to the issue of training, competent project management professionals are essential to successful projects. Other federal agencies and the private sector realized long ago that project management is a professional discipline that must be learned and practiced. To ensure that projects are well planned and properly executed, DOE created a competency standard for project management personnel in 1995. By its own terms, the standard applies to all DOE project management personnel who are required to plan and execute projects in accordance with departmental directives regarding project management. The standard identifies four categories of competencies that all project management personnel must attain and states that attainment must be documented.
The categories are (1) general technical, such as a knowledge of mechanical, electrical, and civil engineering theories, principles, and techniques; (2) regulatory, such as a knowledge of applicable DOE orders used to implement the department’s project management system; (3) administrative, such as a knowledge of the project reporting and assessment system as outlined in DOE orders; and (4) management, assessment, and oversight, such as a knowledge of DOE’s project management system management roles, responsibilities, authorities, and organizational options.

Of the six program and deputy program managers assigned to the W-76, B-61, and W-80 refurbishments, NNSA records indicate that only one of the six (the program manager for the W-76) has achieved 100 percent attainment of the aforementioned standards. Regarding the other five, NNSA records indicate that the deputy program manager for the B-61 has achieved 30 percent attainment of the required competencies contained in the standard, while the remaining four are not enrolled under the qualification standards program. According to one of the three program managers with whom we spoke, the problems with the W-87 refurbishment were caused, in part, because the assigned program manager was not qualified to perform all required tasks. NNSA records confirm that the W-87 program manager in question was also not enrolled in the project management qualification program.

Whereas NNSA program managers are required to meet qualification standards to discharge their assigned responsibilities, contractor project management personnel we contacted are not required to meet any project management standards.
According to W-76, B-61, and W-80 refurbishment project managers at the Sandia National Laboratories, Lawrence Livermore National Laboratory, and Los Alamos National Laboratory, their respective laboratories have no requirements that must be met before a person becomes a project manager, and none of the managers had attained project management certification through their previous work assignments and experiences. NNSA officials also acknowledge that neither DOE nor NNSA orders require contractor project management personnel to be properly trained and certified.

Effective oversight of project performance is dependent on the systematic and realistic reporting of project performance data. Senior managers need such data to be able to detect potentially adverse trends in project progress and to decide when intervention is necessary. With respect to the Stockpile Life Extension Program, NNSA does not have an adequate process for reporting life extension changes and progress, despite the fact that cost growth and schedule slippage are occurring.

In July 2002, the Office of Defense Programs issued program review guidance to enable advance planning, provide consistency, set clearer expectations, and establish a baseline process on which to improve life extension program reviews. Various review meeting formats were articulated, including a full program review of each refurbishment to be conducted monthly on a rotating basis. The goals and objectives of the full program review were to inform management of project status, convince management that the refurbishment is well managed, gain management’s assistance in resolving issues that require its involvement, and identify management decision points and obtain authority to execute risk mitigation plans. Our review of the most recent program review reports prepared on the individual refurbishments showed that they contained limited information regarding cost growth and schedule changes against established baselines.
These reports, which are prepared for senior NNSA management, show whether the respective refurbishment is on track to spend all fiscal year funding, but not whether the actual work completed has cost more or less than planned. For example:

According to W-76 program review reports presented in November 2002 and February 2003, the refurbishment was on track to spend all funding allocated for fiscal year 2003. In addition, the refurbishment was slightly behind schedule but manageable and within budget. On the other hand, the presentations gave no specifics on how much the refurbishment is behind schedule or how well the refurbishment was progressing against a life cycle cost baseline. Specifically, costs associated with certain procurements, Campaign costs, Readiness in Technical Base and Facilities costs, construction costs, and transportation costs, which make up the life cycle costs of the refurbishment, were not included. The presentations also showed that the refurbishment had not met at least two commitments during fiscal year 2002.

According to the W-80 program review report presented in December 2002, the refurbishment was on track to spend all funding allocated for fiscal year 2003. In addition, the refurbishment was within cost and within scope, but behind schedule. On the other hand, the report gave no specifics on how much the refurbishment was behind schedule or how well the refurbishment was progressing against a life cycle cost baseline. The presentation further mentioned that the refurbishment had high risks because, for instance, the Air Force was currently not funding certain work that must be performed in order to meet the established first production unit date of February 2006.

As opposed to the above reports, the B-61 program review reports presented in January and March 2003 made no summary statements regarding the refurbishment’s cost and schedule status against established baselines.
The presentations also indicated that the refurbishment is on schedule to spend all funding allocated for fiscal year 2003. On the other hand, the presentations showed that the refurbishment has already missed several commitments for fiscal year 2003, suggesting that the refurbishment may be behind schedule.

Absent the periodic reporting of specific cost growth and schedule information to senior NNSA management, we interviewed cognizant NNSA officials to document any cost growth and schedule changes associated with the individual refurbishments. These officials recognized that certain cost growth and schedule changes had occurred for each of the refurbishments. These officials added that cost growth and schedule changes are routinely discussed during meetings on the refurbishments.

According to the W-76 program manager, this refurbishment is slightly behind schedule. In particular, the W-76 did not conduct certain activities on schedule, such as deciding whether to reuse or remanufacture certain components, conducting a certain reactor test at Los Alamos National Laboratory, and constructing certain facilities at the Y-12 plant. The reasons why these activities were late varied. For instance, the decision to reuse or remanufacture certain components did not occur on schedule, according to the W-76 program manager, primarily because the NNSA person assigned to do the necessary calculations neglected to perform that task. Conversely, the reactor test at the Los Alamos National Laboratory did not occur on schedule because the laboratory unilaterally transferred funds from the W-76 refurbishment to the B-61. As to cost growth, the W-76 will need about $10.75 million in additional funding in fiscal year 2004. The funding is necessary to purchase certain commercial off-the-shelf parts that were previously not authorized or budgeted for.
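The distinction at issue in these program review reports, between spending all allocated funds and completing the planned work at the planned cost, is precisely what the earned value method described earlier is designed to capture. The following sketch, written in Python with purely hypothetical figures that are not drawn from any of the refurbishments, shows how the standard earned value indicators separate the two:

```python
# Earned value sketch: standard project performance indicators.
# All figures are hypothetical illustrations, not data from the
# B-61, W-76, or W-80 refurbishments.

def earned_value_metrics(planned_value: float, earned_value: float,
                         actual_cost: float) -> dict:
    """Compute standard earned value indicators.

    planned_value (PV): budgeted cost of the work scheduled to date.
    earned_value  (EV): budgeted cost of the work actually completed.
    actual_cost   (AC): actual cost of the work completed.
    """
    return {
        "cost_variance": earned_value - actual_cost,        # negative = over cost
        "schedule_variance": earned_value - planned_value,  # negative = behind schedule
        "cost_performance_index": earned_value / actual_cost,
        "schedule_performance_index": earned_value / planned_value,
    }

# A hypothetical refurbishment scheduled to perform $50 million of work by
# mid-year completes only $40 million worth of that work while spending its
# full $50 million allocation.
metrics = earned_value_metrics(planned_value=50.0, earned_value=40.0,
                               actual_cost=50.0)
```

In this sketch, every allocated dollar is spent, yet the cost and schedule performance indexes both fall below 1.0, indicating a project that is over cost and behind schedule. That is the kind of variance-against-baseline information the program review reports did not provide.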
According to NNSA field and Sandia National Laboratory officials, it is unlikely that the W-80 will meet its scheduled first production unit delivery date. Echoing those sentiments, according to the NNSA program manager, the W-80 was scheduled to enter phase 6.4 (production engineering) on October 1, 2002. Now, however, it is hoped that phase 6.4 will commence in 2003. The NNSA program manager indicated that the W-80 has been impacted by a lack of funding for the refurbishment from the Air Force. This lack of funding, the NNSA program manager said, has occurred because of a disconnect in planning between the 6.X process and the Department of Defense budget cycle. The Air Force had made no plans to allocate money for the W-80 in either its fiscal year 2001 or 2002 budgets. Therefore, several important joint NNSA and Air Force documents have not been completed. Certain ground and flight tests also lack funding and have been delayed. In addition, the W-80 will need an additional $8 million to $9 million in fiscal year 2003 to buy certain commercial off-the-shelf parts that had been planned but not budgeted for. According to the Air Force’s Lead Program Officer on the W-80, the Air Force, because of an oversight, had no money for the W-80 in its fiscal years 2001 and 2002 budgets. As a result, he anticipated that the first production unit delivery date will need to be slipped. He also indicated that he was working on a lessons learned report due in early 2003 to document the situation with the W-80 and help ensure that a similar funding problem does not occur with future refurbishments. This Air Force official added that in December 2002 the Air Force finally received the funding necessary to support the W-80 refurbishment. According to the NNSA director of the nuclear weapons stockpile, the W-80 will need to slip its first production unit date from February 2006 to April 2007. As a result, NNSA was rebaselining the W-80 refurbishment. 
As of July 2003, cost data submitted to NNSA headquarters from contractor laboratory and production site locations indicate that the cost to refurbish the W-80 may increase by about $288 million. NNSA officials were in the process of determining whether this cost increase was due to schedule slippage or other factors, such as the sites underestimating costs in the past.

Finally, certain schedule slippage has already occurred for the B-61. According to NNSA’s June 2001 Life Extension Program Management Plan, the original first production unit delivery date was September 2004. Now, according to the B-61 program manager, the new delivery date is June 2006. The program manager indicated that this change was made because NNSA determined that the September 2004 date was not attainable. As it is, the B-61 program manager said, the June 2006 date represents an acceleration of the phase 6.X process where activities within phases 6.3 (development engineering) and 6.4 (production engineering) will be conducted concurrently. Because of that, certain risks are involved. For instance, some design development will not be fully completed before production must be initiated to keep the refurbishment on schedule. The B-61 program manager indicated that the commencement date for phase 6.3 has already changed from August 2002 to December 2002 because of the Air Force’s lack of timely action in reviewing certain documentation. As to cost changes, a decision needs to be made regarding the production of a particular material. Two NNSA locations, which differ in cost, are being considered. If the location with the higher cost is selected, then an additional $10 million will be needed in fiscal year 2004 and beyond.

To gauge the progress of the refurbishments within the Stockpile Life Extension Program, NNSA, like all federal agencies, uses performance measures.
Performance measures, which are required by the Government Performance and Results Act of 1993, are helpful to senior agency management, the Congress, and the public. Performance measures inform senior agency management as to whether progress is being made toward accomplishing agency goals and objectives. They are also used by the Congress to allocate resources and determine appropriation levels. Performance measures are further used by American taxpayers as a means for deciding whether their tax funds are being well spent. Unfortunately, NNSA has not developed performance measures with sufficient specificity to determine the progress of the three refurbishments that we reviewed. As mentioned earlier, the agency’s current accounting system does not provide an adequate link between cost and performance measures. NNSA identifies performance measures for the W-80, B-61, and W-76 in three separate and distinct documents. One document is the narrative associated with NNSA’s fiscal year 2004 budget request for the Directed Stockpile Work account. Another is the combined program and implementation plans for the stockpile maintenance program for fiscal years 2002 through 2008. A third is the Future Years Nuclear Security Plan. Performance measures used in these documents do not identify variance from cost baselines as a basis for evaluating performance. Performance measures identified in NNSA’s fiscal year 2004 budget request are general in nature and provide no details regarding cost performance. According to that budget request, for instance, a performance measure listed for the B-61, W-76, and W-80 is to complete 100 percent of the major milestones scheduled for fiscal year 2004 to support the refurbishments’ first production unit date. None of the performance measures listed in the budget request mention adherence to cost baselines. 
Performance measures identified in the combined program and implementation plans for the Directed Stockpile Work maintenance program dated September 3, 2002, are equally minimal, vague, and nonspecific regarding refurbishment work. These plans identify performance measures at three levels—level 1, the Defense Program level, which is the highest level of actions/milestones/deliverables; level 2, which is the supporting level of actions/milestones/deliverables on the path toward achieving level 1 measures; and level 3, which is the site level of actions/milestones/deliverables on the site path toward achieving level 2 measures. According to these plans, there are no level 1 performance measures associated with the three refurbishments. For levels 2 and 3, the plans specify that the three refurbishments should meet all deliverables as identified in other NNSA documents. These plans, we noted, do not discuss adherence to cost baselines as a deliverable. Performance measures identified in the Future Years Nuclear Security Plan are also vague and nonspecific. This plan describes performance targets that NNSA hopes to achieve in fiscal years 2003 through 2007, but the plan does not associate funding levels with those targets. Some of the performance targets apply to the Stockpile Life Extension Program in general or to particular refurbishments. Regarding the latter, for example, in fiscal year 2003, NNSA intends to commence production engineering work (phase 6.4) for the B-61, W-76, and W-80 refurbishments, and to eliminate W-76, W-80, and W-87 surveillance backlogs. The plan, however, does not associate funding estimates with these performance targets. According to the Assistant Deputy Administrator for Military Application and Stockpile Operations, the refurbishment performance measures contained in the three aforementioned documents are admittedly not very good. 
He indicated that the Office of Defense Programs is moving toward linking key performance measures to appropriate NNSA goals, strategies, and strategic indicators. The Assistant Deputy Administrator stated that he hoped that the performance measures for fiscal year 2005 would provide a better basis for evaluating the refurbishments’ progress in adhering to cost baselines.

While NNSA management problems are many and long-standing, so too have been NNSA attempts to effect improvement. NNSA has repeatedly studied and analyzed ways to ensure that mistakes made in the past regarding the safety of nuclear weapons, the security of nuclear facilities, and the protection of nuclear secrets are not repeated in the future. Accordingly, NNSA has various actions underway to fix its management problems. Foremost among those actions has been the December 2002 completion of a reorganizational transformation campaign. In announcing this reorganization, the NNSA administrator said the reorganization follows the principles outlined in the President’s Management Agenda, which strives to improve government through performance and results. The new reorganization will reportedly streamline NNSA by eliminating one layer of management at the field office level. It will also improve organizational discipline and efficiency by requiring that each element of the NNSA workforce become ISO 9001 certified by December 31, 2004. ISO 9001 is a quality management standard that has been recognized around the world. The standard applies to all aspects necessary to create a quality work environment, including establishing a quality system, providing quality personnel, and monitoring and measuring quality.

In concert with NNSA’s overall reorganization has been the creation of a program integration office in August 2002. This new office will be working to create better coordination and cooperation among NNSA Office of Defense Programs elements.
The new office is composed of three divisions: one that will be performing strategic planning and studies; one that will be looking at the strategic infrastructure; and one that will be doing planning, budgeting, and integration work. The implementation plan for this new office, as of July 2003, had not yet been approved and disseminated because of a major personnel downsizing that is underway. Nonetheless, this new office has already embarked on various initiatives. One initiative is to decide on a cost baseline for the Stockpile Life Extension Program. According to the Director of Defense Programs’ Office of Planning, Budgeting and Integration, a completion date for this work has not yet been set. A second initiative is to develop an integrated master schedule for the Stockpile Life Extension Program that will help identify and resolve schedule and resource conflicts. The director indicated that such a schedule should be available at the end of calendar year 2003. A third initiative is to develop consistent criteria for reporting schedule activities and critical milestones. The director indicated that without such criteria there is no assurance that consistent information is being reported on the individual refurbishments. The director indicated that these criteria would be developed during the summer of 2003.

Of no less importance than the organizational changes, NNSA has implemented an overall planning, programming, budgeting, and evaluation process. The goal of this process is to obtain and provide the best mix of resources needed to meet national nuclear security objectives within fiscal restraints. Through planning, the process will examine alternative strategies, analyze changing conditions and trends, identify risk scenarios, assess plausible future states, define strategic requirements, and gain an understanding of the long-term implications of current choices.
Through programming, the process will evaluate competing priorities and mission needs, analyze alternatives and trade-offs, and allocate the resources needed to execute the strategies. Through budgeting, the process will convert program decisions on dollars into multiyear budgets that further refine the cost of the approved 5-year program. Through evaluation, the process will apply resources to achieve program objectives and adjust requirements, based on feedback. This process was partially rolled out for the fiscal year 2004 budget cycle, with full implementation scheduled for fiscal year 2005. A separate effort has been the establishment of a project/program management reengineering team in August 2002. According to the team’s charter, NNSA does not manage all its programs effectively and efficiently. Therefore, the mission of this team was to develop a program management system, including policies, guides, procedures, roles, responsibilities, and definitions that would enable NNSA to achieve excellence in program management. The observations of the team, as of September 2002, were that the state of health of the NNSA program management processes is very poor, and this condition significantly affects the ability of NNSA to achieve its missions effectively and efficiently. In the words of the team, many essential elements of an effective program management system do not exist. Examples given included no documented roles and responsibilities and no documented overarching process for program management. According to the team leader, an implementation plan to improve NNSA program management was submitted to the administrator for approval in October 2002. As of July 2003, the implementation plan had not been approved. According to the Director of Defense Programs’ Office of Program Integration, no action has been taken on this implementation plan while NNSA has been addressing its recent reorganization. 
It is now hoped, according to this official, that project/program improvement actions can be identified and implemented by the start of fiscal year 2004.

Extending the life of the weapons in our nation’s nuclear stockpile represents one of the major challenges facing NNSA. It will demand a budget of hundreds of millions of dollars annually for the next decade. Considerable coordination between the design laboratories and the production facilities will be necessary as the four life extensions compete for scarce resources. Where conflicts occur, trade-offs will be required—trade-offs that must be made by federal managers, contractors, and, ultimately, the Congress. None of this can occur without sound budgeting. Likewise, all parties involved in the oversight of the Stockpile Life Extension Program must be able to determine the true cost to complete the life extensions throughout the refurbishment process, identify cost overruns as they develop, and decide when intervention in those cost overruns is necessary. This cannot occur without sound cost accounting. Finally, the life extensions must be properly managed because the consequences of less than proper management are too great. Those consequences, as seen on the W-87 life extension, include potential cost overruns in the hundreds of millions of dollars and refurbishment completion occurring beyond the dates required for national security purposes. To avoid these consequences, the life extensions must have adequate planning; a clear leadership structure which fixes roles, responsibilities, and authority for each life extension; and an adequate oversight process. While NNSA has begun to put in place some improved budgeting and management processes, additional action is necessary if it is to achieve the goal of a safe and reliable stockpile that is refurbished on cost and on schedule.
To improve the budgeting associated with the Stockpile Life Extension Program, we recommend that the Secretary of Energy direct the NNSA Administrator to: include NNSA’s stockpile life extension effort as a formal and distinct program in its budget submission; present, as part of its budget request, a clear picture of the full costs associated with this program and its individual refurbishments by including the refurbishment-related costs from Campaigns, Readiness in Technical Base and Facilities, and multiple system work; and validate the budget request in accordance with DOE directives. To improve cost accounting associated with the Stockpile Life Extension Program, we recommend that the Secretary of Energy direct the NNSA Administrator to establish a managerial cost accounting process that accumulates, tracks, and reports the full costs associated with each individual refurbishment, including the refurbishment-related costs from Campaigns, Readiness in Technical Base and Facilities, and multiple system work. To improve the management of the Stockpile Life Extension Program, we recommend that the Secretary of Energy direct the NNSA Administrator to: finalize the Office of Defense Programs’ integrated program plan and, within that plan, rank the Stockpile Life Extension Program against all other defense program priorities, establish the relative priority among the individual life extension refurbishments, and disseminate the ranking across the nuclear weapons complex so that those within that complex know the priority of the refurbishment work; develop a formalized process for identifying resource and schedule conflicts between the individual life extension efforts and resolve those conflicts in a timely and systematic manner; and finalize individual refurbishment project plans. 
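The managerial cost accounting process recommended above, one that accumulates the full cost of each individual refurbishment across all of its contributing funding categories, can be illustrated with a minimal sketch. All refurbishment names, category labels, and dollar figures below are hypothetical, and the helper function `full_cost_by_refurbishment` is an illustrative construct, not an actual NNSA system.

```python
# Illustrative sketch only: a minimal cost roll-up of the kind the
# recommendation describes, accumulating the full cost of each
# refurbishment across its funding categories so that no
# refurbishment-related cost is excluded from the total.
# All names and dollar figures are hypothetical, not actual NNSA data.

from collections import defaultdict

# Each record: (refurbishment, funding category, cost in dollars)
cost_records = [
    ("W-76", "Direct refurbishment work", 120_000_000),
    ("W-76", "Campaigns", 45_000_000),
    ("W-76", "Readiness in Technical Base and Facilities", 30_000_000),
    ("B-61", "Direct refurbishment work", 80_000_000),
    ("B-61", "Campaigns", 25_000_000),
]

def full_cost_by_refurbishment(records):
    """Accumulate every cost record under its refurbishment so the
    reported total reflects all contributing categories, not just one."""
    totals = defaultdict(int)
    for refurbishment, _category, cost in records:
        totals[refurbishment] += cost
    return dict(totals)

print(full_cost_by_refurbishment(cost_records))
```

The point of the sketch is simply that a single accumulation step, applied to records tagged by refurbishment, yields the full-cost view that the report says NNSA's separate, irreconcilable tracking mechanisms cannot provide.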
With respect to management structure: establish the individual refurbishments as projects and manage them according to DOE project management requirements; clearly define the roles and responsibilities of all parties associated with the Stockpile Life Extension Program; provide the life extension program managers with the authority and visibility within the NNSA organization to properly manage the refurbishments; and require that life extension program managers and others involved in management activities receive proper project/program management training and qualification. With respect to oversight of cost and schedule: institute a formal process for periodically tracking and reporting individual refurbishment cost, schedule, and scope changes against established baselines; and develop performance measures with sufficient specificity to determine program progress. We provided NNSA with a draft of this report for review and comment. Overall, NNSA stated that it recognized the need to change the way the Stockpile Life Extension Program was managed and that it generally agreed with the report’s recommendations. For instance, NNSA stated that it had independently identified many of the same concerns, and, over the past 12 months, had made significant progress in implementing plans, programs, and processes to improve program management. NNSA indicated that full implementation of our management and budgeting recommendations will take several years; however, NNSA is committed to meeting these objectives. NNSA also provided some technical comments which it believed pointed out factual inaccuracies. We have modified our report, where appropriate, to reflect NNSA’s comments. NNSA’s comments on our draft report are presented in appendix I. 
We performed our work at DOE’s and NNSA’s headquarters and Sandia National Laboratories, Los Alamos National Laboratory, and the Kansas City plant from July 2002 through July 2003 in accordance with generally accepted government auditing standards. To determine the extent to which the Stockpile Life Extension Program’s budget requests for fiscal years 2003 and 2004 were comprehensive and reliable, we reviewed those requests as well as NNSA supporting documentation, such as guidance issued to develop those requests, information related to NNSA’s planning, programming, budgeting, and evaluation process, and budget validation reports. We also discussed those budget requests with DOE and NNSA budget officials and an official with the Office of Management and Budget. To determine the extent to which NNSA has a system for accumulating, tracking, and reporting program costs, we identified how cost data is tracked in DOE’s information systems and in selected contractors’ systems by interviewing key DOE, NNSA, and contractor officials responsible for the overall Stockpile Life Extension Program and the individual refurbishments and by reviewing pertinent documents. We also identified how DOE and NNSA ensure the quality and comparability of cost and performance data received from contractors by interviewing DOE and NNSA officials, DOE Office of Inspector General officials, and selected contractors’ internal auditors, and by reviewing pertinent documents including previously issued GAO and DOE Office of Inspector General reports. To determine the extent to which other management problems related to the Stockpile Life Extension Program exist at NNSA, we reviewed pertinent NNSA documentation, such as NNSA’s Strategic Plan, the Office of Defense Programs’ draft integrated plan, the Life Extension Program Management Plan, and project plans and variance reports required by the Life Extension Program Management Plan for the B-61, W-76, and W-80 refurbishments. 
We also interviewed key DOE, NNSA, and contractor officials involved with the Stockpile Life Extension Program, and, in particular, the program and deputy program managers of the B-61, W-76, and W-80 refurbishments. Finally, we attended the NNSA quarterly program review meetings on each of the refurbishments. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies of the report to the Secretary of Energy, the Administrator of NNSA, the Director of the Office of Management and Budget, and appropriate congressional committees. We will make copies available to others on request. In addition, the report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Major contributors to this report are listed in appendix II. In addition to the individual named above, Sally Thompson, Mark Connelly, Mike LaForge, Tram Le, Barbara House, and Stephanie Chen from our Financial Management and Assurance mission team and Robert Baney, Josephine Ballenger, and Delores Parrett from our Natural Resources and Environment mission team were key contributors to this report. 
As a separately organized agency within the Department of Energy (DOE), the National Nuclear Security Administration (NNSA) administers the Stockpile Life Extension Program, whose purpose is to extend, through refurbishment, the operational lives of the weapons in the nuclear stockpile. NNSA encountered significant management problems with its first refurbishment. NNSA has begun three additional life extensions. This study was undertaken to determine the extent to which budgetary, cost accounting, and other management issues that contributed to problems with the first refurbishment have been adequately addressed. GAO found that NNSA's budget for the Stockpile Life Extension Program has not been comprehensive or reliable. For instance, the fiscal year 2003 budget for this program was not comprehensive because it did not include all activities necessary to successfully complete each of the refurbishments. As a result, neither NNSA nor the Congress was in a position to properly evaluate the budgetary tradeoffs among the refurbishments in the program. NNSA does not have a system for tracking the full costs associated with the individual refurbishments. 
Instead, NNSA has several mechanisms that track a portion of the refurbishment costs, but these mechanisms are used for different purposes, include different types of costs, and cannot be reconciled with one another. As a result, NNSA lacks information regarding the full cost of the refurbishment work that can help identify cost problems as they develop or when management intervention in those cost problems may be necessary. Finally, NNSA does not have an adequate planning, organization, and cost and schedule oversight process. With respect to planning, NNSA has not, for instance, consistently developed a formalized list of resource and schedule conflicts between the individual refurbishments in order to systematically resolve those conflicts. Regarding organization, NNSA has not, for example, clearly defined the roles and responsibilities of those officials associated with the refurbishments or given the refurbishments' managers proper project/program management training required by DOE standards. Finally, NNSA has not developed an adequate process for reporting cost and schedule changes or developed performance measures with sufficient specificity to determine the progress of the three refurbishments that GAO reviewed. As a result, NNSA lacks the means to help ensure that the refurbishments will not experience cost overruns potentially amounting to hundreds of millions of dollars or encounter significant schedule delays.
Traditionally, the Service’s local post offices deposited their daily remittances of coin, cash, and checks in accounts with local banks. Before 1997, according to the Service’s Assistant Treasurer for Banking, the thousands of individual retail postal units deposited their daily remittances in some 9,300 bank accounts with 5,500 banks across the country. In 1997, to help reduce banking costs and improve funds availability, the Postal Service implemented consolidated banking—a Service-wide process whereby the daily remittances from the Service’s retail postal facilities are consolidated and transferred by armed bank couriers to a relatively few commercial banks for deposit. The Postal Inspection Service, one of the nation’s oldest law enforcement agencies, is the Postal Service’s law enforcement arm. With a force of about 1,400 uniformed Postal Police Officers (PPOs), over 1,900 postal inspectors, and five forensic crime laboratories, the Postal Inspection Service is responsible for ensuring the safety and security of postal employees, facilities, and assets. PPOs provide security at postal facilities where the Postal Inspection Service has determined that risk and vulnerability demonstrate a need for this level of security. Postal inspectors help enforce and investigate infractions of over 200 federal laws applicable to crimes that adversely affect or involve fraudulent use of the U.S. mail. Matters investigated include criminal incidents, such as mail theft, robberies, burglaries, and embezzlement; and the criminal use of the mail for money laundering, fraud, child exploitation, and the movement of illegal drugs. The Postal Inspection Service periodically performs reviews of remittance- processing procedures at selected individual postal facilities or at a cross section of facilities within a postal district. 
These reviews are security reviews focused primarily on compliance with the applicable policies and procedures to help prevent instances of mishandling and losses. Results of these reviews are reported to the appropriate postal district manager, and districtwide review results are reported to the appropriate postal area manager. According to the Postal Inspection Service, although its reports apprise postal officials of its findings and recommend corrective action(s), district and area managers are not required to implement these recommendations. To meet our objectives, we obtained and reviewed Service policies and procedures for controlling and securing remittances and for requiring background checks and training for employees who work with remittances. We used the Standards for Internal Control in the Federal Government, as well as the Internal Control Management and Evaluation Tool, to help assess the Service’s policies and procedures. We also obtained and reviewed applicable training manuals, orders, directives, and handbooks. Also, for each of our objectives, we discussed the applicable policies, procedures, and practices with appropriate Service officials, including headquarters officials in Postal Operations, Corporate Accounting, Corporate Treasury, Human Resources, Network Operations Management, and the Postal Inspection Service. Further, we had similar discussions with postal facility managers, supervisors, employees who process remittances, and Postal Inspection Service inspectors in the various field locations we visited. We did not review all aspects of the Service’s internal controls or its systems for accounting for remittances. For example, we did not evaluate the Service’s assessment of the risks that it faces from both internal and external sources or perform a comprehensive assessment of the Service’s security for its post offices or processing centers. 
To help determine how well the Service’s policies, procedures, and control activities are working, we obtained and reviewed Postal Inspection Service reports of investigations and other Postal Inspection Service reports dating back to fiscal year 1999. We judgmentally selected reports for review on the basis of our discussions with Postal Inspection Service officials about the remittance loss history of the various postal districts and to provide geographic dispersion. We also visited Service facilities at six locations—three in Arizona; one in Texas; one in Maryland; and one in Washington, D.C.—to observe Service policies and procedures in practice. The facilities in Arizona were chosen because they were at or near the location where a postal employee stole a substantial amount of remittances in June 2001. The other facilities were chosen to provide geographic dispersion for our observations. We performed our work between July 2001 and November 2002 in accordance with generally accepted government auditing standards. We obtained comments on a draft of this report from the Postal Service. The Service’s policies and procedures include a number of remittance control activities that, if properly implemented, would help prevent remittance losses. However, Service employees are not always following established policies and procedures, and Postal Service management does not appear to have taken effective actions to address this problem. The GAO-issued Standards for Internal Control in the Federal Government defines the minimum level of acceptable internal controls in government. Although the standards provide a general framework, the management of each agency is responsible for developing the detailed policies, procedures, and practices to fit its operations. 
For example, one of the standards for internal control states that internal control activities—the policies, procedures, techniques, and mechanisms that enforce management directives—should be effective and efficient in accomplishing the agency’s control objectives. A key control activity includes establishing physical control to secure and safeguard vulnerable assets, including limiting access to such assets and ensuring that they are periodically counted and compared with control records. Thus, effective control activities at the Service would be expected to include such reasonable activities as are necessary to physically secure, safeguard, and account for its remittances. Service policies and procedures incorporate a number of activities for controlling and securing remittances that we believe, if effectively implemented, would help prevent loss of these assets. The Service’s control activities or procedures include, among others, the requirement that there is to be continuous individual accountability of remittances, including hand-to-hand exchanges at all transfers. In addition, employees are to document the progress of remittances moving through the system to banks and notify the Postal Inspection Service immediately when discrepancies or losses are noted. Finally, postal district accounting personnel are to reconcile the electronic record of each post office’s daily sales with the bank information showing remittance deposits received from each post office. Our observations and the findings of the Postal Inspection Service show that many of these policies, procedures, and activities for controlling and securing remittances are not always followed or practiced by employees at numerous postal facilities across the country. In addition, even though the Postal Inspection Service has brought this issue to the attention of Service management over a period of several years, it appears that the Service has not taken effective actions to address the problem. 
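The reconciliation control described above, in which district accounting personnel compare each post office's electronic record of daily sales against the bank's record of deposits received, can be sketched in outline as follows. The office names, dollar amounts, and the `reconcile` helper are hypothetical illustrations, not the Service's actual systems or data.

```python
# Illustrative sketch of the reconciliation control described above:
# compare each post office's recorded daily sales against the bank's
# reported deposits and flag any mismatch, which under Service policy
# would be referred to the Postal Inspection Service immediately.
# Office names and amounts are hypothetical.

def reconcile(daily_sales, bank_deposits):
    """Return (office, recorded, deposited) tuples for every office
    whose recorded daily sales do not match the bank's deposit."""
    discrepancies = []
    for office, recorded in sorted(daily_sales.items()):
        deposited = bank_deposits.get(office, 0.0)
        # Round to cents before comparing to avoid float noise.
        if round(recorded - deposited, 2) != 0:
            discrepancies.append((office, recorded, deposited))
    return discrepancies

daily_sales = {"Phoenix Main": 18_250.75, "Tempe Branch": 9_410.00}
bank_deposits = {"Phoenix Main": 18_250.75, "Tempe Branch": 8_910.00}

for office, recorded, deposited in reconcile(daily_sales, bank_deposits):
    print(f"{office}: recorded ${recorded:,.2f}, deposited ${deposited:,.2f}")
```

The value of such a check depends on the underlying records being complete: it can surface a mismatch only where an office's sales were actually recorded and its deposit actually reported, which is why the report pairs this control with individual accountability at every hand-to-hand transfer.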
In July 2002, we visited three locations in the Service’s Arizona district. At each location, Postal Inspection Service officials accompanied us. The officials agreed with our observations that at each of these locations, Service employees carried out a number of practices that are inconsistent with Service policies and procedures for controlling and securing remittances. Each year, the Postal Inspection Service reviews control and security procedures at selected postal districts and facilities throughout the country to identify opportunities for security improvements and to measure and improve compliance with the Service’s policies and procedures. A primary goal of these reviews is to help protect remittances from mishandling, loss, and theft. The findings from these reviews are reported to Service management at the respective districts. We reviewed a number of Postal Inspection Service reports on the results of these reviews, which were completed in fiscal years 1999, 2000, 2001, and 2002. A May 1999 Postal Inspection Service report on performance audits performed at various postal facilities in the Service’s Northeast Area found that a number of policies and procedures for processing and securing remittances were not being followed. Specifically, the Postal Inspection Service found that individual accountability for remittances was not being maintained as required by Service policy and procedures. Also, in a February 1999 performance audit report, the Postal Inspection Service found that at a California postal district, postal personnel were handling remittances without signing for and accepting accountability for them. Postal Inspection Service reports of districtwide reviews conducted during May through July 2000 in six districts in the Service’s Western Area disclosed that local post offices in each of these districts often failed to take the appropriate actions required for ensuring that remittances could always be accounted for. 
For example, remittances were sometimes left unattended and unsecured. Also, certain restricted access areas were not locked, and unauthorized personnel were permitted entry. In August 2001, the Postal Inspection Service reviewed remittance- handling practices at 25 post offices in a district in the Service’s Southeast Area. Its report on these reviews stated that losses due to internal causes (employee theft and mishandling) in the district during fiscal year 2001 to date totaled about $150,000, an increase of 375 percent over the previous year’s total. The report said that these losses were attributable to practices in the district that failed to establish individual accountability for remittances and to properly secure them. During the period from early September through late December of 2001, the Postal Inspection Service reported visiting over 2,000 postal facilities nationwide and observed 252 lapses of remittance security. Its inspectors also found lapses in individual accountability of remittances and instances of unauthorized access into postal facilities through unsecured doors, including doors that were propped open and left unattended. An important internal control standard established in the Standards for Internal Control in the Federal Government pertains to the control environment. It states that management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. However, it appears that even though significant weaknesses in the Service’s controls over its remittances have been brought to the attention of Service management by the Postal Inspection Service numerous times over the last few years, Postal Service management has not taken effective action to address these weaknesses—leaving its remittances vulnerable to theft or loss. 
For example, during our visit to the Phoenix mail processing facility, a manager acknowledged that there had been deficiencies and that compliance with procedures had probably been lax for several years. In August 2001, after the theft in Phoenix, the Chief Postal Inspector, who is the head of the Postal Inspection Service, wrote to the Postal Service’s Chief Operating Officer about the Postal Inspection Service’s concern over continuing problems with remittances. The Chief Postal Inspector pointed out that losses had begun to increase in late fiscal year 1999 and were continuing to increase. He stated that since 1997, the Postal Inspection Service has frequently brought proper procedures and policy to the attention of postal management through its investigations and other reviews, and the intention of management to adopt the recommendations of the Postal Inspection Service appears genuine. He stated, however, that the implementation of proper procedures and ongoing security of remittances continue to erode and that causes could include the following: Postal Service management has been unable or unwilling to comply with procedures that will properly secure remittances. Some postal district managers have refused to comply with certain procedures to protect remittances. (In our discussions with the Postal Inspection Service, we were told that some postal district managers balked at implementing certain remittance handling procedures because they believed that the procedures would result in additional costs that would have to be absorbed by each Service district.) Other postal areas have not implemented Postal Inspection Service recommendations to improve the security of remittances. For example, in 2000 a report was issued to the manager of the Phoenix facility that detailed security issues that, had they been corrected, the Postal Inspection Service believes could have deterred or prevented the June 2001 loss of over $3 million. 
Historically, employees have not been held accountable for mishandling and subsequently losing remittances. For example, in the Postal Inspection Service’s Mid-Atlantic Division, an employee left remittances in a 24-hour lobby after the post office had closed. The remittances were stolen, but no disciplinary action was initiated against the employee. Employees may not be trained properly in processing remittances. For example, in a Washington, D.C., postal facility, a training course for clerks was discontinued in 1986. Although a reliable Service-wide sample has not been taken, the Postal Inspection Service sees this lack of training as a common inadequacy throughout the Postal Service. Certain reference material has not been updated since 1982. During a postal reorganization, a key supervisor position was abolished, leaving employees without direct supervision. The security of remittances or prevention of losses was never an established goal for postal management, unlike overnight and 2- to 3-day mail delivery scores. Therefore, compliance is not a priority. He said that a measurable security goal that includes the security of remittances should be considered. He closed by stating that the Postal Inspection Service is committed to assisting the Postal Service through aggressive remittance loss and other investigations and initiatives. He said that the Postal Inspection Service recognizes the risk this matter poses to Service employees and to the Service, and he wants to ensure this risk is minimized through proper remittance handling procedures. In December 2001, the Service’s Chief Operating Officer issued a memorandum to the Service’s Area Vice Presidents; District Managers; Processing and Distribution Center Plant Managers; and the Manager, Capital Metro Operations, stating that the Postal Inspection Service’s recent oversight work had repeatedly found that remittances were not always being controlled and protected as required. 
He said that this was not an issue of incidental oversight, but rather a condition in which “basic work processes were either not in place, or were being routinely compromised.” He concluded by pointing out that postmasters and postal employees are personally responsible for depredation or loss of remittances due to negligence or disregard of instructions and asking the addressees to ensure that appropriate controls are in place throughout their respective operations. The Service performs background checks for all prospective employees as part of a suitability test to identify applicants who possess the necessary skills, abilities, and qualifications to perform specific jobs in the Postal Service. The Service does not, however, require updated background checks for employees who are selected to process remittances. The employee suitability test is a two-part review. The first part includes an examination, which tests memorization and provides a behavioral rating; veterans’ preference check; criminal records check; military records review; drug screen; driving record review; and interview. The second part of the review takes place after the job offer and includes a medical assessment, fingerprint and Office of Personnel Management (OPM) Special Agency Check, and an evaluation that places the employee into a 90-day probationary period during which he/she receives orientation and whatever training is needed. Our review of summary documentation on remittance losses during 2001 and discussion with Postal Inspection Service officials did not indicate that updated background checks of employees selected to process remittances would have prevented any losses that have occurred. However, because employees’ background checks could have been performed years earlier, when they were initially hired, additional or updated background checks for employees before they are allowed to process remittances might reduce the risk of theft to an asset as vulnerable as cash. 
In our discussions of this issue with Postal Service officials, they pointed out that the Service had legal concerns about its authority to unilaterally impose a requirement for updated employee background checks on bargaining unit employees and that such a requirement would be subject to collective bargaining with various postal employee unions. They also said that certain employment law issues would have to be considered, such as the permissibility of drug testing and the use of arrest and conviction information. We agree that these types of issues would need to be considered in any reassessment the Service would do in connection with requiring updated background checks for employees processing remittances. However, we believe that such a reassessment is warranted given the higher risk levels associated with responsibilities that involve processing remittances. Furthermore, if the Service determines that requiring employees to undergo periodic background check updates is subject to the collective bargaining process, the issue could be raised by the Service during negotiation of the next collective bargaining agreement. The Service provides training to its employees who process remittances. However, the training materials for clerks and supervisors are outdated. In addition, the Postal Inspection Service has indicated that a lack of training could be contributing to remittance losses. Postal career employees who apply for the job are selected to process remittances on the basis of their seniority. There are no special skills or abilities required for selection other than those required of a mail distribution clerk. Those who are selected are to be trained in the specific roles they will be performing with remittances. Unit supervisors are responsible for ensuring that the employees in their units are trained. 
The training includes on-the-job training as well as a requirement that the employees complete self-paced training manuals pertaining to their specific responsibilities. Service training manuals have not been updated to address the processing of remittances. For example, one of the Service’s principal policy handbooks for operations that include processing remittances was updated in 1997; however, the training manual that addresses certain remittance processing procedures has not been similarly updated. Although we cannot say that the failure of employees to follow Service policies and procedures for securing remittances that we observed was the result of inadequate or incomplete training, the Chief Postal Inspector stated in his letter to the Postal Service’s Chief Operating Officer mentioned earlier that a lack of employee training in proper handling of remittances was among the conditions contributing to remittance losses. Within the last year, the Service established a team to develop a standard operating plan that provides detailed policies and procedures for processing remittances. According to the Service, the draft plan was recently completed, and the plan should be approved by, and implementation should begin in, November 2002. The plan is to become the national standard for processing remittances through the Service. The Service’s new policies and procedures do not address the issue of background checks for employees. According to the Service, Area managers will have responsibility for ensuring that field management train and hold accountable employees who process remittances, and all deviations from the plan are to be reported and approved by Area management. All approved deviations are to be submitted to Service headquarters. In addition, according to the Service, it is in the process of redesigning training for employees who process remittances. 
It said that the new training manuals and handbooks would incorporate the new policies and procedures for processing remittances. Finally, according to the Service, the Service’s Chief Operating Officer will disseminate the new plan through formal channels of the organization to the Area Vice Presidents, who will be instructed in their responsibility for providing the proper training in policies and procedures to employees who process remittances. In addition, the message to the Area Vice Presidents is to be reinforced by having employees watch a recently developed video that pertains to their areas of responsibility, and all training is to be documented by management. The establishment of these new policies and procedures for processing remittances is an important step in the right direction. Also, it is particularly encouraging that the Service plans to emphasize management accountability for implementing the new policies and procedures. However, until these new policies and procedures are finalized and address the remittance control problems we have identified, employees are trained in how to follow them, and they are effectively implemented, the Service’s remittances continue to be at risk. The Service has policies and procedures that, if properly implemented, would help to control and physically secure its remittances. However, the Service’s policies, procedures, and control activities are not consistently followed by employees, and it appears that Service management has not taken effective actions to address the problem. The Chief Postal Inspector has cited a lack of (1) training, (2) adequate supervision, (3) postal management follow-through, and (4) accountability as contributing factors. The Service has not updated its relevant training manuals and does not update background checks for employees selected to process remittances—thus possibly subjecting the Service to increased vulnerability to the theft of its cash. 
Until Service management actively addresses the problems of controlling and securing its cash remittances, widely identified throughout the Service by the Postal Inspection Service and by us at locations we visited, its remittances will continue to be vulnerable to mishandling, loss, and theft. We recommend that the Postmaster General

- more rigorously reinforce to managers and employees at facilities throughout the Postal Service the importance of following Service policies and procedures for controlling and securing remittances;
- hold Service managers and employees accountable for following Service policies and procedures for controlling and securing remittances and for correcting the control problems identified by the Postal Inspection Service;
- include adherence to policies and procedures for securing remittances and minimizing remittance losses in its organizational goals and performance management and pay systems, and define and enforce supervisory responsibilities to achieve these reinforcement and accountability objectives;
- reassess the current Service policy of not updating background checks of career employees prior to their being selected to process remittances; and
- update applicable training manuals that predate the Service’s adoption of its consolidated banking policy, determine whether additional training for managers and employees on the Service’s policies and procedures for physically controlling and securing remittances is needed, and, if so, see that such training is developed and provided.

We received written comments on a draft of this report from the Postal Service’s Chief Operating Officer. In his comments he stated that the Service appreciated the thoroughness of our review and the disclosure of some shortfalls in the physical security of postal remittances. He said that our findings are extremely serious, and the Service is committed to improving the current process.
He said that to that end, management teams from several departments have already developed changes in procedures to address our findings, including improvements to the procedures to secure and account for remittances. He said that the Service is well on the way to implementing procedures that will fully address the recommendations contained in our report. For example, he said that the Service is establishing an internal control unit in each district office to assess risk and compliance with remittance handling as well as other financial and operational policies and procedures. He further said that the Area Vice Presidents will be held responsible for ensuring that field managers provide training to all employees who process remittances. Specifically regarding our recommendation that the Service reassess its current policy of not updating background checks of career employees prior to their being assigned to process remittances, he said the Service is already in continuing discussion with its General Counsel on the matter. He said that the Service has legal concerns about its authority to unilaterally require updated background checks on bargaining unit employees because such a requirement could be viewed as a “term or condition of employment” subject to collective bargaining. Also, he said that it would be very costly to periodically recheck the thousands of employees who process remittances. He stated that with implementation of the new standardized remittance-processing procedures, the Service believes that it will have in place compensating controls that will be more cost effective than periodic background checks. We agree that all of these issues are important issues for the Service to consider as it reassesses its background check policy. 
If the Service determines that requiring periodic updated employee background checks is subject to the collective bargaining process, that issue could be addressed when the collective bargaining agreement comes up for renewal. As to the issue of the cost involved in periodically rechecking the backgrounds of thousands of employees, we believe that the Service’s reassessment could include considering updating the background checks only for employees who process high-value remittances. As for the issue of the Service having in place compensating controls that will be more cost effective than periodic background checks, we plan to do future follow-up work to determine the effectiveness of the new standardized remittance-processing procedures once they are in place. (Written comments received from the Chief Operating Officer are not included in this report because they contain information the Service considers law enforcement sensitive.) Other Service officials suggested technical changes, including the exclusion of information that the Service considers law enforcement sensitive, to our draft report. We have made these changes where appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Ranking Minority Member of your Committee; the Chairman and Ranking Minority Member of the Senate Subcommittee on International Security, Proliferation, and Federal Services; the Chairman and Ranking Minority Member of the Senate Subcommittee on Treasury and General Government; the Chairman and Ranking Minority Member of the House Subcommittee on Treasury, Postal Service and General Government; and the Postmaster General. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.
Major contributors to this report are acknowledged in appendix I. If you have any questions about this report, please contact me on (202) 512-2834 or at ungarb@gao.gov. In addition to those individuals named above, Tyrone Griffis, Michael J. Fischetti, Gladys Toro, Heather Dunahoo, Walter Vance, and Donna Leiss made key contributions to this report.

In fiscal year 2001, the United States Postal Service reported that it lost about $6.3 million in remittances (cash and checks) to robberies, internal theft, and mishandling. One particular loss--a June 2001 theft of over $3.2 million from a Phoenix, Arizona, postal facility by a Career Service employee--received considerable media attention. Pursuant to the request of the Chairman, House Committee on Government Reform, we agreed to review Service policies and procedures for the security of remittances by addressing the following questions: (1) Does the Service have reasonable physical controls and security to safeguard its remittances? (2) Does the Service have policies for conducting background checks of employees who process remittances? (3) Does the Service provide training to its employees who process remittances? The Service has policies and procedures for physically controlling and securing remittances. These include a number of control activities that, if properly implemented, would be effective in helping to safeguard vulnerable assets, such as cash. The control activities include, among others, requirements for continuous individual accountability of remittances. However, Service management does not always provide appropriate oversight of these activities and Service employees do not always follow the Service's policies, procedures, and activities for controlling and physically securing remittances. The Service requires a background check as part of a suitability test for all prospective new employees.
The background check includes a review of any criminal or military records, a fingerprint check by the Federal Bureau of Investigation (FBI), and a drug screening. The Service's training for postal employees who process remittances includes both on-the-job training and the use of self-paced training and development manuals related to the control and security of remittances. However, the Service's training manuals have not been appropriately updated, and in August 2001, the Service's Chief Postal Inspector cited the lack of training as a possible condition leading to the Postal Service's remittance losses.
The JSF program is a joint program between the Air Force, Navy, and Marine Corps for developing and producing next-generation fighter aircraft to replace aging inventories. The program is currently in year 3 of an estimated 11-year development phase. The current estimated cost for this phase is about $40.5 billion. In October 2001 Lockheed Martin was awarded the air system development contract now valued at over $19 billion. Lockheed Martin subsequently awarded multi-billion-dollar subcontracts to its development teammates—Northrop Grumman and BAE Systems—for work on the center and aft fuselage, respectively. Lockheed Martin has also subcontracted for the development of major subsystems of the aircraft, such as the landing gear system. This is a departure from past Lockheed Martin aircraft programs, where the company subcontracted for components (tires, brakes, etc.) and integrated them into major assemblies and subsystems (the landing gear system). In addition to the Lockheed Martin contract, DOD has prime contracts with both Pratt & Whitney and General Electric to develop two interchangeable aircraft engines. Pratt & Whitney’s development contract is valued at over $4.8 billion. Rolls Royce plc (located in the United Kingdom) and Hamilton Sundstrand are major subcontractors to Pratt & Whitney for this effort. General Electric is currently in an early phase of development and has a contract valued at $453 million. Rolls Royce Corporation (located in Indianapolis, Ind.) is a teammate and 40 percent partner for the General Electric engine program. The General Electric/Rolls Royce team is expected to receive a follow-on development contract in fiscal year 2005 worth an estimated $2.3 billion. All the prime contracts include award fee structures that permit the JSF Program Office to establish criteria applicable to specific evaluation periods. 
If, during its regular monitoring of contract execution, the program office identifies the need for more emphasis in a certain area—such as providing opportunities for international suppliers or reducing aircraft weight—it can establish related criteria against which the contractor will be evaluated to determine the extent of its award fee. The Buy American Act and Preference for Domestic Specialty Metals clause implementing Berry Amendment provisions apply to the government’s purchase of manufactured end products for the JSF program. Currently, only one JSF prime contractor—Pratt & Whitney—will deliver manufactured end products to the government in this phase of the program. Under its current contract, Pratt & Whitney is to deliver 20 flight test engines, 10 sets of common engine hardware, and certain other equipment. The other engine prime contractor, General Electric, will not deliver manufactured end products under its current contract. However, its anticipated follow-on development contract will include the delivery of test engines that will be subject to Buy American Act and Specialty Metals requirements. Finally, Lockheed Martin will not deliver any manufactured end products under its development contract. The company is required to deliver plans, studies, designs, and data. Lockheed Martin will produce 22 test articles (14 flight test aircraft and 8 ground test articles) during this phase of the program, but these are not among the items to be delivered. Although the Buy American Act will apply to manufactured end products delivered to DOD during the JSF program, its restrictions will have little impact on the selection of suppliers because of DOD’s use of the law’s public interest exception. This exception allows the head of an agency to determine that applying the domestic preference restrictions would be inconsistent with the public interest.
DOD has determined that countries that sign reciprocal procurement agreements with the department to promote defense cooperation and open up defense markets qualify for this exception. The eight JSF partners have all signed these agreements and are considered “qualifying countries.” Under defense acquisition regulations implementing the Buy American Act, over 50 percent of the cost of all the components in an end product must be mined, produced, or manufactured in the United States or “qualifying countries” for a product to qualify as domestic. Our analysis of JSF development subcontracts awarded by prime contractors and their teammates showed that nearly 100 percent of contract dollars awarded by the end of 2003 went to companies in the United States or qualifying countries. (See appendix II for Joint Strike Fighter System Development and Demonstration Subcontract Awards to the United States, Qualifying Countries, and Nonqualifying Countries). The Preference for Domestic Specialty Metals clause applies to articles delivered by Lockheed Martin, Pratt & Whitney, and General Electric under JSF contracts. Generally, this clause requires U.S. or qualifying country sources for any specialty metals, such as titanium, that are incorporated into articles delivered under the contract. This restriction must also be included in any subcontract awarded for the program. To meet Specialty Metals requirements, Lockheed Martin and Pratt & Whitney have awarded subcontracts to domestic suppliers for titanium; and Lockheed Martin has also extended to its subcontractors the right to buy titanium from its domestic supplier at the price negotiated for Lockheed Martin. General Electric does not exclusively use domestic titanium in its defense products. However, in 1996, the company received a class deviation from the clause that allows it to use both domestic and foreign titanium in its defense products, as long as it buys sufficient domestic quantities to meet DOD contract requirements. 
For instance, if 25 percent of General Electric’s business in a given year comes from DOD contracts, then at least 25 percent of its titanium purchases must be procured from domestic sources. Similar to the Buy American Act, the Specialty Metals clause contains a provision related to “qualifying country” suppliers. It provides that the clause does not apply to specialty metals melted in a qualifying country or incorporated in products or components manufactured in a qualifying country. As a result, a qualifying country subcontractor would have greater latitude under the clause than a U.S. subcontractor. Specifically, the specialty metals incorporated into an article manufactured by a qualifying country may be from any source, while an article manufactured by a U.S. subcontractor must incorporate specialty metals from a domestic or qualifying country source. (See fig. 1.) The data we collected on JSF subcontracts show that by December 31, 2003, the prime contractors and their teammates had awarded over $14 billion in subcontracts for the development phase of the program. These subcontracts were for everything from the development of subsystems—such as radar, landing gear, and communications systems—to engine hardware, engineering services, machine tooling, and raw materials. The recipients of these contracts included suppliers in 16 foreign countries and the United States; 73.9 percent of the subcontracts by dollar value went to U.S. companies and 24.2 percent went to companies in the United Kingdom (the largest foreign financial contributor to the JSF program). (See appendix I for Joint Strike Fighter Partner Financial Contributions and Estimated Aircraft Purchases and appendix II for Joint Strike Fighter System Development and Demonstration Subcontract Awards.) Finally, 2,597 of 4,488 subcontracts or purchase orders we obtained information on went to U.S. small businesses.
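As an illustrative sketch only (the figures and function names below are hypothetical, not drawn from the report or the regulations' actual text), the two domestic-source tests described earlier, the component-cost test under the Buy American Act and the proportional titanium requirement under General Electric's class deviation, reduce to simple ratio checks:

```python
# Hypothetical sketch of the two ratio tests described in the text.
# This is not an implementation of the actual regulations; the figures
# and function names are invented for illustration.

def qualifies_as_domestic(component_costs, qualifying_origins):
    """Component-cost test as described above: over 50 percent of total
    component cost must come from the United States or a qualifying
    country for the end product to qualify as domestic."""
    total = sum(cost for _, cost in component_costs)
    qualifying = sum(cost for origin, cost in component_costs
                     if origin in qualifying_origins)
    return qualifying > 0.5 * total

def meets_class_deviation(dod_revenue, total_revenue,
                          domestic_titanium, total_titanium):
    """Class-deviation test as described above: the domestic share of
    titanium purchases must at least match the DOD share of business."""
    return (domestic_titanium / total_titanium) >= (dod_revenue / total_revenue)

# Hypothetical end product: 85 percent of component cost comes from
# qualifying origins, so the product qualifies as domestic.
components = [("United States", 600_000),
              ("United Kingdom", 250_000),   # a qualifying country
              ("Nonqualifying", 150_000)]
print(qualifies_as_domestic(components, {"United States", "United Kingdom"}))  # True

# The report's worked example: 25 percent of business from DOD requires
# at least 25 percent domestic titanium purchases.
print(meets_class_deviation(25, 100, 25, 100))  # True
```

Note that the first test is a strict "over 50 percent" threshold, so a product whose qualifying component cost is exactly half would not qualify under this sketch.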
Although these businesses received only 2.1 percent of the total dollar value of the subcontracts awarded, DOD and contractor officials have indicated that all companies in the development phase are in good position to receive production contracts, provided that cost and schedule goals are met. The gathering of these data, which most of the contractors have made available to the JSF Program Office and DCMA, has increased the breadth of knowledge available to DOD and the program office on the JSF supplier base. Neither DOD nor the JSF program office previously collected this information because, according to program officials, this information is not necessary in order to manage the program. At least one major subcontractor, on its own initiative, is now separately tracking JSF subcontracts on a monthly basis. While the JSF Program Office maintains more information on subcontractors than required by acquisition regulations, this information does not provide the program with a complete picture of the supplier base. The JSF Program Office collects and maintains data on subcontract awards for specific areas of interest—international suppliers and U.S. small businesses. The program office has used the award fee process to incentivize the prime contractors to report on both small business awards through the third tier and subcontract opportunities and awards to international suppliers. In addition, the program office has some visibility over certain subcontracts through mechanisms such as monthly supplier teleconferences, integrated product teams, informal notifications of subcontract awards, and DCMA reports on the performance of major suppliers. Finally, the JSF Program Office maintains limited information on the companies responsible for supplying critical technologies. The JSF Program Office’s information on the suppliers of key or critical technologies is based on lists that the prime contractors compile as part of the program protection strategy. 
These program protection requirements—not the supplier base—are the focus of DOD’s and the JSF Program Office’s approach toward critical technologies. DOD acquisition regulations require program managers to maintain lists of a program’s key technologies or capabilities to prevent the unauthorized disclosure or inadvertent transfer of leading-edge technologies and sensitive data or systems. The lists include the names of key technologies and capabilities, the reason the technology is sensitive and requires protection, and the location where the technology resides. The lists do not provide visibility into the lower-tier subcontracts that have been issued for developing or supplying these technologies. Given the limited supplier information these lists provide, the JSF Program Office is aware of two instances where a foreign company is the developer or supplier of an unclassified critical technology for the program. In both cases, a U.S. company is listed as a codeveloper of the technology. The JSF program has the potential to significantly impact the U.S. defense industrial base. Suppliers chosen during the JSF development phase will likely remain on the program through production, if they meet cost and schedule targets, and will reap the benefits of contracts potentially worth over $100 billion. Therefore, contracts awarded now will likely affect the future shape of the defense industrial base. The JSF supplier base information currently maintained by the JSF Program Office is focused on specific areas of interest and does not provide a broad view of the industrial base serving the program. In our July 2003 report, we recommended that the JSF Program Office assume a more active role in collecting information on and monitoring the prime contractors’ selection of suppliers to address potential conflicts between the international program and other program goals. 
DOD concurred with our recommendation, but did not specify how it plans to collect and monitor this information. Collecting this information will be an important first step for providing DOD with the knowledge base it needs to assess the impact of the program on the industrial base. We provided DOD a draft of this report for review. DOD provided only technical comments, which we incorporated as appropriate. To obtain information on the Buy American Act and the Preference for Domestic Specialty Metals clause implementing Berry Amendment provisions, we reviewed applicable laws and regulations. We interviewed DOD officials in the JSF Program Office, the Office of the Deputy Under Secretary of Defense (Industrial Policy), the Office of the Director of Defense Procurement and Acquisition Policy, and the Defense Contract Management Agency to obtain information on the applicability of the Buy American Act and other domestic source restrictions, critical foreign technologies, and DOD oversight of subcontracts. We reviewed prime contracts for the JSF program and met with JSF prime contractors, including Lockheed Martin and the engine contractors, Pratt & Whitney and General Electric, to discuss the applicability of the Buy American Act and other domestic source restrictions and to collect data on first-tier subcontract awards for the System Development and Demonstration phase. Furthermore, we collected data on subcontract awards for the JSF System Development and Demonstration phase from companies that were identified as partners or teammates by Lockheed Martin, Pratt & Whitney, and General Electric. These companies included Northrop Grumman, BAE Systems, Rolls Royce plc, Hamilton Sundstrand, and Rolls Royce Corporation. We did not independently verify subcontract data but, instead, relied on DCMA’s reviews of contractors’ reporting systems to assure data accuracy and completeness. 
We performed our review from August 2003 to March 2004 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. We will then send copies of this report to interested congressional committees; the Secretary of Defense; the Secretaries of the Navy and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-4841; or Thomas J. Denomme, Assistant Director, at (202) 512-4287. Major contributors to this report were Robert L. Ackley, Shelby S. Oakley, Sylvia Schatz, and Ronald E. Schwenn.

As the Department of Defense's (DOD) most expensive aircraft program, and its largest international program, the Joint Strike Fighter (JSF) has the potential to significantly affect the worldwide defense industrial base. As currently planned, it will cost an estimated $245 billion for DOD to develop and procure about 2,400 JSF aircraft and related support equipment by 2027. In addition, the program expects international sales of 2,000 to 3,500 aircraft. If the JSF comes to dominate the market for tactical aircraft as DOD expects, companies that are not part of the program could see their tactical aircraft business decline. Although full rate production of the JSF is not projected to start until 2013, contracts awarded at this point in the program will provide the basis for future awards. GAO was asked to determine the limits on and extent of foreign involvement in the JSF supplier base.
To do this, GAO (1) determined how the Buy American Act and the Preference for Domestic Specialty Metals clause apply to the JSF development phase and the extent of foreign subcontracting on the program and (2) identified the data available to the JSF Program Office to manage its supplier base, including information on suppliers of critical technologies. DOD provided technical comments on a draft of this report, which GAO incorporated as appropriate. The Buy American Act and Preference for Domestic Specialty Metals clause implementing Berry Amendment provisions apply to the government's purchase of manufactured end products for the JSF program. Currently, only one of the three JSF prime contractors is under contract to deliver manufactured end products to the government in this phase of the program. The Buy American Act will apply to manufactured end products delivered to DOD during subsequent phases, but it will have little impact on the selection of suppliers because of DOD's use of the law's public interest exception. DOD, using this exception, has determined that it would be inconsistent with the public interest to apply domestic preference restrictions to countries that have signed reciprocal procurement agreements with the department. All of the JSF partners have signed such agreements. DOD must also apply the Preference for Domestic Specialty Metals clause to articles delivered under JSF contracts. All three prime contractors have indicated that they will meet these Specialty Metals requirements. While the JSF Program Office maintains more information on subcontractors than required by acquisition regulations, this information does not provide the program with a complete picture of the supplier base. The program office collects data on subcontract awards for international suppliers and U.S. small businesses. In addition, it maintains lists of the companies responsible for developing key or critical technologies. 
However, the lists do not provide visibility into the lower-tier subcontracts that have been issued for developing or supplying these technologies.
Select agent regulations do not mandate that specific perimeter security controls be present at BSL-4 labs, resulting in significant differences in perimeter security among the nation’s five labs. According to the regulations, each lab must implement a security plan that is sufficient to safeguard select agents against unauthorized access, theft, loss, or release. However, there are no specific perimeter security controls that must be in place at every BSL-4 lab. While three labs had all or nearly all of the key security controls we assessed, our September 2008 report demonstrated that two labs lacked a significant number of these controls (see table 1 below). Lab C: Lab C had in place only 3 of the 15 key security controls we assessed. The lab was in an urban environment and publicly accessible, with only limited perimeter barriers. During our assessment, we saw a pedestrian access the building housing the lab through the unguarded loading dock entrance. In addition to lacking any perimeter barriers to prevent unauthorized individuals from approaching the lab, Lab C also lacked an active integrated security system. Without a command and control center or an integrated security system with real-time camera monitoring, security officers are far less likely to detect an intruder entering the perimeter and respond to such an intrusion. Lab E: Lab E was one of the weakest labs we assessed, with 4 of the 15 key controls in place. It had only limited camera coverage of the outer perimeter of the facility, and the only vehicular barrier consisted of an arm gate that swung across the road. Although the guard houses controlling access to the facility were manned, they appeared antiquated and thus did not portray a strong, professional security infrastructure. The security force charged with protecting the lab was unarmed.
Of all the BSL-4 labs we assessed, this was the only lab with an exterior window that could provide direct access to the lab. In lieu of a command and control center, Lab E contracts with an outside company to monitor its alarm from an off-site facility. This adds an unnecessary layer that would not exist with a command and control center and could slow the response of emergency responders. Since the contracted company is not physically present at the facility, it is not able to ascertain the nature of an alarm activation. Furthermore, the alarms and cameras are not interfaced into a single security system, and the cameras are not monitored in real time. Although the presence of the controls we assessed does not automatically ensure a secure perimeter, having most of these controls in place and operating effectively reduces the likelihood of intrusion. As such, we recommended that the Director of the CDC take action to implement specific perimeter controls for all BSL-4 labs to provide assurance that each lab has a strong perimeter security system in place. As part of this recommendation, we stated that the CDC should work with USDA to coordinate its efforts, given that both agencies have the authority to regulate select agents. In its response to the report, HHS agreed that perimeter security is an important deterrent against theft of select agents. HHS indicated that the difference in perimeter security at the five labs was the result of risk-based planning; however, it did not comment on the specific vulnerabilities we identified or whether these should be addressed. In regard to requiring specific perimeter controls for all BSL-4 labs, HHS stated that it would perform further study and outreach to determine whether additional federal regulations are needed. Significant perimeter security differences continued to exist among the nation’s five BSL-4 labs operational at the time of our most recent assessment.
As of May 2009, CDC has taken limited steps to address our recommendation that it take action to implement specific perimeter security controls for all BSL-4 labs. Since the release of our report in September 2008, CDC stated that the following actions have been taken:

- In late 2007, CDC, along with other federal agencies, established a U.S. Government Trans-Federal Task Force on Optimizing Biosafety and Biocontainment Oversight. The task force was formed to assess the current framework for local and federal oversight of high-containment laboratory research activities and facilities, including the identification and assessment of pertinent laws, regulations, policies, and guidelines, and examination of the current state of biosafety oversight systems. The task force held a public consultation meeting in December 2008. According to CDC, the task force will communicate specific recommendations about the nation’s lab safety and security issues to the Secretaries of both HHS and USDA.
- CDC and USDA hosted a workshop series in Greenbelt, Maryland, in December 2008 for all of their registered entities and partners. CDC stated that the series included several safety and security topics, including discussion of physical security and operational security.
- In January 2009, in response to Executive Order 13486, a federal working group (WG) was convened to review current laws, regulations, and guidelines in place to prevent theft, misuse, or diversion to unlawful activity of select agents and toxins. The WG is chaired by HHS and the Department of Defense (DOD), includes representatives from several federal agencies, and includes a subgroup focused on physical and facility security of biolabs. The WG is expected to issue its final report to the President by July 2009.

Although CDC has taken some modest steps toward studying how to improve perimeter security controls for all BSL-4 labs, CDC has not established a detailed plan to implement our recommendation.
In addition, we requested documentation (e.g., minutes, interim reports) from the WG to substantiate whether progress was made in addressing our concerns. However, the WG responded that it does not expect to make any interim reports, and it refused to provide us the minutes of its meetings. Without a detailed plan from CDC on what corrective actions are planned, or information on any progress from the WG, it is impossible to monitor CDC’s progress in implementing our recommendation to improve perimeter security controls for all BSL-4 labs. The ability to monitor progress openly and in a transparent manner is especially important because a sixth BSL-4 lab recently became operational, as mentioned above, and CDC expects more BSL-4 labs to be operational in the future. Although CDC has taken limited action to address our original findings, the two deficient BSL-4 labs have made progress on their own. One BSL-4 lab made a significant number of improvements to increase perimeter security, thus reducing the likelihood of intrusion. The second made three changes and formed a committee to consider and prioritize other changes. We confirmed the following improvements at Lab C:

- Visitors are screened by security guards and issued visitor badges.
- A command and control center was established. Camera coverage includes all exterior lab entrances, and the CCTV is monitored by the command and control center. The cameras currently cover the exterior of the building, and guards can control them by panning, zooming, or tilting.
- One visible guard is present at the main entrance to the lab, but the guard is not armed. A guard mans the entrance 24 hours a day, 7 days a week. Although the guard is unarmed, this improvement partially addresses the requirement for guard presence at lab public entrances. Lab officials described installing armed guards as cost prohibitive.
While the loading dock is still located inside the footprint of the main building, Lab C improved its loading dock security by building a loading dock vehicle gate. Moreover, a pedestrian gate with a sign forbidding entry was built to prevent pedestrians from entering the building through the loading dock, which they had previously used as a shortcut into the building. These new gates prevent individuals from walking into the building, or vehicles from driving up to it, unchallenged. Lab officials said additional enhancements would be completed by fall 2009. These include an active intrusion detection system integrated with CCTV and the addition of 14 new interior cameras with pan, tilt, and zoom capabilities. The new cameras will enhance the interior perimeter security of the lab, and the command and control center will have access to and control of them. After these improvements are finished, the lab will have 8 of the 15 controls we tested in place, plus 2 others that are partially addressed.

We verified that three improvements were made at Lab E: heavy concrete planters were added as a vehicle barricade along the roadside adjacent to the building; the window was frosted to block sight lines into the lab from nearby rooftops; and a vehicle barricade is being constructed to block unauthorized access to the parking lot adjacent to the lab, thereby increasing the blast stand-off area. The lab also formed a committee to consider additional perimeter security measures, such as widening buffer zones and increasing lighting at the perimeter fence. In all, the lab now has 6 of the 15 controls we assessed in place. Although lab officials made three improvements and are considering others, the lab’s head of research operations objected to the findings of our 2008 report and has challenged the 15 controls we deemed critical to strong perimeter security.
He said that the officials from the lab were not afforded an opportunity to respond to the report and correct “inaccuracies.” Specifically, he made the following comments on our previous findings:

He questioned the basis for our selection of the specific 15 controls we identified as critical to perimeter security, and noted that CDC also expressed similar concerns in its comments on our 2008 report.

The lab windows do not provide direct access to the lab. He maintained that a number of features prohibited entry by these windows: the lowermost edge of the windows is more than 7 feet 8 inches above ground level; the windows are certified bulletproof glass and are equipped with inside bars; and breaching the integrity of the outer bulletproof glass triggers alarms for the local guard force. Furthermore, he said that having such a window was deemed programmatically important when the laboratory was designed in order to provide light-dark orientation for laboratory workers. Finally, he represented that a group of nationally recognized security experts has opined that the windows are not a security threat, but did not provide evidence of these experts’ assessment.

Armed guards are present on the campus. He stated that a table in our 2008 report indicates that armed guards are not present on the campus, although a footnote on a subsequent page acknowledges that an armed security supervisor patrols the facility.

A vehicle barrier does surround the perimeter of that portion of the laboratory building housing select agents, including the BSL-4 laboratory. He said it was recommended and approved by the Federal Bureau of Investigation during consultations on the safety of the building and installed in 1999, prior to initiation of research in this facility.

We continue to believe that our assessment of perimeter controls at Lab E is accurate.
Specifically, we disagree with Lab E’s position as follows:

As stated in the report, we developed the 15 security controls based on our expertise in performing security assessments and our research of commonly accepted physical security principles. Although we acknowledge that the 15 security controls we selected are not the only measures that can be in place to provide effective perimeter security, we determined that these controls (discussed in more detail in app. I) represent a baseline for BSL-4 lab perimeter physical security and contribute to a strong perimeter security system. Having a baseline provides a fair representation of what key perimeter security controls do or do not exist at these facilities. The controls represent commonly accepted physical security principles, and a lack of such controls represents a potential security vulnerability. For example, at the time of our original assessment, Lab E had only limited camera coverage of the outer perimeter of the facility. Camera coverage of a building’s exterior provides a means to detect and quickly identify potential intruders.

As mentioned above, Lab E was the only lab with an exterior window that could provide direct access to the lab. This window allowed direct “visual” access into the lab area from an adjacent rooftop. Lab E in essence acknowledged this when it informed us in a letter that it “Frosted the BSL-4 laboratory windows to block sight lines from adjacent rooftops.” While we credit Lab E for obscuring visual access to the lab by frosting this window, the window continues to pose a security vulnerability because it is not blast proof.

Armed guards are not present on the campus. As mentioned above, Lab E’s head of research operations pointed out that our 2008 report acknowledged that an armed security supervisor patrols the facility.
However, a single armed security supervisor does not constitute the “guards,” in the plural, that this control calls for. The supervisor also is not generally at the entrances to the facility; he normally responds to incidents and would not generally be in a position to confront an intruder at the point of attack. Furthermore, placing armed guards at entrances also functions as a deterrent.

The vehicle barrier did not surround the full perimeter of the BSL-4 lab building, which adjoined another lab building, at the time of our original assessment. The facility has since placed additional barriers, as noted in this report, to give full coverage, thus validating our original assessment. Furthermore, part of the barrier in the area between a small parking lot and the BSL-4 lab building did not provide an adequate blast stand-off area. The lab, as noted in this report, has since erected barriers to this parking lot to allow only deliveries into the area.

During the course of our work, we made two additional observations concerning perimeter security differences among the nation’s five BSL-4 labs that were operational at the time of our assessment. All five BSL-4 labs operating in 2008 had a security plan in place when we assessed them, yet significant perimeter security differences exist among these high-containment labs. A reason for the discrepancies can be found in the additional federal security requirements that the three labs with strong perimeter security controls in place had to follow beyond the select agent regulations. For example, Lab B is a military facility subject to far stricter DOD physical security requirements; it had a perimeter security fence and roving patrol guards visible inside and outside this fence. Labs A and D also must meet additional mandates from the federal agencies that oversee them. A lack of minimum perimeter security requirements contributes to sharp differences among BSL-4 labs as well.
CDC inspection officials stated their training and experience had been mainly in the area of safety. They also noted that their philosophy is a layered approach to security and safety. According to CDC officials, they are developing a comprehensive strategy for safety and security of biosafety labs and will adjust the training and inspection process accordingly to match this comprehensive strategy. We briefed CDC on the results of our work, and received comments from CDC by e-mail. In its response, CDC stated that it agrees that perimeter security is an important deterrent against theft of select agents and should be considered as one component of overall security at select laboratories. CDC stated that a comprehensive approach to securing select agents should be taken, and should include basic components such as physical security, personnel security, information security, transport security, and material control and accountability. CDC stated that its Select Agent Regulations reflect this comprehensive approach to securing agents and provide performance standards that entities must implement to protect agents from theft, loss, or release. CDC also stated that multiple groups are assessing the issue of laboratory security and developing related recommendations. CDC stated that it will consider our prior recommendation and the reports from the multiple groups together before developing a detailed plan to address security at select agent laboratories. As part of this commitment, CDC stated that it is in the process of hiring a Security Officer to ensure that CDC has a continuing focus on security at the laboratories. According to CDC, the Security Officer will work with USDA to consider the recommendations from us and others in developing the plan to enhance security at select agent laboratories. 
In addition, CDC stated that it, in coordination with USDA, will seek input as to the need and advisability of requiring by federal regulation specific perimeter controls at each registered entity having a BSL-4 laboratory. CDC will initiate this process once all of the recommendations from the aforementioned groups have been received. CDC’s stated intent to study our prior recommendation in improving laboratory security is an important response to the security issues that have been identified. We also provided officials from Lab C and Lab E with the pertinent sections of a draft of this report that covered the results of our most recent perimeter security assessment of their labs, to which they responded with comments. Lab C officials provided additional details about several changes they made or plan to make to the lab’s perimeter security controls, including changes to its CCTV, camera coverage, loading dock, barriers, and blast stand-off area. For example, Lab C officials said they are extending the sidewalks and installing landscaping features around the lab building to increase the size of the blast stand-off area. According to officials from Lab E, they plan to submit a grant application for additional perimeter security improvements, including an intrusion detection system at the perimeter fence and expanded CCTV coverage of key perimeter areas. We did not verify the perimeter security enhancements from Lab C and Lab E because these changes were made or planned subsequent to our most recent assessment. Officials from these labs also provided technical comments on the draft language from our report that we have incorporated throughout the report, as appropriate. As agreed with your office, unless you announce the contents of this report earlier, we will not distribute it until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Director of CDC, and other interested parties. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

To perform our perimeter security assessment of biosafety level 4 (BSL-4) labs, we identified 15 key perimeter security controls. We based their selection on our expertise and research of commonly accepted physical security principles that contribute to a strong perimeter security system. A strong perimeter security system uses layers of security to deter, detect, delay, and deny intruders:

Deter. Physical security controls that deter an intruder are intended to reduce the intruder’s perception that an attack will be successful—an armed guard posted in front of a lab, for example.

Detect. Controls that detect an intruder could include video cameras and alarm systems. They could also include roving guard patrols.

Delay. Controls that delay an intruder increase the opportunity for a successful security response. These controls include barriers such as perimeter fences.

Deny. Controls that can deny an intruder include visitor screening that only permits authorized individuals to access the building housing the lab. Furthermore, a lack of windows or other obvious means of accessing a lab is an effective denial mechanism.

Some security controls serve multiple purposes. For example, a perimeter fence is a basic security feature that can deter, delay, and deny intruders. However, a perimeter fence on its own will not stop a determined intruder. This is why, in practice, layers of security must be integrated in order to provide the strongest protection.
Thus, a perimeter fence should be combined with an intrusion detection system that would alert security officials if the perimeter has been breached. A strong system would then tie the intrusion detection alarm to the closed-circuit television (CCTV) network, allowing security officers to immediately identify intruders. A central command center is a key element for an integrated, active system. It allows security officers to monitor alarm and camera activity—and plan the security response—from a single location. Table 3 shows the 15 physical security controls we focused on during our assessment work.

Gregory D. Kutz, (202) 512-6722 or kutzg@gao.gov. In addition to the contact named above, the following individuals made contributions to this report: Andy O’Connell, Assistant Director; Matt Valenta, Assistant Director; Christopher W. Backley; Randall Cole; John Cooney; Craig Fischer; Vicki McClure; Anthony Paras; and Verginie Tarpinian.

Biosafety laboratories are primarily regulated by either the Department of Health and Human Services (HHS) or the U.S. Department of Agriculture (USDA), depending on whether the substances they handle pose a threat to the health of humans or of plants, animals, and related products, respectively. Currently, all operational biosafety level 4 (BSL-4) labs are overseen by HHS's Centers for Disease Control and Prevention (CDC). BSL-4 labs handle the world's most dangerous agents and toxins that cause incurable and deadly diseases. In September 2008, GAO reported that two of the five operational BSL-4 labs had less than a third of the key perimeter security controls GAO assessed and recommended that CDC implement specific perimeter controls for all BSL-4 labs.
GAO was asked to (1) provide an update on what action, if any, CDC took to address the 2008 recommendation; (2) determine whether perimeter security controls at the two deficient BSL-4 labs had improved since the 2008 report; and (3) provide other observations about the BSL-4 labs it assessed. To meet these objectives, GAO reviewed CDC's statement to Congress as well as other agency and HHS documentation on actions taken or to be taken with respect to the 2008 recommendation, reviewed new security plans for the two deficient BSL-4 labs, and performed another physical security assessment of these two labs. GAO is not making any recommendations. Significant perimeter security differences continue to exist among the nation's five BSL-4 laboratories operational at the time of GAO's assessment. In 2008, GAO reported that three of the five labs had all or nearly all of the 15 key controls GAO evaluated. Two labs, however, demonstrated a significant lack of these controls, such as camera coverage for all exterior lab entrances and vehicle screening. As a result, GAO recommended that CDC work with USDA to require specific perimeter security controls at high-containment facilities. However, to date, CDC has taken limited action on the GAO recommendation. The two labs GAO found to be deficient made progress on their own despite CDC's limited action. One made a significant number of improvements, thus reducing the likelihood of intrusion. The second made a few changes and formed a committee to consider and prioritize other improvements. Two additional observations about BSL-4 labs concern the significant perimeter security differences among the five labs GAO originally assessed for its 2008 report. First, labs with stronger perimeter controls had additional security requirements mandated by other federal agencies. For example, one lab is a military facility subject to far stricter Department of Defense physical security requirements. 
Second, CDC inspection officials stated that their training and experience have been focused on safety. CDC officials said they are developing a comprehensive strategy for the safety and security of labs and will adjust the training and inspection process to match this strategy. In commenting on findings from this report, CDC and the two labs provided additional information on steps taken in response to GAO's prior recommendation and findings.
Before restructuring, electric service was provided primarily by federal- and state-regulated investor-owned electric utilities. A utility typically owned the power plants, transmission system, and local distribution lines that supplied electricity to all of the consumers in a geographic area. Under this system, the Federal Energy Regulatory Commission (FERC) regulated, among other things, sales of electricity for resale and the transmission of electricity over high-voltage power lines in interstate commerce. The states regulated retail markets by participating with utilities in forecasting growth in demand, planning and building new power plants, reviewing and approving utility costs, and establishing rates of return. In response to the enactment of the Energy Policy Act of 1992, FERC has opened wholesale electricity markets across the country, and many states have also opened their retail markets to competition. In these competitive markets, consumers will eventually pay market-based electricity prices, and power plant developers are no longer guaranteed that construction costs will be repaid or that the electricity produced will be sold profitably. In these markets, it was expected that independent developers would individually assess the need for new generation and its potential profitability. These assessments would be made on the basis of market signals, such as the prices of electricity and other related products and forecasts of the generation required to meet growing demand. As shown in figure 1, the U.S. electricity transmission system consists of three connected, but independently operating systems: the western interconnect, the eastern interconnect, and the Texas interconnect. Each of these systems must maintain a constant balance between the amount of electricity supplied by power plants and the amount of electricity being used at homes and businesses. 
While little electricity moves from one system to another, electricity produced within each system can move throughout the system, subject to transmission system constraints that can limit or prevent the flow of electricity within certain regions of the system. The level of electricity demand varies considerably throughout the day, with the highest levels only reached during a small percentage of the hours during a year. In addition, unlike other commodities, electricity cannot easily or inexpensively be stored and must be instantly available whenever demand increases. Because these systems are interconnected, a change in the supply or demand in one part of the system can affect producers and consumers elsewhere. To ensure that supply exceeds the demand for electricity, utility systems have historically maintained additional power plants, as part of a reserve margin, above the amount needed to meet the highest level of expected demand. This reserve margin has enabled utilities to meet demand when a power plant was taken out of service or when demand rose more than expected. As part of the western interconnect, California has historically imported about 20 percent of the electricity that it consumes. While California’s utilities had owned power plants located in California and other states as part of their supply mix before restructuring, they have since sold most of these plants to private companies not regulated by California. In contrast, in recent years, Pennsylvania has exported more electricity than it has imported. Although some of the power plants owned by the state’s former utilities were sold as a result of restructuring, the plants have long-term contracts to sell electricity in Pennsylvania. Power plants in Texas generate nearly all of the electricity that the state consumes. 
The state’s utilities have leased access to generating capacity at some of their plants, and some plants have been sold; however, the leased utility plants are operated by subsidiaries of the former utilities. As part of its efforts to restructure the industry, FERC issued regulatory orders that require transmission system owners to allow all parties, including new power plant developers, to transmit electricity under comparable terms and conditions. FERC has approved the formation of independent organizations to operate the transmission system in California and other states. An example of this new type of organization is the PJM Interconnect, which operates the transmission system in all or parts of Pennsylvania, New Jersey, Maryland, Delaware, and Washington, D.C. FERC also directed transmission system owners to create multistate regional transmission organizations to operate the systems independently of the transmission owners. To maintain the reliability of the transmission system, transmission owners and operators participate in the North American Electric Reliability Council (NERC) through 10 regional reliability councils. These regions cooperate in planning and integrating the transmission system and study trends in long-term supply and demand. U.S. electricity markets have attracted significant planned investment in addition to the nearly 770,000 megawatts of generating capacity already on-line at the end of 1995. Through the end of 2001, developers had proposed or added about 690,000 megawatts of new electricity generating capacity, of which about 114,000 megawatts were already built and another 123,000 megawatts were under construction. Industry data indicate that about 104,000 megawatts of proposed plants had been either tabled or cancelled, with the remainder in various stages of planning or development.
About 40 percent of the proposed generating capacity was planned for states identified as active in implementing restructured electricity markets, and 20 percent for states that have actively pursued electricity restructuring but have either delayed or suspended further actions. While coal, nuclear power, water (hydroelectric dams), and oil are the primary fuels for older power plants, natural gas-fueled power plants accounted for over 80 percent of the generating capacity added from 1995 through 2001 and a similar percentage of the plants proposed for construction through the end of 2001. About 62 percent of the gas-fired plant capacity proposed through 2001 would use highly energy-efficient combined-cycle technologies, and 35 percent would use simple-cycle technologies. Both types of power plants rely on large gas turbines, also called combustion turbines, with combined-cycle units adding a steam generator and a steam turbine to convert waste heat in the exhaust stream to electricity. In general, both types of plants are more fuel efficient, less costly to operate, and less polluting than many existing power plants. Because of their higher efficiency and relatively low operating costs, combined-cycle power plants are often used to generate electricity through large portions of the day. In contrast, simple-cycle power plants typically are used to generate electricity only during periods of high demand because they cost more to operate. These plants are useful in meeting sudden changes in demand because they can reach full output in as little as 10 minutes. In general, simple-cycle power plants can be constructed in about 6 to 9 months after regulatory approvals, while combined-cycle power plants need from 18 to 28 months. Electricity demand in Texas, California, and Pennsylvania grew faster from 1995 through 2001 than NERC had forecast in 1995. 
In response, developers in Texas added the most new capacity—about 16,200 megawatts, or more than double the forecasted need through 2004. In contrast, developers in California added about 4,600 megawatts, or 25 percent of the forecasted need for capacity through 2004, and developers in Pennsylvania added about 2,100 megawatts, or less than half of its forecasted need through 2004. More recently, each state has seen significant cancellations and postponements of projects, with California experiencing the greatest drop. Developers and investment firms noted that events in the past year—the economic downturn, the terrorist attacks on September 11, and the collapse of the Enron Corporation—contributed to the cancellation of many proposed projects in the United States and the world. In 1995, when U.S. electricity markets were beginning to restructure, NERC forecast that already planned new plant construction would adequately meet the needs of the regional markets that include each of the three states through 2004. Specifically, NERC forecast the following for each of the reliability regions encompassing the states we reviewed:

For California, the 16,800 megawatts of additional planned capacity would adequately meet an estimated 1.8 percent growth in peak demand per year. This added capacity included 13,600 megawatts of generating capacity and 3,200 megawatts of reduced demand to be achieved through the utilities’ conservation and load management programs.

For Pennsylvania, the 5,700 megawatts of additional planned generating capacity would adequately meet an estimated 1.3 percent growth in peak demand per year.

For Texas, the 6,600 megawatts of additional planned generating capacity would adequately meet an estimated 2.1 percent growth in peak demand per year. Texas’ planned new power plants included 5,300 megawatts of new gas-fueled simple-cycle and combined-cycle power plants.
Since NERC’s 1995 report, electricity demand in each market has grown more than expected. Specifically, in 2001, the data for the three reliability regions reflected the following annual average growth: 4.7 percent for California, 2.1 percent for Pennsylvania, and 4.9 percent for Texas. NERC also reported that independent developers would need to continue to add new power plants in order to meet demand over the next 10 years. According to industry data through 2001, developers had announced proposals to build about 118,000 megawatts of new generating capacity in California, Pennsylvania, and Texas—substantially more than NERC’s projection of about 26,000 megawatts by 2004. Figure 2 shows that nearly half of this new capacity was proposed for Texas, while 35 percent was proposed for California and 17 percent for Pennsylvania. In addition, developers generally proposed power plants earlier in Texas than in the other two states. Specifically, 69 percent of the new power plant projects that began the regulatory process in Texas were proposed to regulators before 2000, while 75 percent of the projects in California and Pennsylvania were proposed to regulators in 2000 and 2001. This early interest in entering the electricity market in Texas led to earlier consideration by regulatory agencies involved in the siting approval process. Partly because developers had proposed new power plants earlier, they had built more generating capacity in Texas than in the other two states by the end of 2001. In total, Texas accounted for about 71 percent, or 16,000 megawatts, of the 23,000 megawatts of generating capacity built in the three states from 1995 through 2001. California accounted for 20 percent, or 4,500 megawatts, and Pennsylvania accounted for only 9 percent, or 2,000 megawatts, of generating capacity. In addition to plants already built by the end of 2001, developers had more capacity under construction in Texas than in either of the other two states. 
Total capacity under construction in the three states was almost 26,700 megawatts: almost 13,000 megawatts (48 percent) in Texas; about 7,500 megawatts in California; and about 6,400 megawatts in Pennsylvania. As of December 2001, developers had cancelled or postponed over 22,600 megawatts of capacity previously announced for the three states, according to industry data. In particular, 59 proposed power plants were reported cancelled or postponed in California, amounting to about 11,500 megawatts of generating capacity. Although California accounted for only 35 percent of proposed new capacity for the three states from 1995 through 2001, it accounted for 51 percent of the cancelled or delayed capacity. Just as the emergence of the electricity shortfalls and high prices in California in 2000 led to an influx of proposals to build new power plants, the subsequent drop in electricity prices preceded the cancellations in the state. While cancelled or postponed projects represented about 28 percent of proposed additions to total generating capacity in California as of December 31, 2001, cancelled or postponed projects represented only about 13 percent of the total additions to capacity proposed in Pennsylvania and about 15 percent of proposed capacity in Texas. Senior electricity industry analysts at investment firms told us that the combination of three events during the past year—the national economic slowdown, the terrorist attacks on September 11, and the collapse of Enron Corporation—has further limited developers’ near-term ability to propose and build new power plants because the international capital markets are less willing to invest in energy projects. They explained that the slowdown has reduced economic growth and expected growth in electricity demand. The terrorist attacks have, among other things, made insuring and re-insuring all power plants more difficult and more expensive.
In addition, they said, the collapse of Enron, while not specifically hurting energy markets, has increased concern about the financial condition of energy companies and led to, among other things, (1) higher lending standards, (2) lower levels of allowed borrowing, and (3) higher interest rates for borrowing. Moreover, the stock prices of many major independent developers have dropped substantially, further limiting their ability to raise capital. In the three states we reviewed, state and local agencies responsible for air and water quality and land use decisions review applications for constructing and operating power plants to ensure compliance with relevant laws and regulations. In addition, California requires the California Energy Commission (CEC) to approve all power plant projects with at least 50 megawatts of capacity. Because most developers in California and Pennsylvania have chosen sites for new plants in areas that have poor air quality, environmental agencies generally conducted more comprehensive reviews and required stricter limits on emissions. Both California and Texas provide enhanced public participation during the application review process, which can add time to the approval process to address sensitive issues. In California, one of 35 regional air districts and one of nine regional water boards, or EPA’s Region 9 in some parts of the state, review the application to assess the proposed project’s compliance with air and water quality requirements. Local governments review the applications for compliance with land use and zoning requirements. If applicable, state and federal agencies review the application for compliance with the Endangered Species Act. In addition to these reviews, CEC must approve new power plant projects above 50 megawatts before they can be built, adding another layer of review.
According to the state, CEC exists to ensure that needed energy facilities are authorized in an expeditious, safe, and environmentally acceptable manner. As part of its role, CEC oversees compliance with the California Environmental Quality Act, which requires an evaluation of the environmental impact of state-approved projects planned for the state. CEC decisions can overturn the permitting decisions of other state and local agencies. In one case, for example, CEC approved a power plant even though the local community had refused to grant a land-use zoning permit. CEC also analyzes other aspects of the project, which may not be examined by other agencies, including the plant’s technical design, fuel use and efficiency, transmission equipment, and socioeconomic impacts. The CEC certification process allows for public participation throughout the application review process. (See app. II.) In California, the average period for approval was 14 months, excluding smaller plants that were approved under the state’s temporary 21-day emergency siting process. Approvals for large plants—those with generating capacity of more than 200 megawatts—took about 16 months. Pennsylvania has no single state agency specifically responsible for approving new power plant projects. As with other industrial projects, power plant developers must work through (1) the Pennsylvania Department of Environmental Protection to obtain air quality and water quality permits and (2) local government agencies to obtain zoning and other land-use permits. In addition, developers in eastern or central Pennsylvania would have to obtain permits from the Delaware River Basin Commission or the Susquehanna River Basin Commission, respectively, for access to river water. If applicable, federal and state agencies review the application for compliance with the Endangered Species Act. (See app. III.) 
The primary permit needed for approval to construct a power plant is the air quality permit, and from 1995 through 2001, the average time needed to obtain this permit was about 14 months. Approvals for plants larger than 200 megawatts took about 13 months. Similarly, Texas has no single state agency specifically responsible for approving new power plant projects. Instead, the Texas Natural Resource Conservation Commission is responsible for approving environmental permits, and, in some cases, municipal governments regulate land use through the zoning process. If applicable, federal and state agencies review the application for compliance with the Endangered Species Act. (See app. IV.) For plants approved from 1995 through 2001, developers obtained an air quality permit—the primary permit required—in about 8 months in Texas. Approvals for plants larger than 200 megawatts also took about 8 months. Table 1 shows the time it has taken to complete the approval process in each of the three states. As the table shows, the time to complete the review process was less predictable in California than in the other two states—approval for 5 of California’s 21 medium- to large-scale projects took 18 months or longer. The gas-fired power plants now being built emit nitrogen oxides, which directly contribute to ozone pollution. Air pollution control requirements for these power plants vary according to the planned location and the amount of the plants’ emissions, as well as whether a state has stricter standards than the federal standards. In general, large power plants planned for areas that do not meet federal air quality standards—known as non-attainment areas—must obtain a Non-Attainment New Source Review permit. This permit requires a new power plant to install the most advanced pollution control equipment and offset the new plant’s emission of pollutants by reducing emissions elsewhere in the area.
The new power plant could, for example, buy emission reduction credits, called offsets, from another industrial facility that has closed or adopted less polluting technology beyond what is required under regulations. The advanced pollution control equipment and the purchase of these offsets from another company can add substantially to a power plant’s costs compared with the requirements in an attainment area. In attainment areas—areas that meet federal air quality standards— plants can obtain a Prevention of Significant Deterioration permit, which requires less stringent technologies to control emissions. As shown in figure 3, all three states have non-attainment areas for EPA’s ozone standard. Substantial portions of California and Pennsylvania are non-attainment areas with many areas of either extreme or severe air quality impairment. In addition, because Pennsylvania is part of a regional ozone transport area, the entire state must be treated as a non-attainment area. In contrast, only the Dallas, Houston, Beaumont, and El Paso metropolitan areas are non-attainment areas for ozone in Texas. Overall, 65 percent of the approved plants in California and about 60 percent of the approved plants in Pennsylvania were required to obtain air permits requiring more stringent controls, primarily because power plant projects for California and Pennsylvania generally were proposed for sites in non-attainment areas for ozone. In contrast, in Texas, only 18 percent of the approved plants had to use more stringent controls, partly because 64 percent of the approved plants were located in attainment areas. California has led other states in requiring pollution reduction beyond what is federally required. Specifically, California has a 1-hour ozone standard of 0.09 parts per million, as compared with EPA’s 0.12 parts per million standard—which causes more areas of the state to be judged as having poor air quality. 
With this standard, power plants in almost all areas of the state must install some pollution controls. California requires smaller gas-fired power plants to limit their emissions—even those with significantly lower quantities of emissions. Plants emitting more than 10 pounds per day of pollutants, or approximately 1.8 tons per year, must evaluate pollution controls. In contrast, EPA has a minimum threshold of 10 tons per year for plants located in areas with the worst air quality. Because California’s standards are more stringent than EPA’s, 9 of the 31 power plant projects approved in California since 1995 had to install pollution control equipment that EPA would not have required. Furthermore, while EPA’s standards for new plants apply in all states, the approved emissions level for a plant depends on how the state applies EPA’s regulations. California generally required new power plants to reduce emissions to lower levels than did other states. These lower levels subsequently are considered by other states in setting their own best available control technology (BACT) and lowest achievable emission rate (LAER) standards. Each of the three states allows for public involvement at several stages in the permit review process, including the local community’s consideration of zoning and other land-use permits and the state agency’s consideration of environmental permits. Permitting decisions also can be appealed to the state courts and, in some cases, to a state or federal agency. In addition, both California and Texas allow members of the public to become formal participants in the process for a power plant application. In California, CEC can designate them as approved “intervenors,” which enables them to request data from the applicant, file motions, testify, and conduct cross-examinations in formal hearings. Intervenors often have included local interest groups, labor unions, and environmental interest groups.
In California, of 72 applications filed with CEC from 1995 through 2001, 39 have had intervenors. In Texas, members of the public meeting certain requirements may request a “contested evidentiary hearing” before an administrative law judge. In these proceedings, parties may present testimony, offer evidence, cross-examine other parties’ witnesses, and object to the introduction of evidence. The administrative law judge then makes a recommendation to the permitting agency. Since 1995, 15 of 84 air permit applications in Texas have had a request for a contested hearing. Two requests resulted in hearings. The emergence of substantial local opposition to a new plant can significantly affect whether and when it receives necessary approvals, delaying regulatory decisions in many cases, according to regulators in each of the three states. As a result, developers told us that they look for locations where their project will receive local community support because its economic benefits to the local community outweigh its negative effects, such as increased air pollution. Texas permitting officials told us that communities generally welcome new natural gas-fired power plants because they add to the community’s tax base and pose few environmental concerns. The market rules for connecting a new power plant to the local transmission system (referred to as interconnection) in Texas differ markedly from those in California and Pennsylvania. In Texas, interconnection costs can be significantly lower for developers because consumers directly pay, through a charge on their electricity bills, for upgrades to the electric transmission system that are required with the addition of the new plant. In California and Pennsylvania, under current FERC-approved rules, developers pay for the system upgrades with the expectation that they will recoup these costs through electricity sales.
Furthermore, in Texas, developers of new power plants sign standard interconnection agreements that specify the terms and conditions of connecting the new plant to the transmission system, which speeds up the negotiation process; California and Pennsylvania do not have such agreements. In November 2001, FERC requested comments and suggestions from interested parties for developing a standard interconnection agreement. Under Texas’ restructuring rules, developers building plants pay only for direct interconnection costs (switchyard, substation improvements, and line extension, if applicable). Under these rules, all electricity consumers directly pay for the entire transmission system, including the costs to upgrade the system to carry the additional electricity produced at the new power plant. The interconnection of a new plant can affect transmission lines located elsewhere on the system, requiring that the system be upgraded. The state made this decision, according to officials at the Texas Public Utility Commission (PUC), to provide a level playing field on which new power plants can compete against existing plants. This rule emerged after the Texas PUC found, in assessing competitiveness in the wholesale market, that the financial responsibility for needed transmission system upgrades was not clearly defined. Lack of clear definitions, it concluded, could lead to conflicts and delays, and discourage the development of new privately owned power plants. The Texas PUC has addressed cost allocation issues through the Electric Reliability Council of Texas (ERCOT) by clarifying the rules for allocating system upgrade costs. Under these rules, PUC allocates the annual transmission costs, including these system upgrades and related maintenance, to the entities selling directly to consumers on the basis of their total electric demand; these entities pass the costs on to consumers through a per-kilowatt-hour fee.
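The demand-based allocation described above can be sketched in a few lines. The seller names, cost pool, and demand figures below are hypothetical illustrations, and the actual ERCOT methodology has considerably more detail:

```python
# Hypothetical annual transmission cost pool, including system upgrades
# and related maintenance (dollars).
annual_transmission_cost = 900_000_000

# Hypothetical entities selling directly to consumers, with their total
# electric demand (megawatt-hours).
demand_mwh = {"Seller A": 40_000_000, "Seller B": 25_000_000, "Seller C": 10_000_000}

total_mwh = sum(demand_mwh.values())

# Allocate the cost pool in proportion to each seller's share of total
# demand, then express the allocation as the per-kWh fee its consumers pay.
for seller, mwh in demand_mwh.items():
    share = annual_transmission_cost * mwh / total_mwh
    fee_per_kwh = share / (mwh * 1_000)  # 1 MWh = 1,000 kWh
    print(f"{seller}: ${share:,.0f} allocated, {fee_per_kwh * 100:.2f} cents/kWh")
```

Because the allocation is proportional to demand, the per-kilowatt-hour fee works out the same for every seller; upgrade costs are socialized across all consumers rather than negotiated plant by plant, which is why developers can know their interconnection exposure early in the process.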
As a result of these cost allocation rules, interconnection costs to developers are well defined and known early in the development process. To connect a power plant project to the transmission system, developers must (1) request an interconnection from ERCOT, (2) pay for two ERCOT studies on the proposed plant’s potential impact on the transmission system, and (3) provide a security deposit for any costs incurred by the transmission service provider. ERCOT representatives said that they conduct these studies in the order received and completion times vary depending on the application. Generally, the first screening study is completed within 90 days and the more detailed analysis in another 60 days. Developers said that because they do not pay for transmission upgrades, they can locate plants outside of areas with congested transmission systems, such as Dallas. As a result, power plants in Texas generally have been located outside non-attainment areas. According to Texas PUC and ERCOT officials, substantial upgrades to the transmission system were underway because many new power plants are being located in areas in which the existing transmission system could not adequately transmit the added capacity. PUC officials believe that transmission improvements will lead to improved competition in the long-term and noted that ERCOT has given priority to addressing bottlenecks in the transmission system to ensure that all the markets in the state have access to these new supplies of electricity. In contrast, developers in Pennsylvania pay for both the transmission system upgrades and the direct interconnection costs. Requiring developers to pay for system upgrades acts as an incentive for proposing plants in locations that do not require substantial transmission system improvements or the addition of new power lines, according to staff at PJM Interconnect, Pennsylvania’s transmission system operator. 
Developers must also pay a deposit for PJM Interconnect to complete interconnection studies—as much as $7.5 million in one case for one of the three studies. PJM Interconnect conducts transmission studies for power plant projects as a group—all proposals received within a specific time period are analyzed together. According to PJM Interconnect staff, they need to study the system impacts of all the applications received to accurately assess the interactive implications of multiple new power plants, even though some of the power plants in several of the groups may never be built. Similarly, developers in California pay for both the direct interconnection costs and upgrades. However, in California, the local transmission system owner determines the cost of the system upgrades, with limited oversight by California’s transmission system operator. To connect to the local system, a developer submits an interconnection request to the transmission system owner and the operator. To assess the work and associated costs for the interconnection, the transmission system owner studies the impact of the proposed plant on the transmission system to identify potential reliability problems. If this study identifies reliability problems, the developer may request the transmission system owner to perform a detailed facilities study to determine the measures needed to mitigate those impacts and to identify their associated costs. Current rules require the power plant developer to pay the costs of the interconnection studies and the system improvements required to mitigate reliability problems. The California transmission operator critiques these studies, primarily by evaluating their assumptions and the role of other plants expected on-line. 
To foster competition and facilitate negotiations, Texas requires developers and the local transmission owners to use a standard interconnection agreement to (1) assign responsibility for paying the costs of any upgrades to the transmission system needed for carrying the new plant’s added electricity capacity, (2) allocate ownership interests in these assets, and (3) assign responsibility for liability associated with plant and interconnection facility operations. In establishing this process, the Texas PUC sought to (1) ensure coordinated planning for transmission systems, (2) eliminate delays in the interconnection process, and (3) remove incentives for the transmission providers to favor their own power plants. The standard interconnection agreement, a contract between the power plant developer and the owner of the local transmission system, includes standard terms and conditions and sets specific deadlines for the local transmission system owner to complete the connection and for the developer to start plant operations. The agreement also provides rights to either party to terminate the agreement if the other fails to meet its deadline. Developers told us that the Texas process is much faster to negotiate because, to the extent that the cost allocations can be determined ahead of time, many issues are removed from the business negotiations. Accordingly, both developers and ERCOT staff said that the use of a standard interconnection agreement has worked well in Texas. In contrast, in California and Pennsylvania, developers and the local transmission system owner do not use a standard agreement and therefore must negotiate the terms and conditions of the interconnection agreement, which typically adds time to the process. Developers in California said that they have to accommodate differences in interconnection policies among transmission owners. 
These differences, which can occur because different transmission owners interpret the FERC-approved rules differently, have resulted in interconnection disputes between the transmission owners and developers that create barriers or delays to building new power plants. The developer and the transmission owner can either resolve these disputes or appeal to FERC for resolution, which would add even more time. PJM Interconnect staff plan to develop a pro forma interconnection agreement because it appears to offer advantages over a lengthy negotiation process. The staff believe that FERC wants the operator of the regional transmission system to sign the agreement, but the staff would prefer to keep the agreements between the developer and the transmission owner, citing concerns about PJM Interconnect’s potential liability if FERC requires it to sign. They added that, if required, PJM Interconnect would become a party to the agreement but would need to purchase liability insurance with these costs passed on to consumers. We found that reaching agreement on interconnection was substantially faster in Texas than in the other two states. Specifically, it took 11 months, on average, in Texas, compared with 28 months in California and 30 months in Pennsylvania. In November 2001, FERC published an Advance Notice of Proposed Rulemaking in the Federal Register requesting that affected parties provide suggestions and comments for developing a standard interconnection agreement. FERC noted that it had previously required local transmission system owners to provide non-discriminatory, or comparable, access to transmission service and established standard terms and conditions for the service provided by the transmission system owner. However, this requirement did not directly address power plant interconnections. In this advance notice, FERC also provided the views of both the independent developers and transmission system owners. 
According to FERC, developers have asserted that, among other things, (1) the treatment they receive is not comparable to the treatment the transmission provider receives for the power plants it owns, (2) system upgrade costs charged to developers are sometimes not related to the interconnection, and (3) delays and uncertainties occur because the transmission owner’s rules do not specify binding commitments and firm deadlines for completion of specific actions. In contrast, FERC reported that transmission owners believe that, among other things, they need minimum financial commitments from developers seeking interconnection to weed out plants that are unlikely to be built. The financial commitments are intended to minimize the number of plants they will have to study so that they can accurately assess how much total generating capacity will be added to the system. Transmission owners also want assurance that consumers in their local transmission system will benefit from, or at least not be burdened by, adding power plants, particularly when a developer seeks to locate a plant in one system that would primarily sell electricity to consumers in an adjacent system. Restructured markets change the context for investment by enabling developers to broaden the number of markets they consider and by requiring them to make financial commitments long before they actually build a power plant, according to the developers we interviewed. In this context, they generally propose power plant projects in markets where prices are high enough to expect that plants will be profitable. However, they actually build plants in markets where expected profits outweigh possible risks that could reduce a plant’s profitability—such as changes in the state, regional, or national rules for the electricity market. In restructured markets, developers told us, several conditions have changed the basis for their decisions to build or not to build power plants. 
Restructured markets, unlike regulated markets, require developers to independently assess the need for new power plants and their potential profitability. Restructuring allows them to compare opportunities to build plants across multiple markets—state and regional markets as well as international markets. If they decide that a particular market will not be profitable, they will build elsewhere, according to the developers we spoke with. Furthermore, they propose building power plants at three or more sites for each plant that they actually intend to build. Multiple proposals ensure that at least one site will be ready to receive a turbine and other power plant equipment at a specific date. Uncertainty about market conditions at each site and about whether and when they will obtain the necessary permits and approvals to begin construction dictates this multiple-site approach, according to developers. Industry analysts noted that because developers have proposed many more project sites than they intend to build, future market prices are less predictable than they otherwise would be. These market uncertainties have been further complicated by an increased worldwide demand for turbines and financing, forcing developers to compete for these resources. Specifically, because of the increased demand, developers said they made financial commitments to purchase combustion turbines several years before they expect to receive them in order to ensure that they will have turbines when they need them. These commitments can tie up substantial amounts of capital: large turbines can cost $50 million or more, while even small turbines can cost $16 million.
Bank executives told us they evaluate each power plant project alongside other potential investments, including power plant projects in other states and countries. General market conditions and specific site conditions affect expected profitability, according to developers we interviewed. With respect to general market conditions, they first seek opportunities for new investment by analyzing future electricity prices and—to a lesser extent—opportunities to sell other products. In estimating the prices that new power plants may receive in a restructured market, developers evaluate market signals, including current electricity prices and prices in the forward or futures market. Developers then review information about potential competitors in a given market, including the type and age of existing plants and their estimated production costs, as well as economic growth projections that affect demand increases. Finally, developers estimate the overall profitability of selling electricity in a market by comparing the estimated future electricity prices with the estimated cost to generate electricity, based on fuel cost estimates in the area and other variable production costs. For example, industry analysts told us that while actual production costs will vary, typical fuel costs for a new combined-cycle power plant are about 2.1 cents per kilowatt-hour—substantially less than the 3.7 cents per kilowatt-hour cost of some existing gas-fired power plants. Once they identify a potentially profitable market, developers told us, they look for suitable power plant sites and evaluate the sites’ estimated development costs. For gas-fired combined-cycle power plants, developers prefer locations that are near the intersection of a large natural gas pipeline and high voltage transmission lines and that have access to an adequate source of cooling water.
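The profitability screen described above boils down to comparing expected market prices with a plant's variable generation cost. A minimal sketch, using the fuel-cost figures cited by the analysts; the expected-price figure is a hypothetical assumption, not a figure from this report:

```python
# Typical variable fuel costs cited by industry analysts (cents per kWh).
NEW_COMBINED_CYCLE_FUEL = 2.1
OLDER_GAS_PLANT_FUEL = 3.7

def gross_margin(expected_price: float, fuel_cost: float) -> float:
    """Margin per kWh before fixed costs; a negative value means the plant
    loses money on every kilowatt-hour it generates."""
    return expected_price - fuel_cost

# Hypothetical expected market price used for the screen (cents per kWh).
price = 3.5

new_plant = gross_margin(price, NEW_COMBINED_CYCLE_FUEL)  # about +1.4 cents/kWh
old_plant = gross_margin(price, OLDER_GAS_PLANT_FUEL)     # about -0.2 cents/kWh
print(f"new plant: {new_plant:+.1f} cents/kWh, older plant: {old_plant:+.1f} cents/kWh")
```

At this assumed price, a new combined-cycle plant clears its fuel cost while the older gas-fired plant does not, which is the basic market signal that draws proposals for new capacity into a market.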
Developers analyze each site’s potential for receiving state and local regulatory approval and for minimizing construction, interconnection, and operating costs. Developers then seek to acquire the right to develop the property—by either purchasing the land or obtaining an option to purchase the land—and then may begin pursuing regulatory and interconnection approvals for the site. In restructured markets, developers said, they regularly analyze each power plant project’s market and regulatory risks to determine whether these risks could significantly reduce expected profitability. Market risks include the possibility that electricity prices will be lower than expected and/or that production costs will be higher than expected. Regulatory risks include the possibility that the rules for the electricity market will change or that the rules governing power plant operations will change. Developers reevaluate market and regulatory risks as the project moves forward to determine whether to continue the project. Higher risk levels can cause developers and commercial banks to delay investment until expected profits outweigh the increased risk, according to developers. Assessing risk is important, developers said, because a new power plant is expensive to build—costs could exceed $500 million—and operates for 20 years or more. Some developers and commercial banks prefer investment opportunities with lower levels of risk, such as when they can sell a substantial portion of the plant’s electricity production through long-term contracts with set prices and terms. Other developers said that they will invest in riskier projects if expected profits are higher. Developers also told us that regulatory risks, such as lengthy and uncertain state approval processes and stringent environmental compliance requirements, were not, by themselves, obstacles to building a power plant in a state.
Rather, they said, these factors can increase a project’s risk because the plant is more costly to build and operate and because long-term projections about market conditions are less reliable. For example, plants subject to more stringent environmental standards need more costly emissions-reduction equipment and have less operating flexibility to respond to changes in demand, according to a turbine manufacturer. Furthermore, limiting a plant’s ability to respond to changes in demand can reduce its profitability. In restructured states, market rules, which set the terms for buying and selling electricity and related products, can affect the potential volatility of electricity prices. For example, prohibiting the use of long-term contracts exposes buyers and sellers to the risk of rapidly fluctuating prices. Alternatively, a state with a price cap could expose power plants to the risk that electricity sales will be unprofitable under certain circumstances. Given the importance of market rules, developers prefer stable and transparent rules that clearly describe the opportunities and risks inherent in a state’s market. They told us that they conduct a detailed analysis of the rules and participants for each market that they may enter because market rules vary. For example, restructuring created some multistate regional markets, while other markets are still dominated by regulated utilities and are subject to substantial state control. Furthermore, developers said that they prefer rules that provide clear and direct opportunities to manage the risk of volatile electricity market prices. Often, developers can reduce their exposure to this risk by (1) buying natural gas at fixed prices through long-term contracts and/or (2) selling the plant’s future output through long-term contracts that generally set a future sales price. Several developers told us that they seek to commit at least 50 percent of a new plant’s output to long-term sales contracts.
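The build-or-wait calculus described above can be illustrated as a present-value problem. The plant cost (over $500 million) and life (20 years or more) come from the text; the annual cash flow and discount rates below are hypothetical, with the higher discount rate standing in for greater perceived market and regulatory risk:

```python
def npv(initial_cost: float, annual_cash_flow: float, years: int, rate: float) -> float:
    """Net present value of a constant annual cash flow against an up-front cost."""
    present_value = sum(annual_cash_flow / (1 + rate) ** t for t in range(1, years + 1))
    return present_value - initial_cost

cost = 500e6        # plant cost cited in the text (dollars)
cash_flow = 60e6    # hypothetical annual net cash flow from electricity sales
years = 20          # plant life cited in the text

# A higher discount rate reflects higher perceived risk.
for rate in (0.08, 0.12):
    value = npv(cost, cash_flow, years, rate)
    print(f"discount rate {rate:.0%}: NPV = ${value / 1e6:+,.0f} million")
```

Under these assumed figures, the project more than recovers its cost at the lower rate; raising the discount rate to reflect added risk turns the same plant's NPV negative, which is why higher risk levels lead developers and lenders to delay investment until expected profits outweigh the risk.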
Lenders and staff at investment ratings companies also told us that long-term contracts with financially sound purchasers are important tools to lower risks when financing new power plants. They noted that long-term contracts with fixed prices and terms enable developers to obtain more favorable financing terms because selling a portion of the plant’s future output reduces the project’s market risk. While transparent market rules can improve the investment climate for a specific market, some developers were also concerned about whether the rules were consistent and equally enforced. Operators of regional transmission systems, transmission system owners, and federal and state regulators are each responsible for enforcing market rules. Developers said that restructured markets were generally improving their treatment of independent developers. However, some developers were still concerned about the administration of the transmission system and the potential for unequal access to market information in markets where they compete with power plants owned by transmission system owners. California, Pennsylvania, and Texas, with different market and regulatory environments, illustrate how developers weigh profitability and risk. According to electricity industry analysts, profitability and risk considerations in California delayed proposals to build power plants in the state. Developers cited the following profitability concerns before prices began rising dramatically in May 2000: (1) the state required its three largest utilities to use only the short-term electricity market to buy nearly all of the electricity sold to their customers and (2) electricity prices in the short-term markets averaged 2.9 cents per kilowatt hour, which was generally lower than prices in other U.S. markets, and, as a result, offered lower potential profits than in other markets. 
The market rules limiting the use of long-term contracts in California effectively increased the risk of building power plants in that state. One power plant developer told us that because California did not have a robust and predictable market for long-term electricity sales, it could evaluate only the prices in the short-term electricity market, which exposed the developer to more risk without the expectation of higher profits. However, developers told us that once prices began to rise, they began to propose building more power plants in the state. From May 2000 through June 2001, electricity prices increased more than fourfold, on average, to 13.4 cents per kilowatt-hour. In response to the electricity crisis during 2000 and 2001, California took several actions that increased its involvement in its electricity markets. First, in January 2001, the state replaced the governing board of its transmission system operator with members appointed by the Governor. Second, the state created the California Power Authority, which can, among other things, finance up to $5 billion for power plants. Senior state officials have said that the electricity market would not be sufficiently competitive until an excess capacity of 15 percent was located in the state and that state financing provided one way to increase in-state generating capacity. However, according to investment analysts and developers, the potential that the state might build up to 15 percent excess generating capacity increases the risk and uncertainty for investing in California’s electricity market. Third, California entered into long-term contracts to buy electricity and bought electricity day-to-day in short-term markets because the state’s two largest utilities faced severe financial problems and difficulty purchasing electricity. Taken together, these actions have created concerns among developers about whether the operator of the California transmission system will provide equal treatment for market participants.
Specifically, employees for the state agency responsible for buying electricity had access to the transmission system operator’s control center and may have had access to real-time data not provided to other market participants, even though the transmission system operator’s rules prohibit such treatment for market participants. Audits of the transmission system’s operations identified several other violations of the rules. Although FERC ordered state staff to leave the operations room, developers remain concerned that the state may receive special treatment from the transmission operator. This concern continues because the state has so much potential influence over the market, which raises the risk of entering the market for independent developers. Furthermore, investment analysts told us that some investors are even more cautious about investments that rely on California’s electricity markets. The lack of stable market rules presents uncertainty regarding the eventual market in the state. In addition, the perception that the state is seeking to abrogate the long-term contracts it signed last year has raised concerns about the finances of some projects. These analysts explained that, due to the risks in the current market, energy investments in California may require higher returns and/or more stringent loan terms, as well as management of risks through, for example, the use of long-term contracts with purchasers other than the state as a basis for obtaining loans. In Pennsylvania, developers proposed building relatively few power plants because, while the risks were manageable, the profits were too low. In addition, the transmission interconnection process was protracted, with uncertainty regarding the capital investment needed to fund transmission upgrades. The market rules have permitted power plant developers to enter into contracts to sell electricity for delivery at a future date. 
These long-term contracts enable developers to manage their risk by providing fixed prices and terms for electricity sales. However, electricity prices were too low to attract investment. Low-cost existing generating capacity was available because the state’s industrial base has declined as many steel plants and other industries that consumed substantial quantities of electricity closed or moved out of state, according to Pennsylvania PUC officials. Nevertheless, developers said that Pennsylvania has attracted some investment because of its access to other markets, such as the northeastern electricity systems in New York State and New England, which have had relatively high prices. In Texas, risks were manageable and profits were attractive. As discussed earlier, the market rules in Texas reduced risk through (1) a relatively faster regulatory approval process and (2) interconnection rules, which lowered development costs and simplified the administrative process. In addition, the rules in Texas allowed developers to manage their risk through long-term contracts. Furthermore, developers invested in Texas during the initial operation of its wholesale electricity market because the market appeared to be profitable. The electricity prices and the cost of production at existing plants were relatively high compared with the estimated cost of producing electricity at new plants. While Texas significantly increased its generating capacity, several developers and lenders expressed concern that the Texas market may soon have too much new capacity. As restructuring broadens electricity markets to span multiple states, states will become more interdependent for a reliable supply of electricity—one state’s problems can affect its neighbors. In this context, restructured electricity markets rely on the investment decisions of individual developers. 
Consequently, the reliability of the electricity system—and the success more generally of restructuring—now hinges on whether these developers choose to enter a market and how quickly they are able to respond to the need for new generation capacity. Developers decide which markets to enter by balancing profitability and risk—that is, by considering how the regulatory processes and market rules affect risk in a market and, to a lesser extent, the profitability of building a plant in that market. FERC’s decisions on market rules and the states’ decisions on regulatory rules can affect the balance of profitability and risk in a state. The experiences of California, Pennsylvania, and Texas show how these considerations have played out. The high levels of perceived risk and low levels of estimated profitability in California appear to have resulted in lower levels of early investment in new power plants in that state. On the other hand, the experience in Texas illustrates that the ability to manage risk and higher levels of estimated profitability combined to attract significant investment into new power plants from 1995 through 2001. The experience in Pennsylvania illustrates that while risk may be manageable, estimated profits also have to be high enough to attract investment. Developers can be deterred from building a power plant if the market has lengthy delays between making the proposal and selling electricity. These delays increase a developer’s uncertainty about whether the proposed project will be approved and whether additional costs will be incurred that reduce the plant’s profitability. In this context, interconnection agreements are critical in assessing profit and risk. Lengthy negotiations over interconnection terms and conditions can increase the risk of developing a new power plant because forecasts of market conditions in the more distant future are less reliable than near-term forecasts. 
Texas was able to reduce delays in negotiating these agreements, in part because the Texas PUC’s standard agreement already specified many of the parties’ responsibilities. In contrast, under rules approved by FERC, California and Pennsylvania allowed developers and transmission system owners to negotiate their responsibilities, which has resulted in a lengthy process—more than twice as long as in Texas. A standard agreement also provides better assurance that transmission owners will treat all developers of new power plants equally. In addition, Texas’ rules provided a clear method for allocating costs associated with upgrading the transmission system, which appears to have sped negotiations because the amount and allocation of these costs are not contested. To facilitate development of power plants needed in restructured markets and to provide comparable treatment for all developers, we recommend that the Chairman of the Federal Energy Regulatory Commission, in consultation with transmission system owners, power plant developers, and lenders, (1) develop and require the use of a standardized interconnection agreement and (2) clarify how transmission system upgrade costs are allocated. We provided FERC with a draft of this report for review and comment. The Chairman of FERC agreed with our recommendation, noting that FERC had issued a Notice of Proposed Rulemaking on April 24, 2002, which would require transmission system owners under FERC’s jurisdiction to use a standardized interconnection agreement. FERC developed the proposed agreement in consultation with industry participants. (See app. V for FERC’s comments.) In addition, FERC provided comments to improve the report’s technical accuracy, which we incorporated as appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. 
At that time, we will send copies to appropriate congressional committees, the Federal Energy Regulatory Commission, the Director of the Office of Management and Budget, and other interested parties. We will make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix VI. To compare the electricity needs of California, Pennsylvania, and Texas, we examined reliability reports prepared by the North American Electric Reliability Council and the three regional councils that include most of the area of the states that we studied—the Western System Coordinating Council for California, the Mid-Atlantic Area Council for Pennsylvania, and the Electric Reliability Council of Texas (ERCOT) for Texas. To assess the extent to which these states have added new power plants or received proposals to add power plants, we used industry databases from Resource Data International (RDI). We used RDI’s PowerDat database to identify new generating units that began operation between 1995 and 2001. RDI obtains data for the PowerDat database from a range of public filings to the Energy Information Administration, the Federal Energy Regulatory Commission, and other entities. We also used RDI’s NewGen database to identify proposals to build new power plants, as well as construction, cancellations, and postponements of new power plants. RDI obtains data for the NewGen database from various sources, including developers, government agencies, banks, trade journals, and newspapers. Data on proposals may not fully reflect all capacity that has been proposed at a point in time. We did not verify the databases provided by RDI. To compare the regulatory processes for approving new power plants, we reviewed reports, interviewed officials in the states, and examined data. 
We reviewed reports prepared by the California State Auditor, the California Energy Commission (CEC), and industry summaries of the permitting process prepared for the Edison Electric Institute, an industry trade association. We visited California, Pennsylvania, and Texas to interview federal and state regulatory and permitting officials to assess (1) each agency’s responsibilities; (2) each state’s implementation of the Clean Air Act and Clean Water Act, as well as the Endangered Species Act; (3) each state’s process for public participation; and (4) the amount of time required for approval. The state agencies we interviewed in California included CEC, the Electricity Oversight Board, the Governor’s Green Team, and the California Environmental Protection Agency, as well as two regional air quality districts. In Texas, we interviewed officials of the Texas Natural Resource Conservation Commission (TNRCC), which is responsible for issuing permits for air quality and water quality. For Pennsylvania, we interviewed officials at the Pennsylvania Department of Environmental Protection (DEP) and the Delaware River Basin Commission, which manages the Delaware River System, including eastern Pennsylvania. We also interviewed officials at the U.S. Environmental Protection Agency (EPA) and the U.S. Fish and Wildlife Service at their Washington, D.C., headquarters offices and their regional offices in each state. To calculate the duration of each state’s regulatory review process for approved power plants, we compared the time from when each application was deemed administratively complete to the date CEC approved the project in California, TNRCC approved pre-construction air permits in Texas, and the Pennsylvania DEP approved pre-construction air permits in Pennsylvania—the air permit is the primary regulatory process in Texas and Pennsylvania for gas-fired power plants. We compared approved permits from January 1, 1995, to December 31, 2001. 
To compare the implementation of the Clean Air Act standards for approved permits, we identified the location of the plant (whether in an attainment area or a non-attainment area), the type of permit required, and the emissions limits. To compare the extent of formal public participation prior to permit decisions, we compared the number of requests for contested hearings and the number of contested hearings in Texas with the number of permit applications with intervenors in California for permit applications submitted between January 1, 1995, and December 31, 2001. Pennsylvania’s only mechanisms for formal public participation prior to permit decisions are the public notification and comment process and public hearings. To compare the processes for connecting new power plants with local electricity transmission systems, we visited each of the three states and interviewed officials at the transmission system operator serving each state: the California independent system operator in California; the PJM Interconnect in Pennsylvania; and ERCOT in Texas. In addition, we interviewed officials at one of California’s three major utilities, which play a large role in completing the studies in that state. To determine the amount of time needed to reach an interconnection agreement in each state, we examined data provided by (1) owners of transmission lines for plants larger than 50 megawatts in California, (2) PJM Interconnect in Pennsylvania, and (3) ERCOT in Texas. We also met with officials of the Federal Energy Regulatory Commission and the Edison Electric Institute. 
To identify the key factors that developers consider in deciding where to propose and build new power plants, we examined reports prepared by industry experts and we met with senior executives of three large and three smaller independent power plant developers to discuss the key elements in their investment decisions. To learn more about the current technologies of power plants being built in the United States and the market for turbines, we interviewed executives of a large manufacturer of turbines and toured a combined-cycle power plant. To identify what factors are important to the financial markets, we interviewed energy market investment analysts of two investment ratings companies serving the financial markets and executives of four investment banks that lend money to power plant developers. We examined the approval process for building a new natural gas-fueled power plant because these are the most common type of plant being proposed in the United States. However, as agreed with your office, we did not address related issues, such as the process for obtaining rights of way for connecting to a nearby natural gas pipeline or the local transmission lines. We conducted our work from August 2001 through April 2002 in accordance with generally accepted government auditing standards. Before a developer can begin to construct a new power plant project, California’s CEC must approve the project, which incorporates all of its required state and local permits. While CEC conducts its review, each project is also reviewed by (1) 1 of 35 regional air districts and 1 of 9 regional water boards, or by EPA’s region 9 in some parts of the state, for compliance with air and water quality requirements; (2) local governments for compliance with land use and zoning requirements; and (3) if applicable, state and federal agencies for compliance with the Endangered Species Act. 
The CEC certification process allows for public participation through the intervenor process, a public advisor, and planned public participation throughout the application review process. CEC must certify all power plant projects with a generating capacity of 50 megawatts or more before they can be built and operated. As shown in table 2, CEC has established time frames for each phase of its certification process in order to approve or reject a project within 1 year after a developer’s application is deemed “data adequate.” While CEC receives information from other state and local agencies, it conducts an independent assessment of each proposed project’s environmental impacts; public health and safety; compliance with any applicable local, regional, state and federal laws, ordinances, and regulations; efficiency; and reliability. However, CEC does not assess the need for each proposed new plant. As the lead agency for certification, CEC issues all required state and local permits and is authorized to override the permitting decision of a state or local government agency. In early 2001, in response to the electricity crisis, the Governor of California authorized CEC to replace the process described in table 2 with the following expedited reviews of new power plant projects: a 21-day process for small power plants that operate only during peak demand periods, provided that the plants could begin operating by September 30, 2001; a 4-month process for power plants using simple-cycle natural gas turbines that could begin operating by December 31, 2002; and a 6-month process for combined-cycle and steam power plants, with no adverse environmental impacts, for which applications have been submitted by January 1, 2004. CEC identified potential sites to minimize the effect of limited environmental reviews and reduced opportunity for public participation. 
As of December 31, 2001, CEC had approved 11 small power plant projects under the 21-day process, taking 22 days on average; 2 simple-cycle power plant projects under the 4-month process; and 1 combined-cycle power plant project under the 6-month process. As part of its EPA-approved plan to implement the Clean Air Act, California has 35 regional air districts responsible for attaining state and federal ambient air quality standards within their regions. Each air district adopts its own rules and permitting process and establishes and enforces air pollution regulations for stationary sources that are at least as stringent as federal requirements and that address the particular air quality problems in its region. As a result, the application process for federal and state air quality permits can vary. Most of California’s densely populated areas are non-attainment areas for ozone. Nitrogen oxides, which combine with other pollutants to form ozone, are emitted by power plants. Building a new power plant in these areas is more costly because the plant must (1) achieve low nitrogen oxide emission levels by adding pollution control devices and (2) offset its nitrogen oxide emissions by acquiring emissions credits. California issues emissions credits when emissions from existing sources are reduced. Power plant developers have found that these credits, which can be traded or sold, are difficult or costly to obtain in many non-attainment areas because of their scarcity. According to CEC officials, the lack of emissions reduction credits for offsetting a new project’s emissions could limit the number of new gas-fired power plants in the state. As part of its EPA-approved plan to implement the Clean Water Act, California’s nine regional water quality control boards are responsible for attaining state and federal water quality standards. Each water board may establish and enforce water pollution regulations that are at least as stringent as federal requirements. 
As a result, the application process for federal and state water quality permits can vary, making the siting process more complex. California has the second-highest number of species listed as endangered or threatened under the Endangered Species Act, behind only Hawaii, increasing the likelihood that a new power plant project may affect the habitat of a listed species. EPA’s region 9, which includes California, routinely notifies the U.S. Fish and Wildlife Service about new power plant projects because it considers the air and water quality permits that it, or a delegated district, issues to be federal actions that trigger notification under the Endangered Species Act. A power plant developer must address any applicable local and state laws, ordinances, regulations, standards, plans and policies as part of its CEC application. Although CEC issues all state and local permits as part of the overall certification, it is legally required to ensure that a proposed project complies with all regulations and laws that would be enforced by any other local or state agencies. Exceptions to this requirement could occur if CEC finds that (1) the project is needed for public convenience and necessity and (2) no more prudent and feasible means of achieving such public convenience and necessity exists. The power plant application must be tailored specifically to address the project’s location. 
Among other things, the application typically has to address (1) land use and zoning plans, including development restrictions under the California Coastal Act and the Delta Protection Act; (2) public health; (3) worker safety and fire protection; (4) transmission system engineering and safety; (5) traffic and transportation plans and policies; (6) noise; (7) visual considerations; (8) socioeconomic issues, including impacts on local school districts and environmental justice issues; and (9) biological resource protection, including county open space and conservation plans and state law protecting wildlife habitat, endangered species, and native plants. CEC allows any person to petition to become involved in the certification process for a new power plant project as an intervenor. Government agencies, community groups, interest groups, labor unions, businesses (including the applicant’s power plant competitors), and individuals can become intervenors. An intervenor is a full, legal party to the proceedings with the same rights and obligations as other parties in the proceeding, including CEC staff and the applicant. CEC can use evidence provided by intervenors as the basis for any part of its final decision. Intervenors have the right to (1) obtain information from the other parties in the proceeding, (2) receive all documents filed in the case, (3) present evidence and witnesses, and (4) cross-examine the witnesses of the other parties at public hearings. Correspondingly, intervenors have the obligation to send copies of all filings to the other parties, answer data requests from other parties, and allow other parties to cross-examine their witnesses. Intervenors can play an important role in the certification process—as many as 16 intervenors have participated in CEC’s consideration of an application; can add a considerable amount of time to the certification process; and can potentially kill a project, according to CEC officials. 
In addition to allowing intervenors, CEC’s certification process has a strong public participation component. The Warren-Alquist Act requires that CEC ensure meaningful public participation in power plant certification. CEC has a public advisor, an attorney who serves as an advisor to both the public and CEC to ensure full and adequate public participation. CEC conducts public hearings and workshops at several points in the certification process. Also, the public can submit written comments to CEC about a power plant application. Pennsylvania has no overall state agency responsible for approving new power plant projects. Power plant developers must work through (1) the Pennsylvania DEP to obtain air quality and water quality permits and (2) local government agencies to obtain zoning and other land use permits. In addition, developers in eastern or central Pennsylvania have to obtain permits from the Delaware River Basin Commission or the Susquehanna River Basin Commission, respectively, for access to river water. Since 1995, the average time needed to obtain a pre-construction air permit for power plant projects has been about 14 months. EPA has approved Pennsylvania’s program for issuing New Source Review air quality permits. Almost all air quality permits are issued by DEP’s six regional offices or the County Health Departments in Allegheny (Pittsburgh) and Philadelphia counties, which are DEP-authorized air pollution control agencies. DEP has overall approval of the permits prepared by these counties. For permitting purposes, DEP treats the whole state of Pennsylvania as an ozone non-attainment area because it is an ozone transport region as defined under the Clean Air Act. As a result, new power plant projects must install control technology that meets the lowest achievable emission rate for nitrogen oxides. Improved technology has enabled approved nitrogen oxide emissions levels to drop from 4.5 parts per million to 2.5 parts per million in recent years. 
New power plant projects also have to offset their nitrogen oxide emissions with emissions reduction credits, which can be obtained from either in-state or out-of-state sources. According to DEP officials, the vast majority of emissions reduction credits have resulted from the shutdown of facilities. DEP keeps an online registry of offsets, but companies typically purchase offsets through brokers at about $10,000 to $12,000 per ton. DEP officials noted that it is more difficult to obtain emission offset credits for use in the severe ozone non-attainment areas of the state. In 1995, the Governor of Pennsylvania established a “money-back guarantee” permit review program that would return an applicant’s fees if DEP did not meet established time frames for issuing environmental permits—1 year for a power plant’s air quality permit. (The fee for a new source review permit is $18,000.) The 1-year time frame includes only DEP’s review and excludes other agencies’ review or the time required to hold a public meeting or hearing. Processing time is calculated from date of application receipt to date of final decision, minus time used by the applicant to correct deficiencies. DEP officials told us that the program was initiated to demonstrate DEP’s commitment to timely consideration of permit applications. They noted that missing a final date does not force DEP to approve a permit and added that they have yet to give money back because of delays in issuing a power plant permit. In 1978, EPA authorized DEP to administer the National Pollutant Discharge Elimination System (NPDES), which controls discharges of pollutants to surface waters. DEP’s six regional offices issue NPDES permits. According to a DEP Water Division official, the time frame for reviewing NPDES permits ranges from 120 to 200 days from application to decision. 
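For a sense of scale, the broker prices that DEP officials cited imply offset costs like those sketched below; the plant’s annual emissions figure is hypothetical, not taken from this report:

```python
# Offset-cost sketch using the $10,000-$12,000 per ton broker prices
# cited by DEP officials; the tonnage below is hypothetical.
nox_tons_per_year = 100        # hypothetical annual NOx emissions to offset
price_per_ton_low = 10_000     # dollars per ton
price_per_ton_high = 12_000    # dollars per ton

cost_low = nox_tons_per_year * price_per_ton_low     # $1,000,000
cost_high = nox_tons_per_year * price_per_ton_high   # $1,200,000
```

Even at the low end of the quoted range, a plant needing 100 tons of credits would face an offset bill of about $1 million.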
The Water Division has not had to return money to applicants under the state’s money-back guarantee program for permit reviews, according to DEP officials. DEP’s administrative completeness review determines whether all necessary information and forms are provided without assessing an application’s technical quality. DEP has 20 days to review an application for completeness and notify the applicant whether the application (1) has been accepted, (2) has minor deficiencies that are identified, or (3) is being returned for being severely deficient. Applicants are given one opportunity to correct any administrative deficiencies. DEP’s preliminary and final technical reviews analyze the proposal for potential adverse environmental impacts; check for completeness, clarity and soundness of engineering proposals; ensure conformance with applicable statutes and regulations; and analyze public comments. If DEP finds technical deficiencies, it outlines the specific problems that must be corrected, citing the statutory or regulatory authority that provides the basis for the deficiency. If the applicant fails to respond within a reasonable period of time, the applicant waives all rights under DEP’s money-back guarantee program. If the material submitted in response to the deficiency letter still fails to meet DEP requirements, DEP sends a second, pre-denial letter. This letter allows the applicant a last opportunity to correct the remaining technical deficiencies. DEP will deny the application if the applicant fails to address the deficiencies. Alternatively, instead of responding to a deficiency letter, the applicant has the option of asking DEP to make a decision based on the available information. If DEP denies the application, the applicant may appeal the decision or file a new application. DEP renders a final decision on the application based on its assessment of the technical information, including consideration of reviews required by other federal or state agencies. 
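The processing-time calculation used in the money-back guarantee program (date of application receipt to date of final decision, minus time the applicant used to correct deficiencies) can be sketched as follows; all dates here are hypothetical:

```python
from datetime import date

def processing_days(received, decided, deficiency_periods):
    """Review time counted under the money-back guarantee: receipt to
    final decision, minus days the applicant spent correcting
    deficiencies."""
    total = (decided - received).days
    applicant_days = sum((end - start).days for start, end in deficiency_periods)
    return total - applicant_days

# Hypothetical application: received January 15, 2001; decided
# February 1, 2002; one 30-day deficiency-correction period.
days = processing_days(
    date(2001, 1, 15),
    date(2002, 2, 1),
    [(date(2001, 4, 1), date(2001, 5, 1))],
)
within_guarantee = days <= 365   # 352 days counted, so the 1-year clock is met
```

In this example the calendar elapsed time exceeds a year, but because the applicant’s 30-day correction period is excluded, the counted time stays within the 1-year air-permit time frame.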
Either the applicant or the public may appeal this decision to the Pennsylvania Environmental Hearing Board, and the Environmental Hearing Board’s decisions may be appealed to the Pennsylvania Commonwealth Court. Pennsylvania requires opportunities for public participation in DEP’s permitting process through written comments, public meetings, and public hearings. DEP may also invite additional public participation at its discretion. DEP provides opportunities for public involvement by (1) making available a copy of the permit application, emissions data, and other information related to a permit application; (2) receiving comments and answering questions at public meetings; (3) in many cases, holding a hearing to document public concerns as an official part of the public notice process; and (4) soliciting written comments from the general public on its draft permit. The need for a hearing depends on the quantity and nature of comments—DEP typically holds a hearing for large power plant projects or for projects with a lot of public opposition. DEP considers both solicited and unsolicited comments in reviewing a permit application. DEP makes its draft permit available for public review and comment and considers revisions to the permit based on the comments received. Concurrent with public review and comment, DEP also sends the draft permit to EPA for its review and comment in accordance with applicable state and federal requirements. Although members of the public can participate in DEP’s public hearings, they cannot intervene in the administrative appeal process until the permit has been issued. After a permit has been issued, the permittee or the public can appeal the issuance of the permit to the Environmental Hearing Board. 
If a power plant proposed for the eastern or central part of Pennsylvania would withdraw more than 100,000 gallons of water a day from a river basin for operations, the developer must obtain permit approval from the Delaware River Basin Commission or the Susquehanna River Basin Commission. The Delaware River Basin Commission’s review of a water use application in eastern Pennsylvania often takes between 6 months and 1 year, according to commission officials. Developers can apply for a permit while their other permit applications are being considered. However, the commission cannot issue a permit until DEP has issued all water quality permits. Commission officials said that processing the permit usually takes about 60 days once DEP has issued the water permits. Three Pennsylvania state agencies are responsible for protecting endangered and threatened species: (1) the Fish and Boat Commission is responsible for fish, other aquatic organisms, reptiles, and amphibians; (2) the Game Commission is responsible for birds and mammals, including 14 endangered species; and (3) the Department of Conservation and Natural Resources is responsible for native wild plants. The Department of Conservation and Natural Resources maintains the Pennsylvania Natural Diversity Inventory, which includes all of the department’s lists of where threatened and endangered species, critical habitats, and areas of critical dependence are known to occur. The U.S. Fish and Wildlife Service and Pennsylvania’s Fish and Boat Commission provide DEP with additional listings of species and habitat ranges. Permit applicants are required to (1) conduct a database search of the Pennsylvania Natural Diversity Inventory to determine the potential presence of a listed species in the vicinity of the permit application area and (2) check any other readily available sources provided by the natural resource agencies. 
If the applicant finds that the project might affect a habitat area, the applicant must contact the relevant natural resource agency. The agency then provides advice about species presence, critical habitat, and critical dependence issues. If the activity may harm the species, the applicant must work with the natural resource agency to conduct surveys, modify the project, or devise any other relevant actions to protect the species and its critical habitat. An applicant submitting its permit application to DEP must provide proof of coordination. Alternatively, the applicant must provide documentation if no habitats for listed species were found in the affected area. In addition, the public may identify threatened or endangered species issues not previously addressed when DEP made the draft permit available for comment. Pennsylvania does not consider the air and water quality permits to be federal actions that trigger notification of the U.S. Fish and Wildlife Service. While DEP does not specifically consult with the U.S. Fish and Wildlife Service about individual permit applications, the Fish and Wildlife Service may provide comments during the comment period. TNRCC is responsible for approving environmental permits in Texas. TNRCC must issue air and water quality permits to an applicant that has demonstrated compliance with federal and state requirements. EPA has delegated responsibility for approving air quality permits to TNRCC, which has 16 regional offices throughout the state. All air pollution sources are required to obtain an operating permit, unless they are “grandfathered” facilities that were in existence on the effective date of the Texas New Source permit program in 1971 and have not increased their emissions of any air pollutant. TNRCC’s Air Permits Division conducts a new source review of all major industrial projects—in both non-attainment and attainment areas. 
The extent of and time frame for TNRCC’s review depend on (1) the ambient air quality around the proposed project, (2) whether the project is a major or minor source of emissions, and (3) the amount and type of public participation. The Dallas-Fort Worth, Houston-Galveston, Beaumont-Port Arthur, and El Paso metropolitan areas are non-attainment areas in Texas. If a project is in a non-attainment area and emits more than federally defined levels of the relevant pollutant, TNRCC must consult with EPA’s Region 6, and the developer typically would have to install advanced emission control technologies and purchase emissions credits to offset added pollution. A proposed power plant project in an attainment area generally would qualify for minor source permitting if it emits less than the federally defined level of any criteria pollutant. Alternatively, if the proposed project is in an attainment area and emits more than federally defined levels of the relevant pollutant, it would have to comply with a “prevention of significant deterioration” permit. TNRCC generally approves an air quality permit within 6 to 9 months and an amendment to a permit within 4 to 6 months. To comply with a prevention of significant deterioration permit, applicants reduce pollutant emissions using best available control technology—developers generally use selective catalytic reduction technology to reduce nitrogen oxide pollution. TNRCC recommends nitrogen oxide limits of 5 parts per million as best available control technology for natural gas-fired combined-cycle operations. TNRCC staff told us that Texas uses “not to exceed” emissions limits based upon a 1-hour averaging time period. For example, to meet very low emissions limits, some applicants seek to average emissions levels over a longer period—which can range from 1 hour to 30 days.
The longer period provides a buffer for the plant’s actual operations—certain conditions, such as startup and cycling, force emissions higher over a short period. TNRCC also does not recommend lower nitrogen oxide limits because reduction controls involve trade-offs with increased ammonia slip, a contaminant under the Texas Clean Air Act. TNRCC’s recommended carbon monoxide limits range from 9 to 25 parts per million as best available control technology for all gas-fired turbines. TNRCC is responsible for issuing water quality permits under the Clean Water Act. TNRCC’s Water Quality and Water Supply Divisions are responsible for the quality, quantity, and availability of water in Texas. In 1998, EPA authorized TNRCC to administer certain permitting processes under the Texas Pollutant Discharge Elimination System, instead of EPA’s National Pollutant Discharge Elimination Program. TNRCC staff said it takes about 9 months to 1 year to obtain a water permit. TNRCC staff assist developers in preparing applications by providing pre-application consultations and guidance documents. TNRCC’s permits and modeling groups consult with developers about 3 months before the application is submitted. Once it receives a permit application, TNRCC reviews it for administrative completeness. If the application is incomplete and additional information is necessary, this review takes about 30 days. Once it considers an application complete, TNRCC requires the developer to (1) notify the public of the project by publishing notices in local newspapers and posting a sign at the proposed site and (2) perform air dispersion modeling for all emission sources using EPA-approved computer-based mathematical models. TNRCC staff audit the modeling and evaluate the resulting predicted off-property impacts. TNRCC generally completes its technical review and prepares a draft permit within 90 days and mails the draft permit to the applicant for comment and negotiation, which takes about 30 days.
Local and county officials, federal officials, and other interested persons then receive a second public notice announcing the draft permit and providing a 30-day comment period. TNRCC sends each draft permit to EPA. EPA has 30 days to provide comments, although it may ask for additional time to address comments it receives from the public. In addition to giving members of the public the opportunity to submit written or oral comments about a proposed project, Texas allows individuals who oppose an application and who meet certain requirements to request to participate in a contested evidentiary hearing before an administrative law judge. In such hearings, parties have the right, for example, to present testimony, offer evidence, cross-examine other parties’ witnesses, object to the introduction of evidence, and file legal motions. The administrative law judge issues a formal recommendation to the TNRCC commission, which issues a final decision. TNRCC officials told us that a contested permit application could add from 1 to 3 years to the project. Since 1995, 15 of 84 air permit applications in Texas had requests for contested hearings. Two requests resulted in hearings, and three requests were denied a hearing. Of the remaining requests, seven were withdrawn, one was pending, and two were relocated. TNRCC makes its draft permit available for public comment for a 30-day period by providing notice in a widely read local newspaper and directly notifying the local mayor and other local government officials, the county judge, EPA, the U.S. Fish and Wildlife Service, the Advisory Council on Historic Preservation, the Texas Historical Commission, and the Texas Parks and Wildlife Department. If TNRCC receives a request for a hearing, it determines whether it should hold a hearing, which it generally does about 30 days after the request. TNRCC may adopt the proposed permit, adopt the proposed permit with changes, or deny the permit application.
Appeals may be filed with TNRCC once it makes a final decision on permit issuance. Texas requires a water rights permit for the use of state surface water. TNRCC typically approves a permit for water rights in 9 months to 1 year for an uncontested application. Each application for a permit is reviewed for administrative completeness; applicants have 30 days to respond if the application is deficient. The technical review, which may take 180 days, evaluates impact on other water rights, bays and estuaries, conservation, and water availability through modeling. Once the administrative process is complete, TNRCC provides notice to the public and gives other water rights holders the opportunity for a hearing. Permits may be issued in perpetuity, for a limited number of years, or for temporary uses. Because of increasing water demands for municipal, industrial, and other uses, TNRCC grants new water rights only where normal flows and levels are sufficient to meet demand. As a result, some power plant developers have looked for alternative options to meet their water needs. For example, a company recently negotiated a contract to obtain surface water from a nearby city. When the city submitted an application to amend its water rights permit, opponents of the sale asked for hearings to contest the permit. The company then decided to use another city’s existing water right and effluent for the power plant cooling towers. In another case, a company purchased the water rights from another holder to appropriate water from the Colorado River instead of applying for a new water rights permit. The ownership transfer was completed in 30 days. An application to amend the water rights to include industrial use was completed 3 months later. The Texas Pollutant Discharge Elimination System requires that permits and water quality standards protect the environment, including habitats for endangered and threatened species.
Texas does not consider the air and water quality permits to be federal actions that trigger notification of the U.S. Fish and Wildlife Service. However, if the Endangered Species Act is a concern for a permit, TNRCC notifies the U.S. Fish and Wildlife Service, the National Marine Fisheries Service, and the Texas Parks and Wildlife Department and asks for their comments. According to TNRCC officials, an Endangered Species Act concern also automatically triggers EPA oversight under the Memorandum of Agreement between TNRCC and EPA. Before the permit application is submitted to TNRCC, the applicant usually visits the community where it plans to locate the power plant to determine if the local government and community will support or oppose the power plant project. The applicant is responsible for ensuring that the proposed site is properly zoned, or can be rezoned within acceptable time frames. Most communities generally have welcomed gas-fired power plants because they provide a large tax base for the communities and pose few environmental concerns. Similarly, environmental groups have not opposed power plants because natural gas is a low-pollution fuel. In addition to those named above, Jon Ludwigson, Ilga Semeiks, Frank Rusco, Carol Herrnstadt Shulman, Leigh White, and Cleo Zapata made key contributions to this report.

Twenty-four states and the District of Columbia have restructured electricity markets by shifting from service provided through a regulated monopoly to service provided through open competition among the local utilities and their competitors. The restructuring was intended to boost competition and expand consumer choice, increase efficiency, and lower prices. Of the three states GAO studied, Texas had the greatest need for additional electric power, and it added the most new capacity from 1995 through 2001. In contrast, California added 25 percent of the forecasted need for capacity over this period.
Although Pennsylvania added less than half of its forecasted need for capacity, the state continues to be a net exporter of electricity to nearby states. The three states have similar processes for approving applications to build and operate new power plants. In all three states, state and local agencies must review the applications to ensure that the developer complies with environmental, land use, and other requirements before issuing the permits necessary to build and operate a power plant. California also has a state energy commission that reviews each power plant application to determine whether the benefits of additional electricity outweigh its likely negative environmental or other effects. Texas' rules for connecting new power plants to the electricity transmission system are less costly for independent developers and are administratively simpler than the approaches used in California and Pennsylvania. In deciding where to build new power plants, independent developers said they weigh a market's risks, including uncertainty about changes in a state's market rules, against expected profits. Higher risks require higher expected profits.
The gross tax gap is an estimate of the difference between the taxes—including individual income, corporate income, employment, estate, and excise taxes—that should have been paid voluntarily and on time and what was actually paid for a specific year. Of the estimated $345 billion tax gap for tax year 2001, IRS estimated that it would eventually recover about $55 billion of that through late payments and enforcement actions, for a net tax gap of $290 billion. The estimate is an aggregate of estimates for the three primary types of noncompliance: (1) underreporting of tax liabilities on tax returns; (2) underpayment of taxes due from filed returns; and (3) nonfiling, which refers to the failure to file a required tax return altogether or on time. We have made many recommendations over time that could address the tax gap. IRS’s tax gap estimates for each type of noncompliance include estimates for some or all of the five types of taxes that IRS administers. Underreporting of tax liabilities can occur when a taxpayer underreports income earned or overclaims deductions from income. As shown in table 1, underreporting of tax liabilities—particularly for the individual income tax—accounted for most of the tax gap estimate for tax year 2001. We have encouraged regular tax gap measurements, and IRS officials have indicated that they will be updating their tax gap estimates later in 2011 or early 2012. We believe that these estimates are important for gauging progress in addressing the tax gap, and analyzing the data used to estimate them can help identify ways to improve tax compliance. Taxpayers who underreported the amount of individual income tax they owed represented an estimated $197 billion of the 2001 tax gap, and $165 billion of that amount was due to individual tax filers underreporting their income.
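The arithmetic relating the gross and net figures above is straightforward; the following is a minimal sketch using the tax year 2001 estimates (the function and variable names are illustrative, not IRS terminology):

```python
# Illustrative sketch of the gross vs. net tax gap relationship,
# using IRS's tax year 2001 estimates in billions of dollars.
GROSS_TAX_GAP = 345        # taxes not paid voluntarily and on time
EXPECTED_RECOVERIES = 55   # expected late payments and enforcement collections

def net_tax_gap(gross, recoveries):
    """Net tax gap: the gross gap less amounts IRS expects to recover."""
    return gross - recoveries

print(net_tax_gap(GROSS_TAX_GAP, EXPECTED_RECOVERIES))  # 290
```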
As shown in table 2, underreporting of individuals’ business income and nonbusiness income accounted for $109 billion and $56 billion, respectively, of the 2001 tax gap. IRS has concerns with the certainty of the tax gap estimate for tax year 2001 in part because some areas of the 2001 estimate rely on data originally gathered in the 1970s and 1980s. IRS has no estimates for other areas of the tax gap, and it is inherently difficult to measure some types of noncompliance. Some analysts believe the 2001 estimate likely underestimated the tax gap and that in absolute dollars it is likely larger now than in 2001. IRS’s overall approach to reducing the tax gap consists of improving service to taxpayers and enhancing enforcement of the tax laws. IRS seeks to improve voluntary compliance through efforts such as education and outreach programs and tax form simplification. It also uses its enforcement authority to ensure that taxpayers are reporting and paying the proper amounts of taxes through efforts such as examining tax returns and matching the amount of income taxpayers report on their tax returns to the income amounts reported on information returns it receives from third parties. In spite of IRS’s efforts to improve taxpayer compliance, the rate at which taxpayers pay their taxes voluntarily and on time has tended to range from around 81 percent to around 84 percent over the past three decades. The sum of the estimated revenue loss due to tax expenditures was over $1 trillion in 2010. Tax expenditures are often aimed at policy goals similar to those of federal spending programs. Existing tax expenditures, for example, help students and families finance higher education and provide incentives for people to save for retirement. 
Because tax expenditures result in forgone revenue for the government, they have a significant effect on overall tax rates—all else equal, for any given level of revenue, tax expenditures mean that overall tax rates must be higher than they would be in a tax system with no tax expenditures. In 2005, we recommended that the federal government take several steps to ensure greater transparency of and accountability for tax expenditures by reporting better information on tax expenditure performance and more fully incorporating tax expenditures into federal performance management and budget review processes. The federal tax system contains complex rules. These rules may be necessary, for example, to ensure proper measurement of income, target benefits to specific taxpayers, and address areas of noncompliance. However, these complex rules also impose a wide range of record keeping, planning, computational, and filing requirements upon businesses and individuals. Complying with these requirements costs taxpayers time and money. As shown in figure 1, these costs to taxpayers are above and beyond what they pay to the government in taxes. Estimating total compliance costs is difficult because neither the government nor taxpayers maintain regular accounts of these costs, and federal tax requirements often overlap with record keeping and reporting that taxpayers do for other purposes. Although available estimates are uncertain, taken together, they suggest that total compliance costs are large. For example, in 2005 we reviewed existing studies and reported that even using the lowest available compliance cost estimates for the personal and corporate income tax, combined compliance costs would total $107 billion (roughly 1 percent of gross domestic product) per year; other studies estimate costs 1.5 times as large.
The tax system also results in economic efficiency costs, which are reductions in economic well-being caused by changes in behavior due to taxes, government benefits, monopolies, and other forces that interfere in the market. Efficiency costs can take the form of lost output or consumption opportunities. For example, economists generally agree that the favorable tax treatment of owner-occupied housing distorts investment in the economy, resulting in too much investment in housing and too little business investment. Estimating efficiency costs associated with the tax system is challenging because it has extensive and diverse effects on behavior. In fact, in a 2005 report, we found no comprehensive estimates of the efficiency costs of the current federal tax system. The two most comprehensive studies we found suggest that these costs are large—on the order of magnitude of 2 to 5 percent of GDP each year (as of the mid-1990s). However, the actual efficiency costs of the current tax system may not fall within this range because of uncertainty surrounding taxpayers’ behavioral responses, changes in the tax code and the economy since the mid-1990s, and the fact that the two studies did not cover the full scope of efficiency costs. Tax software and the use of paid tax return preparers may mitigate the need for taxpayers to understand complexities of the tax code. In 2010, IRS processed about 137 million returns. As we have previously reported, about 90 percent of returns are prepared by individual taxpayers or paid preparers using professional or commercial software. Software companies and paid preparers often act as surrogate tax administrators in that they keep abreast of tax law changes. A participant at the 2007 Joint Forum on Tax Compliance stated that taxpayers receiving assistance in preparing their individual tax returns, either from paid preparers or tax preparation software, are somewhat insulated from tax code complexity.
However, while many paid tax preparers help taxpayers by using their expertise to help ensure that complex laws are understood, others may introduce their own mistakes. For example, in a limited investigation in 2006, all 19 of the tax return preparers who prepared returns for our undercover investigators produced errors, some with substantial consequences. IRS’s review of 2001 tax returns also found that tax returns prepared by paid preparers contained a significant level of errors. IRS audits of returns prepared by a paid preparer showed a higher error rate—56 percent—than audits of returns prepared by the taxpayer—47 percent. Income measurement is straightforward for a large proportion of the individual taxpayer population: those who earn only labor and interest income and capital income within a retirement account generally have their income reported to them (and to the IRS) by the source of the income. However, substantial numbers of taxpayers who receive income from capital gains, rents, self-employment, and other sources often deal with complex tax laws, complicated calculations, and detailed record keeping. While complexities lead some taxpayers to make mistakes when reporting their income, some misreporting is due to intentional acts of tax evasion. For example, IRS studies show that the majority of capital asset transactions and capital gains and losses were for securities transactions such as sales of corporate stock, mutual funds, bonds, options, and capital gain distributions from mutual funds. Taxpayers are required to report securities transactions on their federal income tax returns. To accurately report securities sales, the taxpayer must have records of the dates they acquired and sold the asset; sales price, or gross proceeds from the sale; cost or other basis of the sold asset; and resulting gains or losses. They must report this information separately for short-term transactions and long-term transactions. 
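The record keeping just described (acquisition and sale dates, proceeds, basis, and the short-term versus long-term split) can be sketched in a few lines. This is a simplified, hypothetical illustration: the names are invented, the holding-period test is approximated as more than 365 days, and real reporting also involves basis adjustments and other rules.

```python
from datetime import date

def classify_and_compute(acquired, sold, proceeds, basis):
    """Return (gain_or_loss, term) for a single securities sale.
    Simplified: a holding period over 365 days is treated as long-term."""
    gain_or_loss = proceeds - basis
    term = "long-term" if (sold - acquired).days > 365 else "short-term"
    return gain_or_loss, term

# Stock bought for $3,000 and sold 18 months later for $4,250:
print(classify_and_compute(date(2000, 1, 15), date(2001, 7, 20), 4250, 3000))
# (1250, 'long-term')
```

A taxpayer would total the results separately for short-term and long-term transactions, which is the separation the reporting rules require.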
Further, before taxpayers can determine any gains or losses from securities sales, they must determine if and how the original cost basis of the securities must be adjusted to reflect certain events, such as stock splits, nontaxable dividends, or nondividend distributions. Complex income-reporting requirements for securities transactions may contribute to taxpayers’ misreporting their income. In 2006, we estimated that 8.4 million of the estimated 21.9 million taxpayers with securities transactions misreported their gains or losses for tax year 2001. A greater estimated percentage of taxpayers misreported gains or losses from securities sales (36 percent) than capital gain distributions from mutual funds (13 percent), and most of the misreported securities transactions exceeded $1,000 of capital gain or loss. This may be because taxpayers must determine the taxable portion of securities sales’ income whereas they need only add up their capital gain distributions. Furthermore, about half of these taxpayers who misreported failed to accurately report the securities’ basis, sometimes because they did not know the basis or failed to adjust the basis appropriately. Although we were not able to estimate the capital gains tax gap for securities, we were able to determine the direction of the misreporting. For securities sales, an estimated 64 percent of taxpayers underreported their income from securities (i.e., they understated gains or overstated losses) compared to an estimated 33 percent of taxpayers who overreported income (i.e., they overstated gains or understated losses). For both underreported and overreported income, some taxpayers misreported over $400,000 in gains or losses. Small businesses—which include sole proprietorships and S corporations, among other entities—are subject to multiple layers of filing, reporting, and deposit requirements. 
These requirements reflect IRS’s administration of a variety of tax and other policies, including income, employment, and excise taxes, as well as pension and other employee benefit programs. In considering the number of requirements, it is important to note that the requirements reflect many decisions and compromises made by Congress and administrations to accomplish their policy goals, including those that may benefit small businesses and other taxpayers. Sole proprietors face significant complexities in reporting income. This complexity may contribute to the estimated $68 billion of the tax gap caused by sole proprietors underreporting their net business income, which can stem either from understated receipts or overstated expenses. For example, sole proprietors report their business-related profit or loss on their individual income tax return, and they can use their losses to offset other categories of income on their returns in the year that they incur the loss. Identifying which of a sole proprietor’s payments qualify as business expenses and the amount to be deducted can be complex. For example, two types of payments—costs of goods sold and capital improvements—must be distinguished from other types of payments because they are treated differently under tax rules. Expenses that are used partly for business and personal purposes can be deducted only to the extent they are used for business. Individual taxpayers who are shareholders in S corporations may also experience difficulty because of complexity in income measurement. An S corporation is a federal business type that provides tax benefits and limited liability protection to shareholders. S corporations are not generally taxed at the entity level: income, losses, and deduction items pass through to the individual shareholders’ income tax returns, and the shareholders are taxed on any net income. 
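The loss-limitation mechanics behind this pass-through structure can be sketched roughly as follows. This is a simplified, hypothetical illustration; the actual rules for computing and ordering adjustments to stock and debt basis are considerably more involved.

```python
def deductible_loss(allocated_loss, stock_basis, debt_basis):
    """Simplified S corporation loss limitation: a shareholder generally may
    claim pass-through losses only up to total basis; the excess is suspended
    and carried forward to later years."""
    total_basis = stock_basis + debt_basis
    allowed = min(allocated_loss, total_basis)
    suspended = allocated_loss - allowed
    return allowed, suspended

# A shareholder allocated a $50,000 loss with $30,000 of stock basis and
# $5,000 of debt basis may currently claim $35,000; $15,000 is suspended.
print(deductible_loss(50_000, 30_000, 5_000))  # (35000, 15000)
```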
S corporations are to provide their shareholders and IRS with information on the allocation of income, losses, and other items. As we have previously reported, one source of complexity for S corporation shareholders may arise when calculating basis—their ownership share of the corporation—in order to claim losses and deductions to offset other earned income. Shareholders generally can only claim losses and deductions up to the amount of basis the shareholder has in the S corporation’s stock and debt. While the S corporation is required to send shareholders some information that can be used to calculate basis, S corporations are not required to report any basis calculations to shareholders. IRS officials and S corporation stakeholder representatives told us that calculating and tracking basis was one of the biggest challenges in complying with S corporation rules. In 2009, we recommended that Congress require S corporations to calculate shareholder’s stock and debt basis as completely as possible and report the calculation to shareholders and IRS. In an analysis of IRS’s annual examinations of individual tax returns that closed for fiscal years 2006 through 2008, we found the amount of the misreported losses that exceeded basis limitations was over $10 million, or about $21,600 per taxpayer. The growing number of tax expenditures is among the causes of tax code complexity. Between 1974 and 2010, tax expenditures reported by the Department of the Treasury more than doubled in overall number from 67 to 173. Tax expenditures are an important means the government uses to address a wide variety of social objectives, from supporting educational attainment, to providing low-income housing, to ensuring retirement income, and many others. However, tax expenditures add to tax code complexity in part because they require taxpayers to learn about, determine their eligibility for, and choose between tax expenditures that have similar purposes. 
Tax expenditures also complicate tax planning, as taxpayers must predict their own future circumstances as well as future tax rules to make the best choice among provisions. Savings incentives within the tax code illustrate how tax expenditures add to complexity. While the tax code includes numerous types of savings incentives—including those for healthcare and higher education—my statement will focus on retirement savings as a key example. Taxpayers can choose between traditional Individual Retirement Arrangements (IRA) and Roth IRAs for retirement savings. Although the tax rules for distributions diverge for traditional and Roth IRAs, taxpayers may not know that a 10 percent early withdrawal penalty, with some exceptions, applies to both IRA types. Taxpayers also get confused over which IRA early withdrawals are not subject to penalties, in part because the exceptions differ for employer pension plans. Additionally, both types of IRAs have rules governing eligibility to contribute, and contributions to each are subject to an annual limit. However, taxpayers may not understand that the annual contribution limit applies across traditional IRAs and Roth IRAs in combination, which may lead them to overcontribute. With regard to record-keeping burden, taxpayers with traditional or Roth IRAs must track the total amount of contributions in a given year and reasons for distributions to accurately report this information on their tax returns. Frequent changes to IRA rules (such as increasing contribution limits and allowing workers to tap IRA assets for certain nonretirement purposes without an early withdrawal penalty) have also made tax planning more difficult for taxpayers. As we reported in 2008, IRS research and enforcement data show that—in the aggregate—many taxpayers misreported millions of dollars in traditional IRA contributions and distributions on their tax returns. 
We reported that in tax year 2001 the following occurred: Of the taxpayers who made deductible traditional IRA contributions, an estimated 14.8 percent (554,657 taxpayers) did not accurately report the IRA deduction on their individual tax returns—10.4 percent overstated their deductible contributions (that is, exceeded the applicable limit) and 4.4 percent underreported their deductible contributions (that is, reported less on their returns than they actually could deduct). The understated net income due to these misreported traditional IRA contribution deductions was $392 million, including both taxpayers who either overstated or understated their contribution deductions to a traditional IRA. Of the taxpayers who had taxable traditional IRA distributions, an estimated 14.6 percent (1.5 million taxpayers) misreported withdrawals from their traditional IRA distributions—13.7 percent understated (that is, reported an amount less than what the taxpayer withdrew) and 0.9 percent overstated IRA distributions (that is, reported an amount greater than what the taxpayer withdrew). The underreported net income due to misreported IRA distributions was $6.3 billion, including taxpayers who failed to report early distributions and the associated tax. Taxpayers also make costly mistakes when choosing higher-education tax incentives. In a 2008 testimony, we reported that among tax filers who appeared to be eligible for a tax credit or tuition deduction in tax year 2005, about 19 percent, representing about 412,000 returns, failed to claim any of them. The amount by which these tax filers failed to reduce their tax averaged $219; 10 percent of this group could have reduced their tax liability by over $500. In total, including both those who failed to claim a tax credit or tuition deduction and those who chose a credit or a deduction that did not maximize their benefit, we found that in 2005, 28 percent, or nearly 601,000 tax filers, did not maximize their potential tax benefit. 
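The combined-limit rule noted earlier for IRA contributions (a single annual limit applying across traditional and Roth IRAs together) can be sketched as a simple check. The dollar limit below is a placeholder for illustration, not the statutory figure for any particular year, and the function name is hypothetical.

```python
def excess_ira_contribution(traditional, roth, annual_limit):
    """Amount contributed above the combined annual IRA limit, which applies
    to traditional and Roth contributions in combination, not separately."""
    return max(0, (traditional + roth) - annual_limit)

# With a placeholder $5,000 limit, $3,000 traditional plus $3,000 Roth
# exceeds the combined limit by $1,000 even though neither alone does.
print(excess_ira_contribution(3_000, 3_000, 5_000))  # 1000
```

This is the check taxpayers often miss: each account taken alone can look compliant while the combined total overshoots the limit.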
Some tax expenditures also provide taxpayers who intend to evade taxes with opportunities to do so. For example, the Treasury Inspector General for Tax Administration (TIGTA) reported in 2011 that the First-time Homebuyer Credit (FTHBC) and the subsequent changes made to the credit have confused taxpayers and allowed individuals to make fraudulent claims for the refundable credit. In particular, TIGTA reported that many taxpayers claiming the credit appeared not to be first-time homebuyers because tax information indicated they had owned homes within 3 years prior to their new home purchase. The 2008 FTHBC provided taxpayers a refundable credit of up to $7,500 that must be repaid in $500 increments each year over 15 years beginning in the 2011 filing season. According to recent IRS data, the total amount to be repaid by taxpayers is $7 billion. The American Recovery and Reinvestment Act of 2009 increased the maximum FTHBC credit to $8,000, with no payback required unless the home ceases to be the taxpayer’s principal residence within 3 years. In 2009, we testified that IRS faced significant challenges in determining if taxpayers were complying with the numerous conditions for the credit. For example, to determine eligibility, IRS had to verify that taxpayers had not owned a house in the previous 3 years and verify the closing date on home purchases. Other challenges included enforcing the $500 per year payback provision in the 2008 credit. Multiple approaches are needed to reduce the tax gap. No single approach is likely to fully and cost-effectively address noncompliance since the noncompliance has multiple causes and spans different types of taxes and taxpayers. While the tax gap will remain a challenge into the future, the following strategies could help. These strategies could require actions by Congress or IRS. Enhancing information reporting can reduce complexity for taxpayers.
It can also reduce the opportunities available for taxpayers to evade taxes by, for example, underreporting business income or filing fraudulent claims for tax credits. Generally, new requirements on third parties to submit information returns would require statutory changes, whereas improvements to existing information-reporting forms may be done administratively by IRS. The extent to which individual taxpayers accurately report the income they earn has been shown to be related to the extent to which the income is reported to them and IRS by third parties or taxes on the income are withheld. For example, employers report most wages, salaries, and tip compensation to employees and IRS through Form W-2. Also, banks and other financial institutions provide information returns (Forms 1099) to account holders and IRS showing the taxpayers’ annual income from some types of investments. Findings from IRS’s study of individual tax compliance indicate that nearly 99 percent of these types of income are accurately reported on individual tax returns. For types of income for which there is little or no information reporting, individual taxpayers tend to misreport over half of their income. One area where improved information reporting could help is higher- education expenses. Eligible educational institutions are required to report information on qualified tuition and related expenses for higher education to both taxpayers and IRS so that taxpayers can determine the amount of educational tax benefits that can be claimed. However, the information currently reported by educational institutions on tuition statements sent to IRS and taxpayers (on Form 1098-T) may be confusing for taxpayers who use the form to prepare their tax returns and not very useful to IRS. IRS requires institutions to report on Form 1098-T either the (1) amount of payments received, or (2) amount billed for qualified expenses. 
IRS officials stated that most institutions report the amount billed and do not report payments. However, the amount billed may not equal the amount that can be claimed as a credit. In order to reduce taxpayer confusion and enhance compliance with the eligibility requirements for higher-education benefits, in 2009 we recommended that IRS revise Form 1098-T to improve the usefulness of information on qualifying education expenses. Another area where improved information reporting could improve compliance is rental income. In 2008, we estimated that at least 53 percent of individual taxpayers with rental real estate misreported their rental real estate activities for tax year 2001, resulting in an estimated $12.4 billion of net misreported income. IRS enforcement officials cited limited information reporting as a major challenge in ensuring compliance because, without third-party information reporting, it is difficult for IRS to systematically detect taxpayers who fail to report any rent or to determine whether the rent and expense amounts taxpayers report are accurate. In 2008, we recommended that IRS require third parties to report mortgaged property addresses to help IRS identify who may have misreported their rental real estate activity, but IRS did not adopt our recommendation because of the burden on third parties and the lack of an IRS compliance program to use such information. We made a similar recommendation in a 2009 report, which IRS was still evaluating as of December 2010. While information reporting reduces the complexity of reporting income for individual taxpayers, this tool can create costs for the third parties responsible for reporting the income to the taxpayer and IRS. For example, we previously reported that expanding information reporting on securities sales to include basis information would involve challenges for brokers and IRS.
In particular, brokers would bear costs and burdens—even as taxpayers’ costs and burdens decrease somewhat—and many issues would arise about how to calculate adjusted basis, which securities would be covered, and how information would be transferred among brokers. In some cases it is difficult to identify third parties for whom a reporting requirement could be enforced without an undue burden on both the third parties and IRS. In a 2009 report, we found that a major reason little information reporting on sole proprietor expenses exists is the difficulty of identifying third parties. For example, there is no third party who could verify the business use of cars or trucks by sole proprietors.

Ensuring high-quality services is a necessary foundation for voluntary compliance, so action by IRS to improve the quality of services provided to taxpayers would be beneficial. High-quality services can help taxpayers who wish to comply but do not understand their obligations. IRS taxpayer services include education and outreach programs, simplification of the tax process, and revision of forms and publications to make them electronically accessible and more easily understood by diverse taxpayer communities. For example, if tax forms and instructions are unclear, taxpayers may be confused and make unintentional errors. Ensuring high-quality taxpayer services would also be a key consideration in implementing any of the approaches for tax gap reduction. For example, expanding enforcement efforts would increase interactions with taxpayers, requiring processes to efficiently communicate with taxpayers. Changing tax laws and regulations would also require educating taxpayers about the new requirements in a clear, timely, and accessible manner. For example, we previously reported that while taxpayers’ access to telephone assistance in tax year 2009 was better than in the previous year, it remained lower than in 2007, in part because of calls about tax law changes.
Despite heavy call volume, the accuracy of IRS responses to taxpayers’ questions remained above 90 percent.

Congressional efforts to simplify the tax code and otherwise alter current tax policies may help reduce the tax gap by making it easier for individuals and businesses to understand and voluntarily comply with their tax obligations. One way to simplify the tax code is to eliminate or combine tax expenditures, thereby helping reduce taxpayers’ unintentional errors and limiting opportunities for tax evasion. As we have previously testified, the Government Performance and Results Act (GPRA) Modernization Act of 2010 (GPRAMA) could help inform reexamination or restructuring efforts and lead to more efficient and economical executive-branch service delivery in overlapping program areas. The act is intended to identify the various agencies and federal activities—including spending programs, regulations, and tax expenditures—that contribute to crosscutting outcomes. While simplification can have benefits, it can also have drawbacks. Eliminating tax expenditures would reduce the incentives for the activities they were designed to encourage. Also, in 2005, we stated that changes to the tax system can create winners and losers. The government may attempt to mitigate large gains and losses by implementing transition rules. Deciding whether transition relief is necessary involves trade-offs among equity, efficiency, simplicity, transparency, and administrability. Similar trade-offs exist with possible fundamental tax reforms that would move away from an income tax system to some other system, such as a consumption tax, national sales tax, or value-added tax. Fundamental tax reform would most likely result in a smaller tax gap if the new system has few tax preferences or complex tax code provisions and if taxable transactions are transparent.
However, these characteristics are difficult to achieve in any system, and experience suggests that simply adopting a fundamentally different tax system, whatever the economic merits, may not by itself eliminate any tax gap. For example, in 2008, we reported that some available data indicate a value-added tax may be less expensive to administer than an income tax. However, we found that like other systems, even a simple value-added tax—one that exempts few goods or services—has compliance risks and, largely as a consequence, generates administrative costs and compliance burden. Similar to other taxes, adding complexity through exemptions or reduced rates for some goods or services generally decreases revenue and increases compliance risks because of the incentive to misclassify purchases and sales. Such complexity also increases the record-keeping burden on businesses and increases the government resources devoted to enforcement. Any tax system could be subject to noncompliance, and its design and operation, including the types of tools made available to tax administrators, will affect the size of any corresponding tax gap. Further, the motivating forces behind tax reform include factors beyond tax compliance, such as economic effectiveness, equity, and burden, which could in some cases carry greater weight in designing an alternative tax system than ensuring the highest levels of compliance. Policymakers may find it useful to compare any proposed changes to the tax code based on a set of widely accepted criteria for assessing alternative tax proposals. These criteria include the equity, or fairness, of the tax system; the economic efficiency, or neutrality, of the system; and the simplicity, transparency, and administrability of the system. These criteria can sometimes conflict, and the weight one places on each criterion will vary among individuals.
Our publication, Understanding the Tax Reform Debate: Background, Criteria, and Questions, may be useful in guiding policymakers as they consider tax reform proposals. Devoting additional resources to enforcement has the potential to help reduce the tax gap by billions of dollars. However, determining the appropriate level of enforcement resources to provide IRS requires taking into account factors such as how effectively and efficiently IRS is currently using its resources, how to strike the proper balance between IRS’s taxpayer service and enforcement activities, and competing federal funding priorities. If Congress were to provide IRS more enforcement resources, the amount that the tax gap could be reduced depends in part on factors such as the size of budget increases, how IRS manages any additional resources, and the indirect increase in taxpayers’ voluntary compliance resulting from expanded enforcement. Providing IRS with additional funding would enable it to contact millions of potentially noncompliant taxpayers it currently identifies but cannot contact given resource constraints. However, devoting additional resources to enforcement will not completely close the tax gap. For example, in a 2009 report, we reported that IRS’s compliance programs focused on sole proprietors’ underreporting of income addressed only a small portion of sole proprietor expense noncompliance. Despite investing nearly a quarter of all revenue agent time in 2008, IRS was able to examine (audit) about 1 percent of estimated noncompliant sole proprietors. These exams are costly and yielded less revenue than exams of other categories of taxpayers, in part because most sole proprietorships are small in terms of receipts. IRS could reduce the tax gap by expanding compliance checks before issuing refunds to taxpayers. In April 2011, the Commissioner of Internal Revenue talked about a long-term vision to increase compliance activities before refunds are sent to taxpayers. 
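One way to picture such prerefund compliance checks is a simple match of third-party information returns against amounts on the filed return before any refund is released. The sketch below is a hypothetical illustration; the function, data shapes, and decision rule are ours, not IRS systems.

```python
# Hypothetical sketch of prerefund document matching: hold a refund
# when wages on the filed return do not equal the total of third-party
# information returns (e.g., Forms W-2) for that taxpayer.
def match_before_refund(filed_wages, info_returns):
    """Return a disposition per taxpayer ID: 'release' or 'hold'."""
    dispositions = {}
    for tin, reported in filed_wages.items():
        third_party_total = sum(info_returns.get(tin, []))
        dispositions[tin] = "release" if reported == third_party_total else "hold"
    return dispositions

filed = {"A": 50_000, "B": 42_000}
w2s = {"A": [30_000, 20_000], "B": [42_000, 5_000]}  # B omitted a second W-2
print(match_before_refund(filed, w2s))  # {'A': 'release', 'B': 'hold'}
```

Matching during the filing season would let an error like taxpayer B’s be caught before the refund goes out rather than recovered afterward.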
In one example, IRS is exploring a requirement that third parties send information returns to IRS and taxpayers at the same time as opposed to the current requirement that some information returns go to taxpayers before going to IRS. The intent is to move to matching those information returns to tax returns during tax return processing. IRS currently matches data provided on over 2 billion information returns to tax returns only after the normal filing season. Matching during the filing season would allow IRS to detect and correct errors before it sends taxpayers their refunds, thereby avoiding the costs of trying to recover funds from taxpayers later. This approach could also allow IRS to use its enforcement resources on other significant compliance problems. However, the Commissioner made clear that his vision for more prerefund compliance checks will take considerable time to implement. One prerequisite would be a major reworking of some fundamental IRS computer systems. To the extent that implementing this vision would require additional budgetary resources or changes in tax policies, Congress would play a key role. If Congress changed the law to include more consistent definitions across tax provisions, then taxpayers could more easily understand and comply with their obligations. Higher-education tax preferences provide an example of inconsistent definitions for qualified education expenses. What tax filers are allowed to claim as a qualified higher-education expense varies between some of the various savings and credit provisions in the tax code. For example, while Coverdell education savings accounts and qualified tuition programs under section 529 of the Internal Revenue Code permit tax filers to include room and board as qualified expenses if the student is enrolled at least half time, the American Opportunity Credit and the Lifetime Learning Credit do not. 
Such dissimilar definitions of qualified education expenses require tax filers to track expenses separately, applying some expenses to certain tax preferences but not to others. There are no easy solutions to the tax gap, but addressing the tax gap is as important as ever in the face of the nation’s fiscal challenges. Innovative thinking and the combined efforts of IRS and Congress will be needed now and in the years to come. Chairman Baucus, Ranking Member Hatch, and Members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For further information on this testimony, please contact Michael Brostek at (202) 512-9110 or brostekm@gao.gov. In addition to the individual named above, David Lewis, Assistant Director; Shannon Finnegan, analyst-in-charge; Sandra Beattie; Amy Bowser; Barbara Lancaster; John Mingus; Erika Navarro; Melanie Papasian; and Jonathan Stehle made key contributions to this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Taxes are necessary because they fund the services provided by government. Several years ago, the Internal Revenue Service (IRS) estimated that the gross tax gap--the difference between taxes owed and taxes paid on time--was $345 billion for 2001. In the face of large and growing deficits, it is important to seek out potential causes of and solutions to the tax gap. Achieving high levels of voluntary compliance is made more challenging as the tax code expands.
Tax expenditures--preferential provisions in the code such as exemptions, exclusions, deductions, credits, and deferral of tax liability--have expanded the tax code, more than doubling in number since 1974. GAO's statement focuses on four key areas: (1) how complexity adds to taxpayer burden and economic efficiency costs; (2) how complexities in reporting income contribute to the tax gap; (3) how tax expenditures add complexity and contribute to the tax gap; and (4) possible strategies for addressing the tax gap. The statement is based largely on GAO's previous work conducted on tax compliance issues affecting individual taxpayers from 2005 through 2011. The federal tax system contains complex rules. These rules may be necessary, for example, to ensure proper measurement of income, target benefits to specific taxpayers, and address areas of noncompliance. However, these complex rules also impose a wide range of recordkeeping, planning, computational, and filing requirements upon businesses and individuals. Complying with these requirements costs taxpayers time and money. In 2005 GAO reviewed existing studies and reported that even using the lowest available compliance cost estimates for the personal and corporate income tax, combined compliance costs would total $107 billion (roughly 1 percent of gross domestic product) per year; other studies estimate costs 1.5 times as large. Economic efficiency costs, which are reductions in economic well-being caused by changes in behavior due to taxes, are estimated to be even larger. Although many taxpayers have simple forms of income, others do not--especially those who receive income from capital gains, rents, self-employment, and other sources--and they may be required to do complicated calculations and keep detailed records. This complexity can engender errors and underpaid taxes. 
For example, GAO has documented millions of taxpayer errors in following complex rules for determining taxpayers' "basis"--generally the taxpayer's investment in a property--in securities they sold or corporations they own. Tax expenditures add to tax code complexity in part because they require taxpayers to learn about, determine their eligibility for, and choose between tax expenditures that have similar purposes. Tax expenditures also complicate tax planning, as taxpayers must predict their own future circumstances as well as future tax rules to make the best choice among provisions. Taxpayer errors contribute to the tax gap. For example, in 2001 taxpayers underreported $6.3 billion in net income due to misreported Individual Retirement Arrangement (IRA) distributions. But taxpayers also may underclaim benefits to which they are entitled. According to GAO's past analysis, of tax filers who appeared to be eligible for a higher-education tax credit or tuition deduction in tax year 2005, about 19 percent, representing about 412,000 returns, failed to claim any of them. No single approach is likely to fully and cost-effectively address the tax gap, but several strategies could improve taxpayer compliance. These strategies could require actions by Congress or IRS. For example, Congress can simplify the tax code by eliminating some tax expenditures and by making definitions more consistent across the tax code. IRS and Congress could take steps to enhance information reporting by third parties or expand compliance checking before refunds are issued. GAO does not make any new recommendations in this testimony. |
VHA provides health care services to veterans at its 152 VAMCs and associated outpatient clinics. Such care can include providing surgical implants, where required—including biological implants, such as skin grafts, and non-biological implants, such as cardiac pacemakers. Surgeons and other clinicians at VAMCs determine veterans’ needs for surgical implants and provide VAMC prosthetics purchasing staff with documentation showing which item will be implanted in the veteran or has already been implanted (if the surgery has been completed). The prosthetics purchasing system is used to record the purchase of all prosthetics, including surgical implants. Clinicians at four VAMCs cited patient need and their clinical expertise in using available surgical implants as the most important factors influencing their decisions about which surgical implants to use. In addition, certain clinicians stated that the availability of committed-use contracts also influenced their decisions, whereas the availability of FSS contracts generally did not. Many clinicians we interviewed at the four selected VAMCs stated that the specific needs of the patient were one of the two main factors that influenced their decision of which surgical implant to purchase and use. Clinicians stated that even though different types of surgical implants may appear identical to the lay person, there may be differences that can affect patient outcomes. For example, one clinician stated that a flexible stent available through only one vendor was necessary to meet the needs of a patient with arteries that were difficult to navigate.
Many of the clinicians we interviewed stated that the other main factor influencing their decision of which surgical implant to use was their clinical expertise, which they acquired through their training and clinical experience, review of relevant literature, and interaction with vendors who supply surgical implants:

Training and experience: Many clinicians we interviewed stated that their experiences in using certain types of surgical implants during their medical training and/or as a practitioner significantly impacted their decision of which item to purchase and use. For example, one clinician stated that his knowledge of different types of surgical implants, including the different features of various implants, is the result of 30 years of training and experience.

Review of relevant literature: Several clinicians stated that they regularly reviewed relevant literature to help identify the surgical implants with the greatest clinical efficacy. One clinician told us that he recently switched to a different type of skin graft because a review of the literature suggested that this item was more effective than the one he had been using.

Interaction with vendors: Several clinicians told us that they work closely with vendor representatives who supply surgical implants (for example, when implanting a pacemaker, a vendor representative may be involved in testing and programming the device). Further, some clinicians with whom we spoke said that their past experience working with a particular vendor, including whether they believe the vendor representative is knowledgeable and reliable, helps drive their decision of which implant to use. For example, a clinician stated that the reliability of vendors was important in deciding which implant to use, since some implants are not stocked at a VAMC and it is important that implants are delivered by vendors in time for scheduled surgical procedures.
Clinicians who performed surgical procedures for which implants were available on a national committed-use contract stated that they typically used one of the implants available on this type of contract, unless the use of these items was not appropriate—for example, for certain complex clinical indications. As of September 2013, nine types of surgical implants—cardiac pacemakers and leads, implantable cardioverter-defibrillators, cardiac resynchronization treatment devices, remote monitoring devices, coronary stents, artificial hips, artificial knees, cochlear implants, and intraocular lenses—were available through national committed-use contracts. In developing these contracts, VHA solicits input from a group of VHA clinicians to determine which items to place on national committed-use contracts and to review them for clinical efficacy. Several different models of each type of implant are available on national committed-use contracts—for example, the contracts for pacemakers include models from three different vendors—thereby providing clinicians with a choice of which implant to use. In 2011, VHA established a program executive office, including six program management offices. Each of the six program management offices (surgery, medical, clinical support, ancillary, advanced systems, and prosthetics) is expected to collaborate with teams of clinicians and other stakeholders, who can provide insight on identifying certain medical devices for standardization. The program management offices also coordinate with VA’s contracting offices, which are responsible for establishing national committed-use contracts for standardized items. However, VHA has not developed a plan or timeline for establishing new contracts. Without the establishment of national committed-use contracts for a greater number of surgical implants, VHA is not fully leveraging its purchasing power as one of the nation’s largest health care systems.
While the availability of implants via national committed-use contracts positively influenced their use by the clinicians we interviewed, the availability of implants via FSS contracts rarely did so, for a number of reasons. Clinicians at the four VAMCs we visited were often not aware of which surgical implants were available on an FSS contract, even though these contracts are preferred to open-market purchases under the VAAR. For example, some of these clinicians mistakenly believed that a biological implant they were using was on an FSS contract, when in fact it was not. Clinicians at the four VAMCs we visited also lacked awareness of how to obtain information on which implants were available on an FSS contract. In addition, network prosthetics purchasing officials told us that it is difficult for both clinicians and prosthetics purchasing agents to use VHA’s national database of FSS contracts to determine what specific items are available on an FSS contract. For example, the officials stated that very specific terms must be entered in the database’s search field in order to obtain a useful list of items available on FSS contracts. Some clinicians stated that it would be helpful to have a user-friendly list of surgical implants available on FSS contracts, including each implant’s attributes—such as its shelf life and approved clinical indication—for each surgical specialty. Clinicians at the four VAMCs we visited noted that such a list would allow them to consider FSS contract availability in making their decisions on which implant to use. We found that one VAMC we visited had begun developing a list of skin grafts available on FSS contracts, which included their attributes—such as whether the grafts had to be stored in a freezer—to help raise awareness of items available on FSS contracts.
Some clinicians at the four VAMCs we visited also stated that if there is not sufficient data to support the clinical efficacy of an implant on an FSS contract, they would not feel comfortable using it. For example, at one VAMC, a clinician noted that a specific vendor who had an implant on an FSS contract questioned why the implant was not being purchased. The clinician asked for clinical efficacy data, which—according to the clinician—was never provided, and therefore the clinician decided not to use the item. Senior VHA prosthetics and contracting officials stated that clinicians are not involved in determining which surgical implants should be available on FSS contracts, and therefore the items available on FSS contracts may not always be those that clinicians prefer to use based on their expertise and experience. Additionally, these officials expressed concern that an evaluation of the clinical efficacy of an item is not conducted in awarding FSS contracts. These officials further stated that VHA prefers to develop national committed-use contracts, for which a clinical efficacy evaluation is conducted. The director of a VHA clinical program that uses surgical implants further emphasized that clinician decisions on which implants to use should be based on proven clinical efficacy rather than inclusion on an FSS contract. We found that among the four VAMCs we visited: (1) none fully complied with requirements for obtaining waivers for open-market purchases of surgical implants; (2) none fully complied with additional requirements for documenting open-market purchases, which are part of VHA’s new process for surgical implant purchases over $3,000; and (3) three of the four did not comply with a requirement related to consignment agreements with surgical implant vendors. None of the four VAMCs we visited fully complied with VHA requirements for obtaining waivers for open-market purchases of surgical implants. 
We found that one VAMC partially complied with each requirement, while the others did not comply. Current VHA requirements applicable to the four VAMCs stipulate that all open-market purchases of non-biological implants require a waiver approved by the VAMC Chief of Staff—regardless of the purchase price—when a comparable item would have been available through a national committed-use contract, and all open-market purchases of biological implants require a waiver approved by VHA’s Procurement and Logistics Office—regardless of the purchase price—when a decision is made to purchase an item from the open market rather than from an FSS contract. On each of these waivers, VAMCs are required to document why the clinician chose not to use a surgical implant that is available on an existing, higher-priority contract—for example, because the clinician did not believe that the implant met a patient’s need or because the clinician had concerns about the quality of the items available on contract. Specifically, compliance with these waiver requirements was as follows at the four VAMCs we visited: We found that only one of the four VAMCs we visited partially complied with the waiver requirement for open-market purchases of non-biological implants, when a comparable item was available through a national committed-use contract. At this VAMC, we found that waivers were obtained for 27 of the 30 purchases we reviewed; however, 20 of the 27 waivers were incomplete, as they lacked the approving signature from the VAMC Chief of Staff. A VAMC official told us that the Chief of Staff had granted certain clinicians the authority to approve their own waivers for open-market purchases; however, VHA prosthetics officials told us that the Chief of Staff may not delegate this authority and the VAMC was unable to provide documentation to support this claim. The other three VAMCs did not comply with the requirement, meaning that they did not obtain waivers for any of the purchases we reviewed. 
Two of these three VAMCs had a relatively high percentage of open-market purchases of non- biological implants in the first two quarters of fiscal year 2013 based on VHA data (33 percent and 35 percent, respectively) and one had a low percentage of such purchases (2 percent). Officials at two of the VAMCs told us that these waivers were not routinely obtained because they were focusing on other priorities instead, such as implementing VHA’s new process for surgical implant purchases over $3,000. At the third VAMC, a prosthetics official told us that clinicians did not always fill out a waiver when asked to do so. Furthermore, because certain clinicians practice at the VAMC on a very limited basis, the official said it is difficult to ensure that they are aware of the waiver requirement and comply with it. We found that only one of the four VAMCs we visited partially complied with the waiver requirement for biological implants when a decision is made to purchase an item from the open market rather than from an FSS contract, whereas the other three VAMCs did not comply. Officials from the three VAMCs that did not comply with the requirement told us that they did not obtain these waivers because they did not understand under what circumstances a waiver is required or were focusing on other priorities instead—such as implementing VHA’s new process for surgical implant purchases over $3,000. The VAMC that partially complied with the waiver requirement for the purchases we reviewed had waivers on file, but the waivers did not include the required justification as to why a waiver was needed and did not have the required approval from VHA’s Procurement and Logistics Office. Prosthetics purchasing officials at this VAMC told us that they had only recently begun to establish compliance with this waiver requirement. 
Without consistent and appropriate completion of waivers for open-market purchases of surgical implants, VHA lacks information regarding why surgical implants were not being purchased from higher-priority sources such as a national committed-use contract or an FSS contract. This information would help VHA determine whether clinicians raised concerns about the quality or efficacy of the items available through these existing contracts, which could help VHA improve future implant purchases, and it could provide a basis for holding clinicians accountable for complying with VHA requirements and procurement best practices. The FAR provides several different methods for determining whether the vendor’s price is fair and reasonable when using simplified acquisition procedures. Where possible, the contracting officer is to base price reasonableness on competitive quotations or offers. However, if only one response is received, the FAR requires a statement of price reasonableness in the contracting file, which may be based on such things as conducting market research, comparing the quoted price with the price of similar items in a related industry, or comparing the quoted price with prices found reasonable on a previous purchase, among other methods. See 48 C.F.R. § 13.106-3(a). Sole-source purchases may also be supported by VA’s special authority under 38 U.S.C. § 8123 to acquire prosthetics without regard to other provisions of law. Depending on the VAMC, the documentation requirements under VHA’s new process for surgical implant purchases over $3,000 were completed either by VAMC prosthetics purchasing staff or by a contracting officer at the corresponding NCO. Compliance with these requirements was as follows at the four VAMCs we visited: Fair and reasonable price determination: The fair and reasonable price determination was not on file for between 7 and 36 percent of the applicable purchases we reviewed at three of the four VAMCs. It was on file for all applicable purchases at the fourth VAMC.
However, we found that the documentation on file did not provide reasonable assurance that the price was fair and reasonable for the majority of purchases we reviewed. We found multiple instances where the documentation indicated that the determination was based on “prior experience purchasing similar items” but did not cite any prior pricing information. Not citing any prior pricing information leaves open questions about the thoroughness of the analysis conducted to determine price reasonableness. We also identified cases where the contracting officer or prosthetics purchasing agent documented the price as “fair and reasonable” when it fell within a broad range of prices. For example, a contracting officer determined that about $6,000 for a bone graft purchased from the open market was fair and reasonable because the sales prices for “similar” items were typically between $3,000 and $20,000. VAMC and NCO officials we interviewed stated that it is difficult to determine price reasonableness for surgical implants, in part because they need to be knowledgeable about the exact characteristics of the item in question to effectively evaluate price reasonableness. They also said they believed that as contracting officers gain more experience in purchasing surgical implants, they will learn to more effectively complete this documentation. Furthermore, an NCO official stated that specific guidance from VHA on how to effectively complete the fair and reasonable price determination for surgical implants would be helpful. Ineffective market research and determination of price reasonableness may result in VHA overpaying for surgical implants. Justification for other than full and open competition (JOFOC): During our review of selected open-market surgical implant purchases over $3,000, we found two issues with respect to those purchases made on a sole-source basis for which a JOFOC was required under VHA’s new purchasing process.
First, the JOFOC was not on file for between 5 and 29 percent of the applicable purchases we reviewed at the four VAMCs we visited. Second, at three of the VAMCs we visited, we found that the officials responsible for completing the JOFOC selected the “unusual and compelling urgency” justification for each of the purchases we reviewed, even though some officials told us that they did not have a full understanding of which justifications applied. Specifically, we found the following: Officials from two of these VAMCs told us that they were unclear about which justification to cite on the JOFOC for each purchase because they believed VHA guidance on this matter was insufficient. For example, one official responsible for completing the JOFOC stated that he did not understand the difference between the justifications and therefore picked the one that made the most sense to him. Officials at another VAMC stated that they chose the “unusual and compelling urgency” justification because the contracting package for surgical implants is typically completed after the item has already been implanted and therefore this justification made the most sense to them. Furthermore, officials at this VAMC stated that a VHA draft directive on surgical implant purchasing discourages the use of VA’s acquisition authority under 38 U.S.C. § 8123 to justify sole-source awards of surgical implants on the open market. Because they were discouraged from using this justification on the JOFOC, they tended to rely on the “unusual and compelling urgency” justification instead. Moreover, while VHA requires each JOFOC to include sufficient facts and rationale to support why a specific justification was cited, only one of the four VAMCs fully provided this information for any of the purchases we reviewed. At two VAMCs, no rationale was provided for the justification selected. 
Furthermore, the JOFOCs at these two VAMCs were frequently missing required information, such as the contracting officer’s signature, or were not completed until several months after the purchase. At the fourth VAMC, each JOFOC for the purchases we reviewed typically included a generic paragraph stating that the purchase was justified based on the determination of the clinician who requested the implant, regardless of which justification was cited. VAMC and NCO officials told us that clinicians’ surgical implant purchase requests often do not contain sufficient information to support why the clinician was requesting the particular item. Because the JOFOCs we reviewed were often incomplete, VHA lacks assurance that support existed for the sole-source awards pursuant to VHA’s policies at these four VAMCs. As we discuss later, VA and VHA both identified more widespread areas of noncompliance (in the areas we describe above)—beyond the four VAMCs we visited—through oversight efforts they conducted at numerous VAMCs and NCOs nationwide. For example, VA’s oversight identified extensive noncompliance, including inadequate fair and reasonable price determinations and missing or improperly completed JOFOCs. VHA officials stated that they were familiar with the challenges that VAMCs and NCOs were experiencing in complying with the requirements of VHA’s new process for surgical implant purchases over $3,000 and acknowledged that some of these challenges stem from a lack of sufficient guidance for implementing this new process. To address these challenges, VHA officials stated that VHA has developed additional guidance to facilitate the purchase of these items. 
This guidance includes (1) revised JOFOC templates to help ensure that the correct purchasing justification is cited and that an appropriate rationale is provided for using each justification and (2) additional guidance on completing the fair and reasonable price determination, which, as of November 2013, had not yet been issued. Moreover, VHA contracting officials stated that VHA plans to provide training on the documentation of purchases over $3,000; however, as of November 2013, this training had not yet been approved. In addition, VHA is developing a directive and associated standard operating procedure that outline the processes VAMCs and NCOs are expected to follow when completing contracting packages for surgical implant purchases over $3,000, including a JOFOC in the event of a sole-source purchase. As of November 2013, VHA was still in the process of providing its draft directive and standard operating procedure to stakeholders for review, and there are no established timelines for finalizing this directive and standard operating procedure, according to VHA officials. VHA officials stated that there had been disagreement between VHA prosthetics and contracting officials concerning the details of the directive and standard operating procedure, for example, which specific requirements would be the responsibility of VAMC prosthetics purchasing staff and which would be the responsibility of NCO contracting officers. According to VHA officials, these disagreements delayed the completion and dissemination of these documents to VAMCs and NCOs. VHA officials also told us that they are considering ways to streamline the documentation requirements for purchases over $3,000 in response to concerns from VAMCs and NCOs about the resources required to complete a contracting package for each purchase and potential delays in making purchases for surgical implants, which could result in the postponement of needed medical procedures. 
For example, VHA officials stated that they are considering providing VAMC prosthetics purchasing staff with ordering officer delegations, which would allow them to purchase surgical implants from national committed-use contracts, without having to work with a contracting officer to complete a contracting package for such a purchase. They told us that this would free up contracting officers’ time to work on documenting open market purchases and help ensure that these purchases are appropriately documented. However, VHA officials told us that they were still in the process of evaluating the feasibility of delegating ordering authority to prosthetics purchasing staff, including addressing technical challenges with the prosthetics purchasing system, which they said currently prevent VHA from effectively implementing this change. Again, no timelines have been established to complete streamlining efforts, according to VHA officials. Officials at three of the four VAMCs we visited told us that the VAMCs had purchasing agreements with open-market vendors to provide surgical implants to the VAMCs on consignment, but that these agreements were not always in compliance with a VHA requirement that consignment agreements be authorized by a VHA contracting officer. Under a consignment agreement, the vendor maintains vendor-owned items at the VAMC, and the VAMC purchases only the items actually used. VAMC and network officials told us that clinicians likely made these unauthorized agreements with vendors to ensure that they had timely access to the surgical implants that they preferred to use. However, they could not tell us when these agreements had been established, who had authorized them, and what the terms of the agreements were. These officials told us that the unauthorized agreements with vendors have resulted in unauthorized commitments, vendors not being paid in a timely manner, and surgical implants not being tracked in the VAMCs’ inventories. 
Furthermore, because these unauthorized agreements covered surgical implants from the open market, and the purchase prices were not negotiated, the VAMCs may have overpaid for these items. To address these unauthorized consignment agreements, officials at each of these VAMCs or the corresponding networks stated that they were in the process of establishing authorized consignment agreements that include an agreed-upon price and quantity for each surgical implant. Having authorized consignment agreements in place may be useful in instances where the requirement for a surgical implant is immediate and it is not possible to predetermine which of several types or models are required. Establishing such agreements involves determining the types of surgical implants that clinicians need to have available on a consignment basis and obtaining a contracting officer’s authorization for the VAMC to enter a consignment agreement with the vendors. At the time of our review, VHA was developing a standard operating procedure to assist VAMCs and NCOs in developing authorized consignment agreements; however, as of November 2013, VHA officials were unsure when it would be finalized and rolled out to the VAMCs. As a result of their oversight efforts, VA and VHA found noncompliance with surgical implant purchasing requirements, but did not ensure that NCOs or VAMCs took corrective action. Furthermore, VHA assesses how each VAMC is performing on key aspects of the surgical implant purchasing process, but it did not ensure that VAMCs took corrective action to address deficiencies identified or require networks to do so. Oversight of VAMCs’ and NCOs’ compliance with VHA’s new process for surgical implant purchasing over $3,000 includes efforts by both VA’s Office of Acquisition and Logistics and VHA’s Office of Procurement and Logistics. However, this oversight was not fully effective because neither VA nor VHA ensured that corrective action was taken to address noncompliance. 
Recent VA and VHA oversight activities that identified significant noncompliance issues included the following: VA oversight: In February 2013, VA’s Office of Acquisition and Logistics began assessing compliance with its January 9, 2013, memorandum, which required VAMCs that had not fully implemented VHA’s new process for surgical implant purchasing over $3,000 to complete an abbreviated contracting package, including a fair and reasonable price determination and, if applicable, a JOFOC. This oversight effort, which examined several hundred purchases between February 2013 and August 2013, identified extensive noncompliance, including inadequate fair and reasonable price determinations and missing or improperly completed JOFOCs, which were similar to the issues we identified in our assessment. VHA oversight: In April 2013, VHA began conducting oversight of surgical implant purchases over $3,000 for the seven NCOs (and the VAMCs associated with those NCOs) that had fully implemented the new process. Additional oversight is planned for fiscal year 2014 for all NCOs. An audit team from VHA’s Procurement and Logistics Office is using a checklist consisting of nine questions, which assess various requirements of the purchasing process, such as whether the fair and reasonable price determination and a JOFOC, if applicable, is on file for each purchase. VHA’s oversight identified noncompliance including various missing or incomplete documentation in the contracting packages for surgical implant purchases over $3,000. While VA and VHA both conducted oversight and identified instances of noncompliance, they did not ensure that corrective action was taken. Consistent with the federal internal control standard for monitoring, which states that actions should be taken promptly in response to findings or recommendations, VA and VHA should have taken steps to ensure that noncompliance is addressed in a timely manner. A senior official from 
VHA’s Procurement and Logistics Office told us that VA’s Office of Acquisition and Logistics did not provide VHA with information on the VAMCs at which noncompliance was identified, which—according to the official—would have allowed VHA to take steps to address noncompliance at the appropriate VAMCs. The official also told us that VHA’s oversight is largely intended to be consultative in nature and that VHA’s Procurement and Logistics Office is not sufficiently staffed to ensure that corrective action is taken. Therefore, while VHA asked NCOs to correct areas of noncompliance, VHA did not require NCOs to document how they addressed, or plan to address, noncompliance identified in VHA oversight activities. Because neither VA nor VHA has taken steps to ensure that corrective actions are taken to address VAMC and NCO noncompliance, they lack assurance that this noncompliance is being appropriately addressed. See GAO/AIMD-00-21.3.1 and GAO-01-1008G. VHA assesses how each VAMC within VHA’s 21 networks performs on metrics established for surgical implant purchasing, and provides the results of its assessments to the prosthetics representative at each network. According to VHA prosthetics officials, VHA’s assessments cover the following metrics: the extent to which VAMCs purchased surgical implants from a national committed-use contract or obtained a waiver allowing clinicians to use an alternative item; the extent to which VAMCs entered the serial number and lot number for each surgical implant purchase; the timeliness of each surgical implant purchase—that is, the time from which a clinician requests a surgical implant to the time the item is purchased; the extent to which clinicians’ purchase requests for surgical implants have been fulfilled; and the extent to which amounts obligated for surgical implant purchases have been recorded in the prosthetics purchasing system. 
Network prosthetics representatives at two of the four networks we visited told us that they regularly monitored the results from VHA’s assessments and took steps to ensure that VAMCs address deficiencies VHA identified, such as correcting data entry errors. At the other two networks, network prosthetics representatives did not take such steps. In both networks that did not ensure that VAMCs address deficiencies, VHA’s metrics identified a relatively high rate of noncompliance with surgical implant purchases from national committed-use contracts (28 percent at one network and 13 percent at the other network in the first three quarters of fiscal year 2013). At one of these networks, this noncompliance included a high percentage of purchases missing serial numbers or lot numbers (16 percent in the first three quarters of fiscal year 2013). The failure to address deficiencies uncovered through VHA’s assessments may lead to higher costs for VHA and may have patient safety implications. For example, not recording the serial number or lot number for a surgical implant makes it difficult to systematically determine which veteran received an implant subject to a subsequent manufacturer or Food and Drug Administration recall. As of November 2013, VHA did not have a policy governing how deficiencies identified through these assessments should be addressed. Accordingly, VHA officials told us that they have not required VAMCs to address deficiencies—i.e., if VAMCs do not meet an established threshold for each metric—nor did they require network prosthetics representatives to do so. Consistent with the federal internal control standard for monitoring, VHA or networks should establish a process to ensure that deficiencies are addressed. According to VHA officials, the directive on surgical implant purchasing that VHA is developing will require network prosthetics representatives to review each metric and ensure that deficiencies are addressed. 
To its credit, VHA has established national committed-use contracts for a number of surgical implants, which can help VHA effectively leverage its purchasing power. We found that an implant’s availability on this type of contract positively influenced clinicians’ decisions to use that implant, but these types of contracts are only available for a limited number of surgical implants. Establishing national committed-use contracts for a greater number of commonly used surgical implants could help reduce the number of open-market purchases of these items and ultimately reduce the costs for VA. Furthermore, steps could be taken to improve clinicians’ awareness of high quality surgical implants available on FSS contracts, which may also lead to a further reduction in open-market purchases. While VHA has requirements in place to document surgical implant purchases from the open market, selected VAMCs did not fully comply with these requirements and VA and VHA’s own oversight found similar issues at other VAMCs nationwide. Greater compliance with these requirements would provide VHA with information needed to determine why VAMCs are purchasing surgical implants from the open market; it would also help provide assurance that VHA is paying a fair and reasonable price for surgical implants. VA’s and VHA’s oversight of compliance with surgical implant purchasing requirements does not ensure that corrective action is taken to address identified noncompliance, which could lead to potentially serious issues remaining unaddressed. For example, while VHA identified that at one VAMC a high number of serial numbers for surgical implants were not recorded—as required—in the prosthetics purchasing system, VHA did not ensure that this problem was addressed, resulting in potential patient safety issues remaining unresolved. 
Providing effective oversight of surgical implant purchasing, which includes ensuring that noncompliance is addressed, would help VA and VHA improve compliance with applicable requirements, while potentially lowering costs for surgical implants and improving patient safety. To expand the volume of surgical implants purchased from existing, higher-priority contracts and to improve compliance and oversight related to purchasing requirements, we recommend that the Secretary of the Department of Veterans Affairs take the following five actions:

Create a plan that includes timelines for evaluating the benefits of developing additional national committed-use contracts for surgical implants and establishing these contracts.

Explore options to increase clinicians’ awareness of high quality surgical implants available on FSS contracts, including developing a user-friendly list for VAMC clinicians of surgical implants available on FSS contracts for each surgical specialty.

Re-emphasize to VAMCs that waivers must be completed for open-market purchases of surgical implants, provide clear guidance to VAMCs on when and how to complete these waivers, and establish internal controls to ensure VAMCs’ compliance with waiver requirements.

Provide additional training to VAMCs and NCOs on how to properly document open-market purchases of surgical implants over $3,000, including those awarded on a sole-source basis.

Enhance information sharing on noncompliance between VA and VHA and revise existing guidelines to require that VAMCs and NCOs document the measures they are taking to address noncompliance and report their progress (via corrective action plans) in achieving those measures through the VHA and VA management chains of command.

VA provided written comments on a draft of this report, which we have reprinted in appendix III. 
In its comments, VA generally agreed with our conclusions, concurred with our five recommendations, and described the department’s plans for implementing each of our recommendations. For example, VA will create a plan for evaluating the benefits of developing additional national committed-use contracts for surgical implants; develop and disseminate a list of surgical implants available on FSS contracts; emphasize the FSS waiver process through webinar trainings and standard operating procedures guidance; develop a checklist for proper documentation of open-market surgical implant purchases over $3,000; and require documentation of measures taken to address noncompliance identified in audits. VA also provided a technical comment, which was incorporated as appropriate. We are sending copies of this report to appropriate congressional committees and the Secretary of Veterans Affairs. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To conduct our work, we visited Department of Veterans Affairs (VA) medical centers (VAMC) in four locations: Indianapolis, Indiana; Los Angeles, California; New York, New York; and Seattle, Washington. We selected these four VAMCs because they are all classified by the Veterans Health Administration (VHA) as surgically complex facilities, serve large veteran populations, and are located in different regional networks (see table 2). 
To examine the factors clinicians consider when choosing which surgical implant to use when multiple, similar types of implants are available, we interviewed, at each of the four VAMCs, at least one clinician from each of the following six clinical specialties that typically use surgical implants: cardiology or cardiac surgery; general surgery; orthopedic surgery; neurosurgery; podiatry; and vascular surgery. In total, we interviewed 28 clinicians. During each interview, we discussed the factors that affected clinicians’ surgical implant purchase decisions and their knowledge of surgical implants available on national committed-use contracts and federal supply schedule (FSS) contracts. In addition, we interviewed staff involved in managing surgical implants at the VAMCs, such as the operating room nurse manager, surgical implant coordinator, or sterile processing service technician. To assess compliance at the four selected VAMCs with pertinent VHA requirements for documenting surgical implant purchases from the open market, we first identified the applicable requirements. To do so, we reviewed VA’s statutory authority to acquire prosthetics, including surgical implants; applicable federal regulations, including the Federal Acquisition Regulation (FAR) and the VA Acquisition Regulation (VAAR); and implementing guidance issued by VA and VHA with respect to surgical implant purchasing. To learn more about VA’s and VHA’s surgical implant purchasing processes and, in particular, surgical implant purchases from the open market, we also interviewed procurement and prosthetics officials at VA and VHA. For purposes of our review, we selected pertinent VHA requirements for documenting surgical implant purchases from the open market. 
We assessed compliance with these requirements at the four selected VAMCs as follows: To assess compliance with the requirement that a waiver be obtained and approved by the VAMC Chief of Staff for open-market purchases of non-biological implants, for which a comparable product is available on a national committed-use contract, we selected 20 purchases at one VAMC and 30 purchases at another VAMC made between October 2012 and March 2013 for which data from VHA’s National Prosthetics Patient Database (NPPD) indicated that such a waiver was obtained. These selected purchases represented about 35 and 80 percent, respectively, of all applicable purchases at each VAMC during this time period. We subsequently reviewed whether those waivers were, in fact, on file at the two VAMCs and whether they were complete. We selected the purchases for our review to obtain a diverse selection of purchases from different vendors. At the two other VAMCs we visited, the NPPD data indicated that no waivers had been obtained, even though there were open-market purchases of non-biological implants, for which an alternative product was available on a national committed-use contract. At these two VAMCs, we interviewed VAMC and network prosthetics staff about why the waivers had not been obtained. To assess compliance with the requirement that a waiver be obtained when a decision is made to purchase a biological implant from the open market rather than from a Federal Supply Schedule (FSS) contract, at each VAMC, we selected 20 to 30 purchases of biological implants made between October 2012 and March 2013 which, based on NPPD data, appeared to have been purchased from the open market because the NPPD data did not indicate the purchase was associated with a contract, and assessed whether a waiver was on file for those purchases, and if so, whether the waiver was complete. We selected the purchases for our review to obtain a diverse selection of purchases from different vendors. 
We selected 110 purchases in total, representing between about 9 percent and about 83 percent of all applicable purchases meeting our selection criteria at each VAMC during this time period. To assess compliance with VHA’s requirements for open-market surgical implant purchases over $3,000, including (1) a statement affirming that the vendor’s quoted price is fair and reasonable and (2) a justification for other than full and open competition (JOFOC) in the case of a sole-source award, we reviewed a selection of open-market purchases over $3,000 from each VAMC to determine whether the required documentation was on file and whether it was complete. At three of the four VAMCs, which had implemented VHA’s new process for surgical implant purchases over $3,000 by the beginning of fiscal year 2013, we selected 30 purchases made between October 2012 and March 2013 which, based on NPPD data, appeared to have been purchased from the open market because the NPPD data did not indicate that the purchase was associated with a contract. At the fourth VAMC, we selected 15 purchases made between January 9, 2013, and March 31, 2013, which, based on NPPD data, appeared to have been purchased from the open market, because this VAMC had not yet implemented VHA’s new process for surgical implant purchasing over $3,000 and therefore was not required to complete the VHA requirements until January 9, 2013. We selected the purchases for our review to obtain a diverse selection of purchases from different vendors. We selected 97 purchases in total, representing between about 6 percent and about 8 percent of all applicable purchases at each VAMC during this time period. To ensure that the NPPD data we used to select purchases for review were sufficiently reliable for our purposes, we conducted a data reliability assessment of the data that we used, which included checks for missing values and interviews with a VHA official knowledgeable about the data. 
We restricted these assessments, however, to the specific variables that were pertinent to our analyses. Our review revealed some inconsistencies and errors in the data that were attributable to data entry errors and omissions. Overall, however, we found that all of the data were sufficiently reliable for selecting our purchases for review. We cannot generalize our findings on compliance beyond the purchases we reviewed. We also reviewed documentation and interviewed VAMC and network officials at the four VAMCs we visited regarding compliance with these requirements. To examine VA and VHA’s oversight of compliance with surgical implant purchasing requirements, we obtained and analyzed VA and VHA documentation on existing or planned monitoring activities, including audit plans and reports documenting deficiencies in VAMCs’ purchasing of surgical implants; interviewed VA and VHA procurement and prosthetics officials; and assessed VA and VHA’s monitoring activities in the context of federal standards for internal control. The internal control standard for monitoring refers to an agency’s ability to assure that ongoing review and supervision activities are conducted, with the scope and frequency determined by the level of risk; deficiencies are communicated to at least one higher level of management; and actions are taken in response to findings or recommendations within established timelines. We conducted this performance audit from April 2013 to January 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
According to Veterans Health Administration (VHA) officials, in 2008, the Department of Veterans Affairs’ (VA) Office of Information Technology began developing the Veterans Implant Tracking and Alert System (VITAS), which was designed to track and retrieve identifying information—including the lot and serial number—of surgical implants placed in patients VHA-wide. VITAS was developed to address shortcomings in VHA’s existing ability to track surgical implants, which may limit VHA’s ability to identify and locate patients who received an implant in the event of a manufacturer or Food and Drug Administration recall. According to VHA, these shortcomings include the following: As we noted previously in this report, the lot number and serial number of items implanted in patients are not always entered into the prosthetics purchasing system, as required. While VHA clinicians from most specialties track identifying information of items implanted in their patients using standalone systems or spreadsheets that are particular to the clinicians’ specialties, VA found that information on surgical implants recorded in these systems is neither standardized nor shared across VAMCs. Furthermore, VA found that identifying information on surgical implants used in certain clinical specialties, including gastroenterology, interventional radiology, and pulmonary, is not tracked in any system. According to VA and VHA officials involved in the development of VITAS, the development of this system was suspended as of the end of fiscal year 2012 due to data-reliability and interoperability challenges. As of December 2013, VA and VHA had not decided whether to resume the development of VITAS. In addition to the contact named above, Kim Yamane, Assistant Director; Ashley Dixon; Cathleen Hamann; Jennifer Whitworth; and Michael Zose made key contributions to this report. 
VHA spending on surgical implants—such as stents and bone and skin grafts—has increased to about $563 million in fiscal year 2012. Clinicians at VAMCs determine veterans' needs and request implant purchases either from a contract or from the open market (i.e., not from an existing contract). VHA requirements—which implement relevant federal regulations—include providing justifications for open-market purchases. GAO was asked to evaluate implant purchasing by VHA. This report examines (1) factors that influence clinicians' decisions to use particular implants when multiple, similar items are available; (2) selected VAMCs' compliance with pertinent VHA requirements for documenting open-market purchases; and (3) VA's and VHA's oversight of VAMC compliance with implant purchasing requirements. GAO visited four VAMCs that serve large veteran populations and are dispersed geographically. GAO interviewed clinicians at the VAMCs, reviewed pertinent statutes, regulations, and policies and reviewed a sample of implant purchases from different vendors. These results cannot be generalized to all VAMCs but provide insights. GAO also interviewed VA and VHA officials and reviewed agency documents. Clinicians at the four Department of Veterans Affairs Medical Centers (VAMC) GAO visited said that patient need and their clinical expertise were the main factors influencing their decisions of which surgical implants to use. Also, clinicians in certain specialties said they typically used one of the implants available on VA-negotiated national committed-use contracts, which generally establish a fixed price for several models of nine types of surgical implants that the Veterans Health Administration (VHA) commits to using nationally. VHA recognizes the need for expanding items covered under these contracts to fully leverage its purchasing power but, as of October 2013, had not identified additional implants to include on such contracts or established timelines for doing so. 
GAO also found that the availability of implants on VA-negotiated federal supply schedule (FSS) contracts rarely influenced clinicians' decisions on which implant to use. Clinicians were often not aware of the availability of surgical implants on FSS contracts, which are negotiated by one of VA's contracting offices but into which VHA clinicians have little or no input. Clinicians told GAO that in some cases they may avoid implants on FSS contracts due to their concerns about the quality of these items. In regard to compliance with VHA's requirements for justifying open-market purchases of surgical implants, which VHA adopted to promote adherence to relevant federal regulations, GAO found the following: None of the four VAMCs fully complied with requirements for obtaining waivers for open-market purchases of surgical implants because they were focusing on other priorities or lacked awareness of the requirements, among other factors. None of the four VAMCs fully complied with additional requirements for documenting open-market purchases that are part of a new process VHA implemented in fiscal year 2013 for surgical implant purchases over $3,000. VAMC and regional office officials attributed noncompliance mainly to insufficient VHA guidance and VA staff's inexperience in completing these requirements. Three of the four VAMCs did not comply with a VHA requirement pertaining to agreements with vendors that provided surgical implants to them on consignment. These agreements, which clinicians likely established to ensure timely access to implants, do not comply with a VHA requirement that consignment agreements must be authorized by a VHA contracting officer. The Department of Veterans Affairs (VA) and VHA have begun conducting oversight of surgical implant purchases over $3,000 to assess compliance with VHA's new requirements. 
However, VHA officials told GAO that VA and VHA have not ensured that corrective action has been taken to address identified noncompliance because of poor communication between VA and VHA and insufficient staffing to follow up on identified issues. Furthermore, VHA assesses each VAMC's performance on metrics established for surgical implant purchasing, but it does not have a policy governing how any identified deficiencies should be addressed or the corrective actions to be taken by VAMCs and VHA's regional networks. GAO recommends that VA identify implants and establish a timeline to expand the volume that can be purchased from VA-negotiated contracts and improve compliance with and oversight of purchasing requirements. VA concurred with these recommendations. |
In January 2013, the NDAA for Fiscal Year 2013 was enacted into law. Section 955 of the act required the Secretary of Defense to: develop and implement a plan to achieve savings in the total funding for civilian and contractor workforces that are not less, as a percentage, than savings in funding for basic military personnel pay resulting from reductions in military end strengths from fiscal years 2012 through 2017; ensure that the plan is consistent with policies and procedures required by 10 U.S.C. § 129a, and ensure that the savings are not achieved through unjustified transfers of functions between or among the military, civilian, and service contractor personnel workforces of DOD, consistent with authorities available to the department under sections 129a, 2330a, 2461, and 2463 of Title 10 of the United States Code; provide status reports describing the implementation of the plan in the prior year as part of the budget submitted by the President to Congress for each of fiscal years 2015 through 2018; in each status report, provide a summary of savings achieved through personnel reductions and the number of military, civilian, and contractor personnel reduced in the prior fiscal year; and in each status report, include an explanation where any savings fall short of the annual target. Further, section 955 gives DOD authority to exclude certain civilian and contractor workforces from the required reductions. These exclusions are to be related to functions identified as core or critical to the mission of the department. For example, in DOD’s initial and subsequent status reports, DOD excluded the acquisition workforce, its cyber workforce, and medical workforce, among others. DOD’s reports were developed by the OUSD (Comptroller), which is the department’s principal advisor on budgetary and financial matters and is responsible for directing the development and overseeing the execution of DOD’s annual budget. 
In December 2015 we reported that for DOD’s initial report submitted to Congress in September 2014 and its first status report submitted in February 2015, DOD did not fully address most statutory requirements identified in section 955 of the NDAA for Fiscal Year 2013. Section 955 has six requirements, and we reported that DOD had partially addressed three requirements and did not address the other three requirements. For example, DOD’s September 2014 report to Congress partially addressed the requirement to develop and implement a plan for achieving savings by outlining reductions in its civilian and contracted services workforces, but DOD did not describe a process for implementing the planned reductions. Also, DOD did not address the savings that the department intended to achieve through reductions in the number of military, civilian, and contracted services personnel. Instead, the report outlined reductions in full-time equivalent positions, and it did not outline savings in funding for the contracted services workforce beyond fiscal year 2015. See table 1 for our previous assessment of statutory requirements addressed in DOD’s September 2014 and February 2015 reports on section 955 of the NDAA for Fiscal Year 2013. Based on our December 2015 assessment of DOD’s reports against these requirements, we made a number of recommendations for information to be included in status reports to be submitted in fiscal years 2017 and 2018. These recommendations are intended to help ensure that Congress has the necessary information to provide effective oversight of DOD’s workforces; they include providing additional cost savings data and an efficiencies plan, among other things. DOD concurred with all of our recommendations and stated that it would take action in its future reports. DOD’s February 2016 status report presents some, but not all, savings data for the military, civilian, and contracted services workforces. 
Section 955 of the fiscal year 2013 NDAA requires DOD to achieve savings in the total funding for the civilian and contract services workforces from fiscal year 2012 through 2017 that are not less, as a percentage, than the savings in funding for basic military personnel pay achieved from reductions in military end strengths over the same period. Further, section 955 requires DOD to submit with its annual budget submission an annual status report in fiscal years 2015 through 2018. DOD’s status reports are to include a summary of the savings achieved in the prior fiscal year, in both costs and numbers of personnel, for military, civilian, and contracted services personnel. Section 955 also provides DOD with the authority to grant exclusions to civilian workforces identified as core or critical to the mission. See table 2 for a summary of DOD’s compliance with selected section 955 reporting requirements. DOD’s February 2016 status report includes actual military average end strength and civilian workforce full-time equivalent savings data for fiscal years 2012 through 2015, or the most current actual data, but it does not include fiscal years 2013 through 2015 cost savings data for the military and civilian workforces. DOD’s status report also includes actual contracted services cost data from fiscal years 2012 through 2015, but it does not include contracted services personnel data for any of the fiscal years. See table 3 for DOD-reported military, civilian, and contracted services savings from fiscal years 2012 through 2015. Officials stated that DOD did not include civilian and military workforce costs for fiscal year 2015 because DOD interpreted the statute as requiring it to report only the civilian cost savings achieved when comparing costs from fiscal year 2012 to fiscal year 2017, and not each fiscal year in between. 
Further, officials stated that they did not include contractor full-time equivalent data for fiscal years 2012 through 2015 because, for section 955 and budget purposes, DOD does not measure contracted services by contractor full-time equivalency. The department did report contractor full-time equivalents in its annual Inventory of Contracted Services, which provides data on contract service execution. Officials stated that the department continues to institutionalize the capabilities associated with the Enterprise-wide Contractor Manpower Reporting Application (ECMRA) across all its components, which will continue to improve reporting of contractor full-time equivalents for the prior fiscal year. For the budget years, officials stated that DOD will continue to measure the dollar amounts budgeted for contracted services. We reported in November 2015 that DOD continues to face challenges in implementing its ECMRA system. According to DOD officials, implementation of the system began in 2011, but it has yet to be fully institutionalized, and officials were unable to provide a final implementation date. Further, officials stated that DOD did not include full-time equivalents (FTEs) for contracted services in the section 955 report because they were unable to provide an accurate number. We recommended in December 2015 that DOD include in its status reports the costs in civilian personnel and military basic pay for fiscal years 2012 through 2017. DOD concurred with that recommendation, and partially implemented it in its February 2016 status report. In that status report, DOD compared costs from fiscal year 2012 with those estimated to be achieved in fiscal year 2017, but it did not include costs associated with reductions for each fiscal year from 2012 through 2017. 
Furthermore, although DOD included full-time equivalent data from fiscal years 2012 through 2015, we have reported that reductions in full-time equivalents may not be a reliable measure of the costs of the civilian workforce. For example, while FTEs may go down, costs may go up due to a variety of factors, including annual automatic pay increases. Without DOD’s fully implementing the recommendation to include cost savings information for the prior fiscal year, as required by section 955, Congress may not know whether DOD is on track to meet its reduction requirements. As such, we believe our previous recommendation is still valid. DOD estimates that it will meet its statutory requirement to reduce civilian personnel costs in fiscal year 2017, but that it will not meet its requirement to reduce contracted services costs. Section 955 of the Fiscal Year 2013 NDAA states that DOD must reduce its civilian and contractor workforce costs for fiscal years 2012 through 2017 at a rate that is not less, as a percentage of such funding, than the reduction in military personnel basic pay costs. DOD reports that it will reduce military personnel costs by 6.4 percent from fiscal year 2012 through fiscal year 2017, and it estimates that it will reduce civilian personnel costs by 7.1 percent over the same time period. However, DOD does not account for exclusions in its civilian workforce savings calculation, as it uses the average full-time equivalent pay across the civilian workforce. As noted above, section 955 gives DOD the authority to exclude segments of the civilian workforce from the required reductions. DOD excluded approximately 71.6 percent, or about 530,000, of its civilian workforce from the reductions required by statute in fiscal year 2017. These exclusions include workforces in the areas of acquisition, cyber, medical, and safety and security, among others. 
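The section 955 test is a simple percentage comparison: each workforce's cost reduction, expressed as a percentage, must be at least the percentage reduction in basic military pay. A minimal sketch of that check, using the figures DOD reported (6.4 percent for basic military pay and an estimated 7.1 percent for civilian personnel, cited above, plus the 5 percent contracted services reduction DOD reported elsewhere in its status report); the function name is illustrative, not from the statute:

```python
def meets_section_955(workforce_reduction_pct: float,
                      military_pay_reduction_pct: float) -> bool:
    """Section 955 test: a workforce's percentage cost reduction must be
    not less than the percentage reduction in basic military pay."""
    return workforce_reduction_pct >= military_pay_reduction_pct

MILITARY_PAY_CUT = 6.4   # reported reduction in basic military pay, FY 2012-2017
CIVILIAN_CUT = 7.1       # DOD's estimated civilian personnel cost reduction
CONTRACTED_CUT = 5.0     # DOD's estimated contracted services cost reduction

print(meets_section_955(CIVILIAN_CUT, MILITARY_PAY_CUT))    # True: requirement met
print(meets_section_955(CONTRACTED_CUT, MILITARY_PAY_CUT))  # False: shortfall
```

On these reported figures, the civilian workforce meets the requirement and contracted services falls short, which matches DOD's own estimate in the status report.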
Because DOD excluded segments of the civilian workforce that may not cost the same as the average full-time equivalent position, its cost estimation calculations may not reflect actual savings. For example, DOD excluded nearly 140,000 civilian acquisition personnel and nearly 60,000 civilian medical personnel in fiscal year 2017. As mentioned above, DOD did not include actual civilian workforce savings from fiscal years 2012 through 2015, and officials noted that even if DOD had included actual civilian personnel cost savings, it would be only an estimate based on this calculation. The officials said that DOD does not have the ability to identify savings for the applicable workforce because of broader limitations inherent in the department’s accounting methodology. For example, the officials stated that the exclusion categories are not part of DOD’s accounting structure and therefore DOD does not have the ability to report specific positions excluded and their associated costs. The officials stated that DOD had held internal discussions regarding changes necessary to the accounting structure that would allow the department to meet various reporting requirements, but it was determined that a solution would be too burdensome to implement in a timely manner without significant costs. DOD also reported that it will reduce contracted services costs by 5 percent over the time period, a percentage that falls short of the required reduction of not less, as a percentage of funding, than the 6.4 percent reduction in basic military pay. Section 955 requires that in any case in which savings fall short of the annual target, the report shall include an explanation of the reasons for such a shortfall. 
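The concern above about estimating savings from the workforce-wide average full-time equivalent pay can be illustrated with a purely hypothetical sketch (all pay and FTE figures below are invented for illustration, not DOD data): if the positions actually eliminated pay less than the average across the whole workforce, an average-pay estimate overstates the savings, and vice versa.

```python
# Hypothetical illustration only; these figures are invented, not DOD data.
ftes_cut = 10_000                 # number of positions eliminated

avg_pay_all = 90_000              # average pay across the entire civilian workforce
avg_pay_cut_positions = 70_000    # average pay of the positions actually eliminated

estimated_savings = ftes_cut * avg_pay_all           # average-pay estimation method
actual_savings = ftes_cut * avg_pay_cut_positions    # savings actually realized

overstatement = estimated_savings - actual_savings
print(f"${overstatement:,}")  # $200,000,000 overstated in this hypothetical
```

The gap is exactly the point the report makes: because exclusion categories are not part of DOD's accounting structure, the department cannot substitute the actual pay of the non-excluded positions for the workforce-wide average, so its reported civilian savings are estimates.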
DOD reported that although the department has significantly decreased advisory and assistance and other contracted services from the fiscal year 2012 level, there have been increases in equipment maintenance contracts as the department repairs and maintains its equipment in order to maintain its global presence. These increases are why DOD did not meet the contracted services reduction requirement. See table 4 for DOD’s military, civilian, and contracted services actual and estimated reductions. We previously recommended that DOD provide an explanation for any shortfall in its reduction for the civilian and contracted services workforces, as well as a description of actions DOD is taking to achieve the required savings. DOD concurred with our recommendation. While DOD did provide an explanation in its February 2016 status report for its shortfall in estimated contracted services reductions, it did not provide a description of actions it is taking to achieve the required savings. We continue to believe that our previous recommendation is valid and should be fully implemented. DOD’s February 2016 status report does not include an efficiencies plan for reducing its civilian and contracted services workforces in fiscal year 2015. Section 955 of the fiscal year 2013 NDAA required DOD to develop an efficiencies plan in order to reduce total funding for the civilian and contractor workforce from fiscal year 2012 through fiscal year 2017 at a rate not less, in percentage terms, than the savings in funding for basic military personnel pay achieved from reductions in military end strengths over the same period. The plan was to be developed within 90 days of the enactment of the NDAA, which would have been April 4, 2013. 
We have previously reported that DOD’s first two reports from September 2014 and February 2015 did not include a comprehensive description of the efficiencies plan for reductions that would provide congressional decision makers with information on how the department will achieve required savings. Further, section 955 requires DOD to submit status reports each fiscal year through 2018 that describe the implementation of the efficiencies plan to reduce costs for the civilian and contracted services workforces, and any modifications to the plan required due to changing circumstances. DOD’s February 2016 report does not include a description of the implementation of an efficiencies plan in fiscal year 2015 for achieving required reductions, as required by section 955. Section 955 states that in the development and implementation of the efficiencies plan, DOD may exclude certain civilian and contractor workforces from the required reductions. These exclusions are to be related to functions identified as core or critical to the mission of the department. DOD excluded approximately 538,000 civilian full-time equivalents of its approximately 776,000 civilian full-time equivalents from required reductions for fiscal year 2012. These exclusions have been in place since DOD’s first report in February 2014. However, while its February 2016 report states that the civilian workforce exclusions were selected because they are critical to the mission of the department, it does not describe how this criticality selection was determined, or whether DOD has assessed the need for changing the workforces it excludes, and officials stated that the department’s rationale is not included in other documents. According to OUSD (Comptroller) and military department officials, the exclusions were decided upon during discussions by OUSD (Comptroller) and the military departments prior to DOD’s first report, in February 2014. 
Without a comprehensive description of an efficiencies plan, including an explanation indicating how exclusions were determined, DOD cannot provide congressional decision makers with full information on its approach, or show how the department will achieve required savings on the non-excluded workforce. We previously recommended that DOD include a comprehensive description of an efficiencies plan to achieve savings for the civilian workforce and contracted services workforces for fiscal year 2012 through 2017. DOD concurred with our recommendation. Without including a comprehensive plan, DOD cannot provide congressional decision makers with complete information on how the department will achieve required savings. Further, we previously recommended that status reports included in the President’s budget request for fiscal years 2017 and 2018 should describe the implementation of the plan in the prior year. DOD concurred with our recommendation. Without a description of the implementation of the plan in the prior year, DOD cannot provide congressional decision makers with information regarding decisions it has made in order to achieve its goal, such as which workforces it has excluded. We continue to believe that these recommendations are valid and should be fully implemented. DOD’s February 2016 status report refers to elements present in workforce management laws, but it does not explain how savings are being achieved in a manner consistent with workforce management laws. Section 955 of the fiscal year 2013 NDAA states that the required plan shall be consistent with workforce management laws under section 129a of Title 10 of the U.S. Code. 
Section 129 of Title 10 outlines certain restrictions on DOD’s management of civilian personnel, including a requirement that the number of civilian personnel shall be managed solely on the basis of and consistent with total-force management policies and procedures established under section 129a, workload requirements, and funds made available to the department each fiscal year. Section 129a of Title 10 governs DOD’s general policy for total-force management, and requires the Secretary of Defense to establish policies and procedures for determining the most appropriate and cost-efficient mix of military, civilian, and contractor personnel to perform the mission of the department, among other things. DOD’s February 2016 status report references its Strategic Workforce Plan, which we reported in July 2014 did not meet the statutory requirement to include an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. The status report also states that in August 2016 the Deputy Secretary of Defense directed a manpower review to measure DOD’s compliance level with workforce management laws and other statutory requirements; however, it does not provide the level of detail needed to determine whether DOD’s method for achieving reductions is consistent with workforce management laws. We previously recommended that DOD include a description demonstrating that the plan is consistent with policies and procedures implementing workforce-management laws and steps that the department is taking to ensure that no unjustified transfers between workforces take place as part of the implementing plan. DOD concurred with our recommendation but stated that the department’s plans to date are consistent with and reflect workforce shaping and workload sourcing requirements, as well as other criteria pertaining to manpower requirements, such as risk mitigation. 
DOD also stated that the Office of the Under Secretary of Defense for Personnel and Readiness remains actively engaged in decisions affecting the workforce; that the department’s Planning, Programming, Budgeting and Execution System process helps to ensure that there are no unjustified transfers between workforces; and that any changes are made in accordance with workforce management laws. DOD included similar language in its February 2016 status report. However, as mentioned above, the description lacks an explanation demonstrating how the department’s reductions are consistent with workforce management laws. Without this explanation, decision makers and Congress will not be able to determine whether DOD’s actions are consistent with policies and procedures for implementing workforce-management laws, and whether DOD is taking steps to ensure that no unjustified transfers between workforces take place as part of the implementing plan. We continue to believe that our previous recommendation is valid. We are not making any new recommendations in this report and believe that DOD’s fully implementing the previous recommendations is needed to better inform the Congress. We provided a draft of this report to DOD for review and comment. In its written comments, DOD noted that we are not making new recommendations and stated that it has implemented the recommendations from our December 2015 report. Although DOD has taken some action in response to our December 2015 report, we disagree that the recommendations have been fully implemented, as summarized below. DOD’s comments are reprinted in appendix II. In response to the recommendation in our December 2015 report to include a comprehensive description of a plan to achieve savings for the civilian workforce and contracted services workforces for fiscal year (FY) 2012 through 2017, DOD stated in its written comments that it included a section that describes its plan, guidance, and implementation in its February 2016 status report. 
DOD also stated that it continues to conform to the principles and tenets of Strategic Workforce Planning and continues to conform to the plan prescribed in the department’s Planning, Programming, Budgeting, and Execution processes. DOD’s February 2016 report does reference its Strategic Workforce Plan and its Planning, Programming, Budgeting, and Execution processes; however, it does not include a comprehensive description of an implementation plan that DOD would use to achieve the congressionally mandated financial savings. Without a comprehensive description of such a plan, congressional decision makers do not have complete information on how the department will achieve required section 955 savings. In response to the recommendation in our December 2015 report to include a description demonstrating that the plan is consistent with policies and procedures implementing workforce-management laws and steps that the department is taking to ensure that no unjustified transfers between workforces take place as part of the implementing plan, DOD stated in its written comments that its February 2016 status report references DOD’s Strategic Workforce Plan, which outlines workforce management laws that the department follows. It also states it has taken actions outside of the February 2016 report to ensure that the department remains in compliance with section 955 and other statutory requirements. However, we reported in July 2014 that DOD’s Strategic Workforce Plan for fiscal years 2013-2018 lacked required elements under section 115b of Title 10 of the U.S. Code. Specifically, we found that the Strategic Workforce Plan did not address all statutory reporting requirements, such as the requirement to include an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. In May 2013, we recommended that DOD determine the appropriate workforce mix, and as of August 2016, DOD had not yet done so. 
Moreover, as we state in our report, DOD’s February 2016 status report does not provide the level of detail or an explanation needed to enable decision makers or Congress to determine whether DOD’s method for achieving reductions is consistent with workforce management laws. In response to the recommendation in our December 2015 report that status reports included in the President’s budget request for FY 2017 and 2018 should describe the implementation of the plan in the prior year, DOD stated in its written comments that its February 2016 report timeline covered FY 2012 to FY 2021 and therefore implemented the recommendation. However, while DOD included numbers of reductions in the civilian and military workforces for the previous fiscal year, it did not include a description of how it is implementing its plan. For example, section 955 requires DOD to submit status reports each fiscal year through 2018 that describe the implementation of the efficiencies plan to reduce costs for the civilian and contracted service workforces, and states that in the development and implementation of the efficiencies plan, DOD may exclude certain civilian and contractor workforces from the required reductions. DOD excluded approximately 538,000 civilian full-time equivalents of its approximately 776,000 civilian full-time equivalents from required reductions for fiscal year 2012. While its report states that the civilian workforce exclusions were selected because they are critical to the mission of the department, it does not describe how this criticality selection was determined, or whether DOD has assessed the need for changing the workforces it excludes. 
In response to the recommendation in our December 2015 report to include in its status reports the costs in civilian personnel and military basic pay for fiscal years 2012 through 2017, DOD stated in its written comments that it included in its February 2016 status report the number and cost of military and civilian personnel reduced. As we discuss in our report, DOD partially implemented the recommendation in its 2016 status report and included the actual and estimated numbers of military and civilian full-time equivalent reductions, and the costs associated with the workforces in FY 2012 and FY 2017. However, DOD did not include the costs associated with the workforces for each fiscal year in between, as we recommended. Therefore, DOD has not yet fully implemented the recommendation. In response to the recommendation in our December 2015 report to provide an explanation for any shortfall in its reduction for the civilian and contracted services workforces, as well as a description of actions DOD is taking to achieve the required savings, DOD stated in its written comments that it has met its civilian personnel cost reduction requirement, and thus no explanation of any shortfall was required. In its February 2016 report DOD partially implemented our recommendation by providing an explanation for estimated shortfalls for contracted services. In addition, as we state in our report, DOD is estimating that it is on track to meet its civilian personnel cost savings requirement in fiscal year 2017 and thus did not need to include a description of any shortfalls. However, our recommendation also called for a description of the specific actions DOD is taking to achieve the required savings. DOD did not include such a description in its February 2016 report and therefore DOD has not yet fully implemented our recommendation. 
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense (Comptroller); and the Under Secretary of Defense (Personnel and Readiness). In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of our review were to evaluate the extent to which DOD’s February 2016 status report demonstrates (1) DOD’s achievement of savings, as required by section 955 of the Fiscal Year 2013 National Defense Authorization Act (NDAA); and (2) DOD’s development and implementation of its efficiencies plan that is consistent with workforce management laws in fiscal year 2015, as required by section 955 of the Fiscal Year 2013 NDAA. To determine the extent to which DOD’s February 2016 status report demonstrates DOD’s achievement of savings, as required by section 955, we reviewed section 955 to determine what information is required to be reported and what levels of savings are required. We then assessed DOD’s February 2016 status report to determine what savings information was included in its report for the civilian and contracted services workforces. We compared savings information included against information requirements in section 955. We also compared DOD’s actual and projected fiscal years 2012 through 2017 reductions in military basic pay against information included in DOD’s February 2016 status report on (1) DOD’s actual and projected civilian workforce cost and full-time equivalent reductions from fiscal years 2012 through 2017; and (2) DOD’s actual and projected contracted services cost reductions from fiscal years 2012 through 2017. 
We also assessed the methodology for how DOD calculated its actual and estimated civilian workforce savings to determine whether DOD’s calculations accounted for exclusions to its workforce reductions requirements. We reviewed our December 2015 report that used workforce data and discussed the reliability of the data with OUSD (Comptroller) officials. We determined that these data were sufficiently reliable for the purpose of reporting on and analyzing reductions to the civilian workforces. To determine the extent to which DOD’s February 2016 report demonstrates DOD’s development and implementation of its efficiencies plan that is consistent with workforce management laws in fiscal year 2015, as required by section 955 of the Fiscal Year 2013 NDAA, we reviewed section 955 of the Fiscal Year 2013 NDAA to identify requirements for DOD’s report. We reviewed and analyzed DOD’s February 2016 status report to determine how, if at all, it demonstrated the implementation of an efficiencies plan. We interviewed officials from the Departments of the Army, Navy, and Air Force to determine the types of guidance they received from OSD, if any, and the extent to which they established their own guidance on developing and implementing DOD’s plan required by section 955. We also reviewed DOD’s February 2016 status report to determine whether DOD excluded workforces from the civilian and contractor workforce reduction requirement. We interviewed Office of the Under Secretary of Defense (Comptroller) officials and officials from the Departments of the Army, Navy, and Air Force to gain an understanding of how these workforces were excluded, the basis for such exclusions, and when the exclusions were identified. 
We spoke with officials from the Office of the Under Secretary of Defense (Comptroller) and officials from the Departments of the Army, Navy, and Air Force regarding what guidance was provided for the exclusions and how the exclusions were determined. Further, we requested documentation on the process used to determine such exclusions. To determine the extent to which DOD’s February 2016 report demonstrates that its reductions are consistent with workforce management laws as required by section 955, we reviewed DOD’s February 2016 status report and materials listed in the report and interviewed officials at OUSD. We determined that the level of detail included in DOD’s February 2016 report was not sufficient to conclude whether or not DOD was in compliance with workforce management laws. For both objectives, we identified actions taken, if any, in response to our 2015 report. We conducted this performance audit from February 2016 to October 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Vincent Balloon, Assistant Director; Timothy Carr, Michael Silver, Norris “Traye” Smith, Sabrina Streagle, Guiovany Venegas, and Cheryl Weissman made key contributions to this report.

With long-term fiscal challenges likely to continue, DOD must operate strategically and efficiently, to include cost-effective management of its human capital.
Section 955 of the NDAA for FY 2013 requires DOD to, among other things, develop and implement a plan to achieve savings in total funding for civilian and contracted services workforces from FYs 2012 through 2017. Section 955 also includes a provision that GAO review the section 955 status reports DOD submits to Congress to determine whether the required savings are being achieved and the plan is being implemented consistently with workforce-management laws. This report addresses the extent to which DOD’s February 2016 status report demonstrates DOD’s (1) achievement of savings and (2) development and implementation of its efficiencies plan that is consistent with workforce management laws in FY 2015. GAO reviewed DOD’s report and interviewed DOD officials. The Department of Defense (DOD) did not report all required data on military, civilian, and contracted services workforces in its February 2016 report that would demonstrate savings, as required by section 955 of the National Defense Authorization Act (NDAA) for Fiscal Year (FY) 2013, and it estimates that by FY 2017 it will meet savings for the civilian workforce but not for contracted services. Section 955 requires DOD to submit annual reports in FYs 2015-2018 that include the costs of civilian and contracted services workforces from FYs 2012-2017, among other items. See the table for DOD’s February 2016 compliance with selected reporting requirements. Officials stated that DOD interpreted section 955 as requiring DOD to report civilian savings achieved when comparing costs from FY 2012 to FY 2017, and not each year in between. Further, officials stated that DOD did not include full-time equivalents (FTEs) for contracted services in the report as required because they were unable to provide an accurate number. In December 2015 GAO recommended that DOD include cost savings for civilian personnel in its reports, and DOD concurred.
Without including these cost data, Congress may not know whether DOD is on track to meet the mandated savings. DOD has not developed and implemented an efficiencies plan for reducing the civilian and contracted services workforces, and DOD did not demonstrate how its reductions are consistent with workforce management laws in its February 2016 status report. Section 955 requires DOD to develop an efficiencies plan to reduce civilian personnel and costs for FYs 2012-2017, and for each FY through 2018 to submit a report that describes the implementation of the efficiencies plan. Furthermore, section 955 allows DOD to grant civilian and contractor workforce exclusions from section 955 required reductions for areas identified as critical. For example, DOD reported that it excluded about 538,000 of 776,000 civilian FTEs. However, its reports do not provide a description indicating why these exclusions were chosen. In December 2015 GAO recommended that DOD include a comprehensive description of the efficiencies plan to achieve savings, and DOD concurred. Without an efficiencies plan, including an explanation of its exclusions, DOD has not provided Congress with information on how the department will achieve required savings. GAO previously recommended that DOD fully address ongoing section 955 requirements, such as including an efficiencies plan, among other things, in its subsequent reports. DOD agreed but has not yet implemented them. GAO is not making any new recommendations, but believes fully implementing the previous ones would better inform Congress. In comments, DOD stated it had implemented GAO’s previous recommendations. DOD has taken some action, but GAO disagrees that the recommendations have been fully implemented, as discussed in this report.
Factors commonly used to evaluate tax policy in general can be applied to decisions of whether and how to extend expiring tax provisions, including tax expenditure provisions. The factors, listed in table 1 and discussed below, may also be relevant to evaluating other policy tools, such as spending programs or regulations. 1. Revenue Effects. Tax expenditures may, in effect, be viewed as spending programs channeled through the tax system. Tax expenditures can be viewed this way because they grant special tax relief for certain kinds of behavior by a taxpayer or for taxpayers in special circumstances. Revenues foregone through tax expenditures either reduce funding available for other federal activities or require higher tax rates to raise a given amount of revenue. Like decisions about spending, deciding whether to extend an expiring tax expenditure involves considering whether the benefit of the intended outcome is worth the effect on other programs or tax rates. Revenue the government would have collected absent a tax expenditure could have been used for other federal priorities, deficit reduction, or tax rate reductions. 2. Criteria for Good Tax Policy. Three long-standing criteria typically used to evaluate tax policy can be applied to expiring tax expenditures: equity; economic efficiency; and a combination of simplicity, transparency, and administrability (GAO-05-1009SP). An economically efficient tax expenditure produces a net gain that is said to improve economic efficiency. These gains improve peoples’ well-being in a variety of ways, including increased income and consumption opportunities. Estimating efficiency gains and losses can be challenging. Studies may be limited by what can be quantified; for example, studies may examine dollars spent on qualified research or the number of economic development projects built, rather than whether the use of funds for these activities constitutes a better use of resources. Simplicity, transparency, and administrability. A tax expenditure’s design can affect three related and desirable features of tax provisions: simplicity, transparency, and administrability.
Simple tax expenditures impose less taxpayer compliance burden, such as keeping records, learning about tax rules, filing tax returns, and other compliance activities. Transparent tax provisions are easy to understand, that is, taxpayers can grasp the logic behind them. Administrable tax expenditures have lower administrative costs for both the Internal Revenue Service (IRS) and third parties, such as banks or employers required to submit information on taxpayers’ income and transactions to IRS. Administration includes processing returns, programming information systems, answering taxpayer questions, and enforcement activities. Simplicity, transparency, and administrability are not the same but are interrelated. For example, extensions of expiring tax code provisions, sometimes retroactively, can add compliance burden, reduce taxpayers’ understanding of the tax laws, and impose additional costs on IRS, such as more phone calls from taxpayers. 3. Relationship to Other Policy Tools. Tax expenditures are one policy tool out of several—including spending, grants, loans and loan guarantees, and regulations—that policymakers can use to achieve public goals. The choice of whether to use tax expenditures, spending, or other tools depends on which approach better meets the goal at the lowest cost. Different policy tools may be more effective than others in achieving a particular policy outcome. With tax expenditures, certain activities may be cheaper and simpler to subsidize through the tax code because IRS has the administrative infrastructure to collect and remit money to millions of taxpayers. For example, the incremental administrative and compliance costs to deliver the tax credit for child and dependent care expenses may be relatively low compared to the costs of setting up a separate system for processing child care applications and sending vouchers to those eligible. 
How a tax expenditure is designed can affect its revenue effects and how it relates to the criteria for a good tax system. For example, depending on their design, tax expenditures can result in taxpayers receiving benefits for actions they would have taken absent the tax expenditure. Also, each type of tax expenditure creates tax savings in different ways and, consequently, reduces federal revenues in different ways and may have different distributional effects. The amount of tax relief per dollar that a taxpayer receives using an exclusion, exemption, or deduction depends on the taxpayer’s marginal tax rate. Generally, the higher a taxpayer’s marginal tax rate, the greater the tax savings from these tax expenditure types. Tax credits reduce tax liability dollar-for-dollar, so the value of a credit is the same regardless of a taxpayer’s marginal tax rate. The Government Performance and Results Act (GPRA) Modernization Act of 2010 (GPRAMA) can help in evaluating tax expenditures in that it establishes a framework for providing a more crosscutting and integrated approach to focusing on results and improving government performance. GPRAMA makes clear that tax expenditures are to be included in identifying the range of federal agencies and activities that contribute to crosscutting goals. Moving forward, GPRAMA implementation can help inform tough choices in setting priorities as policymakers address the rapidly building fiscal pressures facing our national government. If not well designed or effectively implemented, tax expenditures can contribute to mission fragmentation and program overlap, thus creating the potential for duplication with other policy tools. All federal spending and tax policy tools, including tax expenditures, should be reexamined to ensure that they are achieving their intended purposes and are designed in the most efficient and equitable manner. 4. Measurement Challenges.
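The arithmetic behind the deduction-versus-credit comparison can be sketched as follows. The rates and amounts below are hypothetical, chosen purely for illustration; they are not drawn from the testimony or the tax code:

```python
# Illustrative sketch: the tax saved by a deduction scales with the
# taxpayer's marginal rate, while a credit reduces liability
# dollar-for-dollar. All figures are hypothetical.

def deduction_savings(amount: float, marginal_rate: float) -> float:
    """Tax saved by deducting `amount` from taxable income."""
    return amount * marginal_rate

def credit_savings(credit: float) -> float:
    """A credit offsets tax liability dollar-for-dollar."""
    return credit

if __name__ == "__main__":
    for rate in (0.15, 0.25, 0.35):
        print(f"$1,000 deduction at a {rate:.0%} marginal rate saves "
              f"${deduction_savings(1000, rate):.0f}; "
              f"a $1,000 credit saves ${credit_savings(1000):.0f}")
```

Run as a script, this prints one line per hypothetical bracket, showing that the deduction is worth more to higher-rate taxpayers while the credit's value is constant.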
Unavailable or insufficient data can hinder policymakers’ ability to consider how the factors described above relate to particular tax expenditures. A key challenge is that data necessary to assess how and by whom a tax expenditure is used generally are not collected on tax returns unless IRS needs the information to ensure tax compliance or is legislatively mandated to collect or report the information. In some cases, IRS may combine reporting requirements to minimize its workload and taxpayer burden, and as a result, the information collected may not identify specific beneficiaries or activities targeted by a tax expenditure. Also, the influence of other economic and social factors can confound efforts to measure a tax expenditure’s effects on efficiency and equity. We and the Office of Management and Budget (OMB) have noted that the desired outcomes of a tax expenditure or other policy tool are often the combination of effects of the program and external factors. If policymakers conclude that additional data would facilitate reexamining a particular tax expenditure, decisions would be required on what data are needed, who should provide the data, who should collect the data, how to collect the data, what it would cost to collect the data, and whether the benefits of collecting additional data warrant the cost of doing so. Another factor to consider is how to facilitate data sharing and collaborative evaluation efforts amongst federal agencies. Our prior reports on tax expenditures illustrate how these factors can be used to help evaluate whether and how to extend expiring tax provisions. Domestic Ethanol Production. Our past work related to domestic ethanol production highlights the importance of considering how tax expenditures relate to other policy tools. 
Congress has supported domestic ethanol production through two policy tools: (1) a tax credit, the most recent version of which expired after December 31, 2011, and (2) a renewable-fuel standard that generally requires transportation fuels in the United States to contain certain volumes of biofuels, such as ethanol. In 2009, we reported that the tax credit was important in helping to create a profitable corn starch ethanol industry when the industry had to fund investment in new facilities, but is less important now for sustaining the industry because most of the capital investment has already been made. We found that Congress’s efforts to support domestic ethanol production through a tax credit and renewable-fuel standard were duplicative. The fuel standard is now at a level high enough to ensure that a market for domestic ethanol production exists in the absence of the ethanol tax credit. As such, we suggested that Congress consider modifying the credit or phasing it out. Congress allowed the credit to expire at the end of 2011. JCT did not include an estimate of the budgetary effect of extending the credit through December 31, 2013, in its March 2012 estimates, as the President did not propose to extend the credit. Higher Education. Our past work on higher-education tax expenditures illustrates how tax expenditures that are not transparent (i.e., cannot be easily understood by taxpayers) can result in taxpayers making decisions that do not maximize their tax benefits. The tuition and fees deduction, which expired after December 31, 2011, helped students and their families pay for higher education by allowing them to deduct qualified education expenses from income that would otherwise be taxable. 
In 2008, we found that tax filers did not always claim higher-education tax expenditures, such as the tuition and fees deduction, that maximize their potential tax benefits, potentially because of the complexity of higher-education tax provisions. Further analysis and simplification of the tax provisions involved could potentially increase transparency in the system. JCT estimates the budgetary effect of extending this provision through December 31, 2013, would be about $1.5 billion in fiscal years 2012-2022. Higher education tax expenditures also illustrate how measurement and methodological challenges can impede evaluating their effectiveness. In 2005, we reported that little is known about the effectiveness of education-related federal grants, loans, and tax expenditures in promoting student outcomes including college attendance, students’ choice among colleges, and the likelihood that students will continue their education. We also found that research gaps may be due, in part, to data and methodological challenges (such as difficulty isolating the behavioral effects of the tax expenditure under study from other changes) that have proven difficult to overcome. Research Tax Credit. Our past work on the research tax credit provides insights into how improving the design of a tax expenditure could improve its economic efficiency and reduce revenue costs. Economists widely agree that some government subsidy for research is justified because the social returns from research exceed the private returns that investors receive. Since 1981, the research tax credit has provided significant subsidies (an estimated $6 billion for fiscal year 2011) to encourage business to invest in research and development. The most recent version of the credit expired after December 31, 2011.
Despite the widespread support for the concept of a credit for increasing research activities, concerns have been raised about the cost-effectiveness of the design of the current credit and its administrative and compliance costs. We found that the research tax credit, as currently designed, distributes incentives unevenly across taxpayers and provides many recipients with windfall benefits, earned for research that they would have done anyway. For example, we found that for those claiming the regular credit, more than half of the credit such claimants earned was a windfall. The disparities in incentives can lead to an inefficient allocation of investment resources across businesses, and the windfall benefits represent foregone tax revenue that does not contribute to the credit’s objective. Accordingly, we suggested that Congress modify the research tax credit to reduce economic inefficiencies and excessive revenue costs. JCT estimates the budgetary effect of the President’s proposal to enhance and make permanent this provision would be about $99 billion in fiscal years 2012-2022. Our past work on the research tax credit also provides insight into how tax expenditure design can affect transparency and administrability. In 2009, we reported that there are numerous areas of disagreement between IRS and taxpayers concerning what types of spending qualify for the research credit because of issues such as the definitions used to determine eligibility and the documentation needed to support the claim. These disputes raise the cost of the credit to both taxpayers and IRS and diminish the credit’s incentive effect by making the ultimate benefit to taxpayers less certain. We made several recommendations to the Department of the Treasury (Treasury) to reduce the uncertainty that some taxpayers have about their ability to earn credits for their research activities. To date, Treasury has not fully implemented these recommendations. New Markets Tax Credit (NMTC).
Our past work on the NMTC provides examples highlighting issues of simplicity and the need to consider tax expenditures in light of other policy tools. Congress enacted the NMTC in 2000 as part of an ongoing effort to revitalize low-income communities. Treasury awards tax credits to Community Development Entities (CDE), which sell the credits to investors to raise funds. JCT estimates the budgetary effect of the President’s proposal extending and modifying the NMTC would be about $3.5 billion in fiscal years 2012-2022. In 2007, we reported that the NMTC appeared to increase investment in low-income communities. However, in 2010 we reported that the complexity of NMTC transaction structures appeared to make it difficult to complete smaller projects and often results in less of the money investors initially put into the project ending up in low-income community businesses—the beneficiaries of NMTC financing—than would be the case if the program were simplified. We suggested Congress consider offering grants to CDEs that would provide the funds to low-income community businesses and assess the extent to which the grant program would increase the amount of federal subsidy provided to low-income community businesses compared to the NMTC. One option would be for Congress to set aside a portion of funds to be used as grants and a portion to be used as tax credits under the current NMTC program to facilitate a comparison of the two programs. Revitalization Programs. Our past work on revitalization programs, including the Empowerment Zone (EZ), Enterprise Community (EC), and Renewal Community (RC) programs, provides an example of measurement challenges when evaluating tax expenditures. Congress established the EZ, EC, and RC programs to reduce unemployment and generate economic growth in selected Census tracts. 
Urban and rural communities designated as EZs, ECs, or RCs received grants, tax expenditures, or a combination of both to stimulate community development and business activity. Tax provisions for empowerment zones and the District of Columbia (DC) enterprise zone (including the first-time homebuyer credit for the District of Columbia) expired after December 31, 2011. JCT estimates that the budgetary effect of extending these provisions through December 31, 2013, would be $585 million from fiscal years 2012-2022. Our prior work has found improvements in certain measures of community development in EZ communities, but data and methodological challenges make it difficult to establish causal links. In the case of the EZ, EC, and RC programs, the lack of tax benefit data limited the ability of the Department of Housing and Urban Development (HUD) and the Department of Agriculture to evaluate the overall mix of grant and tax programs to revitalize selected urban and rural communities. In response to our recommendations, HUD and the IRS collaborated to share data on some program tax credits. However, the IRS data did not tie the program tax incentives to specific designated communities, making it difficult to assess the effect of the tax benefits. We have previously reported that if Congress authorizes similar programs that rely heavily on tax expenditures in the future, it would be prudent for federal agencies responsible for administering the programs to collect information necessary for determining whether the tax benefits are effective in achieving program goals. Nonbusiness Energy Property Credit. Our work on the nonbusiness energy property credit highlights the importance of considering revenue foregone and the criteria for good tax policy when determining whether and how to extend specific tax provisions. 
Enacted as part of the Energy Policy Act of 2005, the nonbusiness energy property credit was intended to increase homeowners’ investment in energy-conserving improvements, such as insulation systems, exterior windows, and metal roofs, by reducing their after-tax costs. The credit expired on December 31, 2011. JCT estimates the budgetary effect of the President’s proposal extending and modifying this provision through December 31, 2013, would be about $2.4 billion in fiscal years 2012-2022. The design of the credit affects its economic efficiency and revenue costs. The credit combines features of both cost-based and performance-based credits. Cost-based credits provide incentives that are usually a fixed percentage of qualified spending, whereas performance-based credits provide incentives that are tied to specific measures of energy savings and therefore may require before and after energy audits. The nonbusiness energy property credit is cost-based in that the amount of credit claimed is directly proportional to a taxpayer’s qualified spending. It is performance-based in that only certain qualifying purchases are eligible. In 2012, we reported that both the performance-based and cost-based credits have advantages and disadvantages with neither design being unambiguously the better option based on current information. For example, a performance-based credit is more likely to effectively reduce energy use and carbon dioxide emissions because it rewards energy savings from the investment rather than the cost-based credit’s rewarding of spending regardless of whether this spending results in energy savings. However, the performance-based credit may have significant up-front costs for energy audits, not required by the cost-based credit, which could reduce its effectiveness by discouraging investment. The credit’s design also can affect its administrability and equity.
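The contrast between the two credit designs can be sketched in a few lines. The credit rates, caps, and energy figures below are illustrative assumptions only, not the actual terms of the nonbusiness energy property credit or any other provision:

```python
# Hypothetical sketch contrasting cost-based and performance-based
# credit designs. All rates, caps, and energy figures are invented
# for illustration.

def cost_based_credit(qualified_spending: float,
                      rate: float = 0.10,
                      cap: float = 500.0) -> float:
    """Credit proportional to spending, regardless of energy saved."""
    return min(qualified_spending * rate, cap)

def performance_based_credit(kwh_saved_per_year: float,
                             rate_per_kwh: float = 0.05,
                             cap: float = 500.0) -> float:
    """Credit tied to measured energy savings, e.g., from an audit."""
    return min(kwh_saved_per_year * rate_per_kwh, cap)

# Two projects with the same cost but different energy results:
# the cost-based design rewards both identically, while the
# performance-based design rewards only the one that saves energy.
print(cost_based_credit(3000), cost_based_credit(3000))
print(performance_based_credit(4000), performance_based_credit(0))
```

The sketch shows why a cost-based design can subsidize spending that yields no energy savings, and why a performance-based design depends on measurement (the audits discussed above) to determine the saved kilowatt-hours.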
For taxpayers who do invest, these up-front costs may mean that a performance-based credit may have significantly higher taxpayer compliance and IRS administrative costs than a cost-based credit. Views on what is a fair distribution of the credit’s costs and benefits can differ dramatically across individuals. However, whatever one’s views of fairness, an analysis of the distribution of costs and benefits by such factors as income level can be useful. Indian Reservation Depreciation. Our work on this provision is another example of how measurement challenges can hinder evaluation of tax expenditures. The provision allows taxpayers to take larger deductions for depreciation from their business income earlier than they otherwise would be allowed for certain property on Indian reservations. For the deduction, taxpayers are not required to identify the reservation on which the depreciated property is located, preventing assessments linking investment to economic indicators on specific reservations. We suggested Congress consider requiring IRS to collect this information, but we noted that Congress would need to weigh the associated costs of collecting and analyzing the information as well as the effects on IRS’s other priorities. The provision expired on December 31, 2011. JCT estimates the budgetary effect of extending this provision through December 31, 2013, would be $100 million in fiscal years 2012-2022. In closing, considering the various factors I have laid out today can help when deciding whether and how to extend expiring tax provisions. Improving tax expenditure design may enable individual tax expenditures to achieve better results for the same revenue loss or the same results with less revenue loss. Also, reductions in revenue losses from eliminating ineffective or redundant tax expenditures could be substantial depending on the size of the eliminated provisions.
As we have stated in prior reports, we believe that tax expenditure performance is an area that would benefit from enhanced congressional scrutiny as Congress considers ways to address the nation’s long-term fiscal imbalance. Chairman Tiberi, Ranking Member Neal, and Members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions you and Members of the Subcommittee may have at this time.

GAO-05-690 and GAO, Tax Policy: Tax Expenditures Deserve More Scrutiny, GAO/GGD/AIMD-94-122 (Washington, D.C.: June 3, 1994).

For further information regarding this testimony, please contact James R. White, Director, Strategic Issues, at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Jeff Arkin, Assistant Director; Shannon Finnegan; Melanie Papasian; MaryLynn Sergent; Anne Stevens; and Sabrina Streagle. Kevin Daly, Tom Gilbert, Susan J. Irving, Thomas McCabe, Timothy Minelli, Ed Nannenhorn, Michael O’Neill, and Jim Wozny also provided technical support. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

GAO was asked to discuss the extension of tax provisions, sometimes called tax extenders, that either expired in 2011 or are scheduled to expire at the end of 2012. For a prior hearing of this subcommittee, the Joint Committee on Taxation (JCT) prepared a document detailing 64 expiring tax provisions.
Most of these provisions are tax expenditures: reductions in a federal taxpayer’s tax liability that result from special credits, deductions, exemptions and exclusions from taxation, deferral of tax liability, and preferential tax rates. Tax expenditures are often aimed at policy goals similar to those of spending programs, such as encouraging economic development in disadvantaged areas and stimulating research and development. Because revenue is foregone, these provisions may, in effect, be viewed as spending programs channeled through the tax system. For those provisions the President proposed extending through 2013, JCT estimated the budgetary effect would be at least $40 billion in foregone revenue over its 10-year budget window. This testimony outlines factors useful for considering trade-offs when deciding whether and how to extend provisions and illustrates their application to some of the expiring provisions. GAO’s testimony is based on previous work on tax reform and tax expenditures. Factors commonly used to evaluate tax policy, as well as other policy tools such as spending programs or regulations, can be applied to decisions about whether and how to extend expiring tax expenditures, as discussed below. Revenue Effects. Revenues foregone through tax expenditures either reduce resources available to fund other federal activities or require higher tax rates to raise a given amount of revenue. Like decisions about spending, deciding whether to extend an expiring tax expenditure involves considering whether the benefit of the intended outcome is worth the effect on other programs or tax rates. The nation’s long-term fiscal challenge makes it all the more important to ensure tax expenditures are efficient and relevant. Criteria for Good Tax Policy. Three long-standing criteria typically used to evaluate tax policy (equity; economic efficiency; and a combination of simplicity, transparency, and administrability) can be applied to the expiring tax expenditures.
Because the criteria may sometimes conflict with one another, there are usually trade-offs to consider when evaluating particular tax expenditures. Relationship to Other Policy Tools. Tax expenditures represent just one policy tool of several (including spending, grants, loans, and regulations) that policymakers can use to achieve policy goals. If not well designed, tax expenditures can create the potential for duplication with other policy tools. Measurement Challenges. Unavailable or insufficient data can hinder policymakers’ ability to consider how the factors described above relate to particular tax expenditures. A key challenge is that data necessary to assess how a tax expenditure is used and by whom generally are not collected on tax returns unless the Internal Revenue Service needs the information to ensure tax compliance or is legislatively mandated to collect or report the information. GAO’s prior reports on tax expenditures illustrate how these factors can be used to evaluate whether and how to extend expiring tax provisions. For example, GAO found that the research tax credit, as currently designed, provides many recipients with windfall benefits earned for spending they would have done anyway. A report on domestic ethanol production (in which GAO suggested modifying or phasing out a tax credit that was duplicative of the renewable-fuel standard) highlights the importance of considering how tax expenditures relate to other policy tools. GAO’s work on higher-education tax expenditures illustrates how tax expenditures that are not transparent (i.e., cannot be easily understood by taxpayers) can result in taxpayers making decisions that do not maximize their tax benefits. This work also concluded that little is known about the effectiveness of education-related federal grants, loans, and tax expenditures in promoting certain student outcomes, such as college attendance.
Research gaps may be due, in part, to data and methodological challenges (such as difficulty isolating the behavioral effects of the tax expenditure under study from other changes) that have proven difficult to overcome. GAO has made many recommendations in its previous reports on tax expenditures that reflect the factors described in this testimony. Some have been acted on, while others have not.
The FBI serves as the primary investigative unit of the Department of Justice. The FBI's mission responsibilities include investigating serious federal crimes, protecting the nation from foreign intelligence and terrorist threats, and assisting other law enforcement agencies. Approximately 12,500 special agents and 18,000 analysts and mission support personnel are located in the bureau's Washington, D.C., headquarters and in more than 450 offices in the United States and more than 50 offices in foreign countries. Mission responsibilities at the bureau are divided among the following five major organizational components: Intelligence: Collects and analyzes information on evolving threats to the United States and ensures its dissemination within the FBI, to law enforcement, and to the U.S. intelligence community. Counterterrorism and Counterintelligence: Identifies, assesses, investigates, and responds to national security threats. Criminal Investigations: Investigates serious federal crimes, including those associated with organized crime, violent offenses, white-collar crime, government and business corruption, and civil rights infractions. Probes federal statutory violations involving exploitation of the Internet and computer systems for criminal, foreign intelligence, and terrorism purposes. Law Enforcement: Provides law enforcement information and forensic services to federal, state, local, and international agencies. Administration: Manages the bureau's personnel programs, budgetary and financial services, records, information resources, and information security. To execute its mission responsibilities, the FBI relies on IT, and this reliance has continued to grow.
The bureau operates and maintains hundreds of computerized systems, networks, databases, and applications, such as: the Combined DNA Index System, which supports forensic examinations; the National Crime Information Center and the Integrated Automated Fingerprint Identification System, which help state and local law enforcement agencies identify criminals; the Automated Case Support (ACS) system, which manages information collected on investigative cases; the Investigative Data Warehouse, which aggregates data in a standard format from disparate databases to facilitate content management and data mining; and the Terrorist Screening Database, which consolidates identification information about known or suspected international and domestic terrorists. Following the terrorist attacks in the United States on September 11, 2001, the FBI shifted its mission focus to detecting and preventing future attacks, which ultimately led to the FBI's commitment to reorganize and transform. According to the bureau, the complexity of this mission shift, along with the changing law enforcement environment, strained its existing IT environment. As a result, the bureau accelerated the IT modernization program that it began in September 2000. This program, later named Trilogy, was the FBI's largest IT initiative to date and consisted of three parts: (1) the Information Presentation Component, to upgrade the FBI's computer hardware and system software; (2) the Transportation Network Component, to upgrade the FBI's communication network; and (3) the User Application Component, to upgrade and consolidate the FBI's five key investigative software applications. The heart of this last component became the Virtual Case File (VCF) system, which was intended to replace the obsolete ACS system, the FBI's primary investigative application. While the first two components of Trilogy experienced cost overruns and schedule delays, both are nevertheless still operating.
However, VCF never became fully operational. In fact, the FBI terminated the project after Trilogy's overall costs grew from $380 million to $537 million, the program fell behind schedule, and pilot testing showed that completion of VCF was infeasible and cost prohibitive. Among the reasons we and others cited for VCF's failure were poorly defined system requirements, ineffective requirements change control, limited contractor oversight, and human capital shortfalls due to, for example, a lack of continuity in certain management positions and a lack of trained staff for key program positions. The FBI reports that it has almost 500 systems, applications, databases, and networks that are in operation, undergoing enhancement, or being developed or acquired. In particular, it has identified 18 new or enhancement projects that support its intelligence, investigative, and analyst activities. Included among these is the Sentinel program. Sentinel succeeds and expands on VCF and is intended to meet the FBI's pressing need for a modern, automated capability for investigative case management and information sharing to help field agents and intelligence analysts perform their jobs more effectively and efficiently. The program's key objectives are to (1) successfully implement a system that acts as a single point of entry for all investigative case management and that provides paperless case management and workflow capabilities, (2) facilitate a bureau-wide organizational change management program, and (3) provide intuitive interfaces that feature data relevant to individual users.
Using commercially available software and hardware components, Sentinel is planned to provide a range of system capabilities and services, including investigative case management, leads management, and evidence management; document and records management, indexed searching, and electronic links to legacy FBI systems and external data sources; training, statistical, and reporting tools; and security and application management. The FBI plans to acquire Sentinel in four phases, each of which will span 12 to 18 months. While the specific content of each phase is to be proposed by and negotiated with the prime contractor, the general content of each phase is as follows: Phase 1: Includes a Web-based portal that will provide a data access tool for ACS and other legacy systems, as well as the definition of a service-oriented architecture to support delivery and sharing of common services across the bureau. Phase 2: Includes the creation of case document and records management capabilities, document repositories, improved information assurance, application workflow, and improved data labeling to enhance information sharing. Phase 3: Includes updating and enhancing system storage and search capabilities. Phase 4: Includes implementing a new case management system to replace ACS. Overall, the FBI estimates that the four phases will cost about $425 million and take 6 years to complete. For fiscal year 2005, the FBI reprogrammed $97 million in appropriated funds from various sources to fund Sentinel work, and it submitted a $100 million budget estimate for fiscal year 2007. To manage the acquisition and deployment of Sentinel, the FBI established a program management office within the Office of the Chief Information Officer. The program office is led by a program manager and consists of the eight primary units described here (see fig. 1). Human capital decision making is vested with the program manager (or the deputy program manager in his absence).
Program Management Office Staff: The General Counsel provides legal advice; a dedicated Contracting Officer manages program support and development contracts on behalf of the Program Management Office; and office staff manage day-to-day operations. Communications and Liaison Team: Prepares communications for the user community regarding Sentinel content and progress, media releases, and program briefings for stakeholders through FBI channels. Also prepares information and reports for congressional stakeholders and testimony for the Director, Deputy Attorney General, and Attorney General regarding the program. Organizational Change Management Team: Prepares the user community for adapting to new technology and associated work process changes and cultural shifts and serves as the user community's representative and information conduit to the program office. Business and Administrative Support Unit: Provides support and oversight services, including support for human capital management, information and physical security, budget and investment management, contract support, audit, cost estimation, financial management, earned value management, and property management. Program Integration Unit: Prepares program baselines and plans, including milestones, and tracks progress against them; also documents baseline changes. Manages the configuration management process, schedules program reviews, and provides major reports and updates regarding the program to bureau management and stakeholders. System Development Unit: Focuses on system design and development and related technical aspects of the program, such as design, development, and testing to ensure that technical solutions meet system and user requirements. Performs technical analyses of new requirements and changes to the enterprise architecture.
Transition Unit: Manages the phased roll-out of system capabilities, including headquarters and field site preparation, user training, and changeover in user support to the Operations and Maintenance Unit. Operations and Maintenance Unit: Oversees and supports deployed system capabilities. To support the program office, the FBI has also issued task orders under existing contracts for program management support and services. In 2005, we testified that the FBI’s efforts to establish a strategic approach to managing its IT human capital remained a work in progress, and that completing these efforts posed a significant challenge for the bureau. In addition, we reported that the CIO had yet to create a strategic approach to managing IT human capital. As we said at that time, such an approach includes an assessment of the core competencies and essential knowledge, skills, and abilities needed to perform key IT functions, as well as an inventory of existing workforce capabilities and a gap analysis between defined needs and existing capabilities. The approach also provides for defining strategies and actions for filling identified gaps, such as the appropriate mix of hiring, training, and contract activities. It also establishes performance and accountability mechanisms, such as time frames, resources, roles and responsibilities, and performance measures associated with executing the strategies and actions. In September 2005, the National Academy of Public Administration reported that the bureau had developed a strategic human capital plan and had initiatives under way to improve its human capital system. However, it also reported that the bureau’s programs, activities, and actions were unlikely to produce a successful human capital program. 
Specifically, human capital improvement efforts were not carried out in a systematic, coordinated, and strategic manner; human capital management responsibility and authority were shared among different individuals; implementation of initiatives that involved contractors was not effectively coordinated; and implementation of plans and decisions was not always sustained. The Academy concluded that the bureau was likely to miss its staffing targets, due in part to insufficient workforce planning. To its credit, the FBI has moved quickly to staff its Sentinel program office, following what the Sentinel program manager describes as a meticulous series of actions to determine staffing needs, develop position descriptions, review resumes, and reassess program needs. During the last year, it has also filled most of the positions in the plan, primarily by using contractors. Nevertheless, a few key positions remain unfilled. Moreover, the staffing plan addresses only the program's immediate staffing needs; it does not provide for the kind of strategic human capital planning and management that our research has shown to be critical to the success of any organization, such as inventorying the knowledge and skills of existing staff, forecasting knowledge and skill needs over the life of the program, and formulating explicit strategies for filling gaps. Exacerbating this lack of a strategic approach to human capital management is the fact that the program's inventory of risks does not include human capital as a program risk, and thus steps are not planned to proactively address these risks. Program officials told us that they are satisfied with Sentinel workforce management efforts and, although challenges lie ahead, are confident that the FBI can address the program's evolving human capital needs. In contrast, other program documentation cites human capital as a program challenge and risk.
In our view, the FBI's approach to managing human capital in the Sentinel program is reactive and introduces the risk of not having skilled personnel available. A more proactive approach would increase the bureau's ability to deliver Sentinel's needed functionality and promised mission benefits on time and within budget. To its credit, the FBI has moved quickly to staff its Sentinel program office. During the last year, it created a staffing plan for Sentinel that is to serve as the program's primary human capital planning document. This staffing plan defines the program's immediate workforce requirements and identifies the key program functions, positions, skills, and staffing levels that the FBI says it currently needs to begin executing the program. The staffing plan is intended to be a "living" document, meaning that the FBI plans to update it as required to reflect significant changes in the program office's roles, responsibilities, and staffing needs throughout the life of the program. Program officials stated that they developed the plan with the assistance of a contractor and that it reflects their meticulous efforts to analyze staffing needs (skills and levels), develop position descriptions, review resumes, and reassess program needs. Further, they said that it is based on more than 100 years of combined program management experience and knowledge and that these efforts complied with bureau policies and procedures. Using the plan, program officials told us that they collaborated with the FBI Human Capital Office to fill defined positions with transfers from other FBI units and other federal agencies, and by hiring from outside the government. In doing so, the officials said that their approach was to fill program leadership positions with government staff and to fill the rest with government and contractor staff.
Further, FBI officials said that they had initially focused on positions associated with near-term program management activities, such as program planning, requirements management, and contract solicitation and award. For government positions, program officials received support from the Human Capital Office in posting job announcements and processing applications. The program officials worked directly with existing contractors to fill contract positions. According to officials, they were able to address their initial staffing needs quickly because of the priority the Sentinel program manager, who is directly responsible for human capital decisions, devoted to recruitment and staffing efforts during the program's planning stages; the availability of the FBI's Human Capital Office to assist them; and the ability to draw from existing contract vehicles. Of the program's 78 positions, 60 (77 percent) are to be filled by contractors. This level of reliance on contractors for program management is appreciably higher than it was for another major IT program that we recently reviewed. For example, the ratio of government-to-contractor staffing on the Department of Homeland Security's US-VISIT program was about 50-50. According to Sentinel officials, their reliance on contractors for program management is a common practice in intelligence programs. While we are not aware of any generally accepted standards governing the desired mix of government versus contractor personnel performing program management functions, acquisition experts have recently raised over-reliance on contractors in performing program management functions as an emerging issue in the federal government. To date, the program office reports that it has filled 63 of 78 identified positions (81 percent).
According to program officials, they are actively recruiting for 5 of the 15 unfilled positions and plan to hire for the remaining 10 in later phases of the program, when the need for these positions becomes more relevant. Among the 15 vacancies is the lead test engineer position, which is important for ensuring the testability of defined system requirements early in the program. According to program officials, the unfilled positions have had no negative impact on the program's schedule or deliveries to date. (See fig. 2 for a complete list of the program office's positions, including those still unfilled.) The success of any IT program depends on effectively leveraging people, processes, and tools to achieve defined outcomes and results. To leverage people effectively, an organization must treat them as strategic assets. As we previously reported, a strategic approach to human capital management enables an organization to be aware of and prepared for its current and future human capital needs, such as workforce size, knowledge, skills, and training. Our research shows that, to be effective, such a strategic approach includes using data-driven, fact-based methods to (1) assess the knowledge and skills needed to execute a program; (2) inventory existing staff knowledge and skills; (3) forecast the knowledge and skills needed over time; (4) analyze the gaps in capabilities between the existing staff and future workforce needs, including consideration of evolving program and succession needs caused by turnover and retirement; and (5) formulate strategies for filling expected gaps, including training, additional hiring, and the appropriate use of contractors. (See fig. 3 for an overview of this process.) Through effective human capital management, organizations can mitigate the serious risks associated with not having highly qualified employees. The Sentinel program has yet to determine and follow such a strategic approach to managing its human capital needs.
In particular, in addressing its near-term staffing needs, FBI officials did not use a documented, fact-based, data-driven methodology to assess needs and existing capabilities, nor did they perform a gap analysis of the number of staff required and the specific skills and abilities needed to develop, maintain, and implement Sentinel. As previously stated, officials told us that they relied on their collective years of experience in managing IT projects and the assistance of a contractor to create the staffing plan, and that they reviewed resumes of candidates to fill the positions in the staffing plan. These efforts have not produced program life cycle strategies for retention of key staff, succession planning for key positions, long-term hiring of new staff, replenishment of workforce losses due to foreseeable attrition, or training of existing staff. The staffing plan also fails to specify the desired mix of government and contractor staff for the program. As we reported in 2005, the Chief Information Officer planned, at that time, to hire a contractor with human capital expertise to help identify gaps between existing skills and abilities and those needed to successfully modernize the bureau's entire IT organization. In commenting on a draft of this report, the CIO stated that, in July 2005, he began a three-phase strategic human capital planning initiative, the purpose of which is to provide the CIO with the means to meet the bureau's IT human capital needs for the 21st century. The three phases are (1) development of a competency model and an inventory of existing staff knowledge and skills, (2) an analysis of gaps between staff needs and existing capabilities, and (3) development and implementation of strategies to fill critical gaps. The FBI reports that it is close to completing the first phase but that much work remains to be done. At the same time, the Sentinel program is well under way.
Moreover, while the CIO stated that the Sentinel staffing plan will dovetail with this three-phase initiative, program officials told us that it was not clear to them how, or whether, the program's staffing efforts were aligned with or part of other human capital efforts under way at the bureau. Nevertheless, program officials told us that they do not see a need to change their approach to managing Sentinel human capital because they believe that the approach used to initially staff up the program office has served them well. Officials told us they will reassess their human capital needs for future phases of the project to ensure that the right complement of staff and mix of skills is available for each phase. In addition, they said that the bureau's life cycle management policies and procedures do not require such a strategic approach to managing human capital for IT programs. Our analysis of the bureau's system life cycle management directive and program management handbook confirmed that they do not contain policies, procedures, or guidance for doing so. In our view, not addressing Sentinel human capital more strategically and proactively increases the risk of not delivering required system capabilities and expected mission value on time and within budget. Risk management is a continuous, forward-looking process that is intended either to prevent program cost, schedule, and performance problems from occurring or to minimize the impact of problems that do occur by proactively identifying risks, implementing risk mitigation strategies, and measuring and disclosing progress in doing so. To its credit, the FBI has established a risk management process for Sentinel that includes a risk management plan and an inventory of risks that are to be proactively managed to mitigate both the probability of their occurring and their impact if they do occur. However, this risk inventory does not include any human capital-related risks.
According to program officials, the inventory does not include human capital risks because they do not see a need to include them. Available Sentinel program documentation and other statements by program officials, however, suggest otherwise. For instance, the Sentinel staffing plan states that adequate staffing is a critical factor in the program office's ability to successfully execute its responsibilities, and that staff recruitment will be a difficult challenge given the competition for skilled IT professionals with security clearances in the Washington metropolitan area. Moreover, the FBI's fiscal year 2007 budget submission for Sentinel (Exhibit 300) identifies the availability of human capital for the prime contractor as a program risk, and in commenting on a draft of this report, the CIO stated that human capital risks exist. In addition, officials identified various hiring challenges, such as the fact that two-thirds of applicants fail the bureau's security screening process and that the hiring process can be lengthy. Moreover, they said that they will face ongoing hiring issues due to attrition and staff rotations. For instance, several contractor staff had recently left the program (although the CIO said that this was normal attrition), and the bureau filled the positions within 30 days. Also, 4 of the program office's 19 government staff are on temporary duty and will rotate to other tours of duty, including the program manager, whose 2-year detail at the FBI expires in 2007 (although the possibility exists for a one-year extension). In our view, not identifying human capital as a program risk on a major IT program like Sentinel, and not managing it as such, increases the chances that promised system capabilities and benefits will not be delivered on time and within budget. The success of any IT program depends on effectively leveraging people, processes, and tools to achieve defined outcomes and results.
To leverage people effectively, an organization must treat and manage them as strategic assets. Notwithstanding the FBI's considerable efforts to quickly staff up the Sentinel program office, it has not adopted the kind of strategic management approach needed to effectively leverage Sentinel human capital throughout the life of the program, in part because the FBI's IT program management policies and procedures do not require it. Moreover, the program's risk management inventory does not include the availability of Sentinel human capital, and thus this availability is not recognized and managed as a serious program risk. Given the pressing need to deliver mission-critical investigative and intelligence IT support to FBI agents and analysts, and the importance of strategic human capital management to programs like Sentinel, it is essential that this program risk be proactively mitigated. Unless the FBI adopts a more strategic and proactive approach to managing Sentinel human capital and treats it as a program risk, the chances of the program delivering required intelligence and investigative capabilities and mission value in a timely and cost-effective manner are diminished. To strengthen the FBI's management of its Sentinel program and to better ensure that the program delivers required capabilities and expected benefits on time and within budget, we make the following two recommendations: The FBI Director should have the bureau's CIO establish IT program management policies and procedures for strategically managing IT programs' human capital needs and ensure that these policies and procedures are fully implemented on all major IT programs, including Sentinel. The FBI Director should have the CIO treat and manage both Sentinel program office and prime contractor human capital availability as program risks and periodically report to the Director on the status and results of efforts to mitigate these risks.
In written comments on a draft of this report, signed by the CIO and reprinted in appendix II, the FBI agreed with our recommendations and stated that, while progress has been made to lay a foundation for improved IT human capital management across the bureau, much work remains. In this regard, the FBI described steps completed, under way, and planned relative to managing human capital on the Sentinel program and across the FBI IT organization. For instance, the FBI stated that the CIO's office invested 3 months in developing a staffing plan for Sentinel that analyzed staffing needs in light of lessons learned from other IT projects, analyzed resumes of both government and contractor staff, and used contractor staff until government staff could replace them. In addition, the FBI stated that it has initiated Project Management Professional certification and training efforts and begun a strategic human capital planning initiative that is to provide a repeatable and strategic approach to managing IT human capital resources across its IT organization. We support these steps, as they are consistent with our findings and recommendations. The bureau also provided other technical comments and updated information, which we have incorporated, as appropriate, in the report. We are sending copies of this report to the Chairman and Vice Chairman of the Senate Select Committee on Intelligence and the Ranking Minority Member of the House Permanent Select Committee on Intelligence, as well as to the Chairman and Ranking Minority Member of the Senate Committee on the Judiciary; the Chairman of the Senate Committee on Appropriations, Subcommittee on Commerce, Justice, Science, and Related Agencies; and the Chairman and Ranking Member of the House Committee on Appropriations, Subcommittee on Science, the Departments of State, Justice, and Commerce, and Related Agencies.
We are also sending copies to the Attorney General; the Director, FBI; the Director, Office of Management and Budget; and other interested parties. In addition, the report will be available without charge on GAO's Web site at http://www.gao.gov. Should you have any questions about matters discussed in this report, please contact me at (202) 512-3439 or by e-mail at hiter@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs Office may be found on the last page of this report. Key contributors to this report are listed in appendix II. Our objective was to determine whether the Federal Bureau of Investigation (FBI) has adequately provided for the human capital needs of its Sentinel program. To address our objective, we focused on three areas: the FBI's efforts to date in staffing the Sentinel program office, the bureau's plans to address gaps between the program's human capital needs and existing FBI capabilities, and the extent to which the FBI is proactively treating and managing human capital as a program risk. To evaluate whether the FBI is adequately providing for the Sentinel program's human capital needs, we compared the bureau's efforts against relevant criteria and best practices, including our own framework for strategic human capital management. These criteria promote the use of data to determine key performance objectives and goals in identifying current and future human capital needs, including the appropriate number of employees, the key competencies and skills mix for mission accomplishment, and the appropriate deployment of staff across the organization. They also advocate strategies for identifying and filling human capital gaps and for performing succession planning, and they serve as the basis for efforts intended to mitigate human capital-related program risks.
To accomplish these steps, we requested key staffing-related documents from the FBI, including (1) the organization chart for the Sentinel program office, including filled positions and vacancies and the source of the resources filling those positions (i.e., internal FBI, contractors, outside hires); (2) FBI’s assessment of workforce needs—including positions, roles and responsibilities, and core competencies—to adequately perform system acquisition activities (i.e., configuration management, organizational change management, risk management, contractor tracking and oversight, and solicitation); (3) a current skills inventory and identification of gaps and shortfalls in human capital available to meet workforce needs and plans to address these shortfalls; and (4) FBI’s inventory of program risks, including risks associated with human capital or workforce planning. In addition, we reviewed the number and mix of contractor and government positions needed to staff the Sentinel program office and analyzed where the FBI stands in filling these positions. We reviewed the evidence provided, including FBI’s Life Cycle Management Directive Version 3.0 and the FBI Project Management Handbook Version 1.0, and compared it to our criteria to determine if the bureau’s plans and efforts to date comport with best practices and relevant guidance. Further, and in order to verify our analyses, we interviewed appropriate FBI officials and Sentinel program office personnel. We performed our work at FBI headquarters in Washington, D.C., from September 2005 through July 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, the following people made key contributions to this report: Paula Moore, Assistant Director; JC Ceaser; Neil Doherty; Nancy Glover; Dan Gordon; Kevin Walsh; and Kim Zelonis. 
The Federal Bureau of Investigation (FBI) recently began a 6-year, $425 million program called Sentinel to replace and expand on both its failed Virtual Case File (VCF) project and its antiquated, paper-based, legacy system for supporting mission-critical intelligence analysis and investigative case management activities. Because of the FBI's experience with VCF and the importance of Sentinel, GAO was requested to address a number of program management issues associated with acquiring Sentinel via a prime development contractor. This report focuses on one of these issues: whether the FBI is adequately providing for the program's human capital needs. The findings are based on GAO's review of relevant program documentation, interviews with program officials, and human capital management guidance. To its credit, the FBI has moved quickly to staff its Sentinel program office. During the last year, it created a staffing plan for Sentinel, which defines the positions needed for the program, and it has filled most of the positions in the plan, primarily by using contract staff (77 percent). However, a few key program management positions remain to be filled. More importantly, the Sentinel staffing plan addresses only the program office's immediate staffing needs. It does not provide for the kind of strategic human capital management focus that GAO's research and evaluations have shown to be essential to the success of any organizational entity. For example, the staffing plan was not derived using a documented, data-driven methodology and does not provide for inventorying the knowledge and skills of existing staff, forecasting future knowledge and skill needs, analyzing gaps in capabilities between the existing staff and future workforce needs (including consideration of expected succession needs), and formulating strategies for filling expected gaps.
Exacerbating this situation is that the FBI is not proactively managing Sentinel human capital availability as a program risk; it has neither included human capital in the program's risk inventory nor developed and implemented a proactive risk mitigation strategy, even though program documents cite human capital as both a challenge and a risk. According to program officials, they plan to manage their human capital needs in the same way as when they initially staffed the program office, in part because the bureau's IT system life cycle management policies and procedures do not require them to do otherwise. Unless the FBI adopts a more strategic approach to managing human capital for the Sentinel program and treats human capital as a program risk, the chances of delivering required intelligence and investigative support capabilities in a timely and cost-effective manner are reduced.
Agencies reported improper payment estimates of almost $55 billion in their fiscal year 2007 PARs or annual reports, an increase from the fiscal year 2006 estimate of about $41 billion. The reported increase was primarily attributable to a component of the Medicaid program reporting improper payment estimates for the first time, totaling about $13 billion for fiscal year 2007, which we view as a positive step to improve transparency over the full magnitude of improper payments. The $55 billion estimate comprises amounts reported for 78 programs at 21 agencies (see app. II for further details) and represents about 2 percent of total fiscal year 2007 federal executive branch outlays of almost $2.8 trillion. In addition, the $55 billion largely consists of improper payments made in eight programs, as shown in figure 1. Collectively, the eight programs account for about $48 billion, or approximately 88 percent, of the total estimate. Also, of the total improper payment estimate of $55 billion, we identified 19 programs and activities that estimated improper payments for the first time in their fiscal year 2007 PARs, totaling about $16 billion. Of these 19 programs, we identified 6—including Medicaid—that had been required to report selected improper payment information for several years prior to the passage of IPIA. In total, these 6 programs represented $14.8 billion, or 94 percent, of the approximately $16 billion in newly reported estimates. We view these agencies’ efforts as a positive step toward measuring improper payments and continuing progress in meeting the goals of IPIA. Likewise, agencies continued to report that they had made progress in reducing improper payments in their programs and activities. Since initial IPIA implementation, we noted that 39 agency programs reported estimated improper payment error rates for each of the 4 fiscal years, 2004 through 2007. 
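As a quick arithmetic check, the shares cited above can be reproduced from the rounded dollar figures reported in the PARs; the reported percentages reflect the unrounded underlying amounts, so the rounded inputs land within about a point. (This illustrative sketch is not part of the report's methodology.)

```python
# Arithmetic check of the shares cited above, using the rounded
# estimates reported in agencies' fiscal year 2007 PARs (in billions).
total_estimate = 55.0    # total reported improper payment estimate
eight_programs = 48.0    # portion attributable to the eight largest programs
total_outlays = 2800.0   # total executive branch outlays (~$2.8 trillion)
newly_reported = 16.0    # estimates reported for the first time in FY 2007
six_pre_ipia = 14.8      # the 6 programs required to report before IPIA

share_of_outlays = total_estimate / total_outlays  # ~0.020, about 2 percent
share_eight = eight_programs / total_estimate      # ~0.873, roughly 88 percent
share_six = six_pre_ipia / newly_reported          # ~0.925, roughly 94 percent
```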
Of the 39, 23 programs, or about 59 percent, had reduced error rates when comparing each program’s fiscal year 2007 error rate to the initial, or baseline, error rate reported for fiscal year 2004. In a separate analysis, we found that 34 programs had error rate reductions when comparing fiscal year 2007 error rates to the prior year rates. For example, the error rate of the U.S. Department of Agriculture’s (USDA) Marketing Assistance Loan program decreased from 20.3 percent in fiscal year 2006 to 7.5 percent in fiscal year 2007, a reduction of 12.8 percentage points. As we testified before this Subcommittee, USDA’s high error rate for the Marketing Assistance Loan program reported in its fiscal year 2006 PAR resulted from improvements in how it measured its improper payments. However, in its fiscal year 2007 PAR, USDA reported that a large percentage of fiscal year 2006 improper payments were caused by noncompliance with administrative procedures and that corrective actions had been taken to reduce the incidence of improper payments. Reported examples of corrective actions taken included implementing policies related to processing payments, conducting more frequent external audits of program effectiveness, and making the delivery of services consistent across county offices. OMB noted that further reductions in agency program estimated error rates are expected as agencies take steps to address payment errors attributed to insufficient or lack of documentation. OMB’s implementing guidance requires agencies to discuss in their PARs the portion of payment errors attributable to insufficient or lack of documentation, if applicable. We identified 25 programs from 10 agencies that attributed a portion of their payment errors to insufficient or no documentation. However, only 8 of these programs—all reported by USDA—cited what portion of the error rate resulted from insufficient or no documentation. 
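The error-rate figures above can be verified directly; note that the USDA decline is a difference of rates (percentage points), not a relative percentage change. (An illustrative check, not part of the report's methodology.)

```python
# Arithmetic behind the error-rate figures cited above.
programs_with_four_years = 39   # programs reporting rates for FY 2004-2007
programs_with_reduced_rates = 23
share_reduced = programs_with_reduced_rates / programs_with_four_years  # ~0.59

# USDA Marketing Assistance Loan program: the decline is a difference of
# rates, i.e., 12.8 percentage points, not a 12.8 percent relative drop.
fy2006_rate = 20.3  # percent
fy2007_rate = 7.5   # percent
decline_points = fy2006_rate - fy2007_rate     # 12.8 percentage points
relative_drop = decline_points / fy2006_rate   # ~0.63, a 63 percent relative drop
```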
For the remaining 17 programs, the agencies reported only that these types of errors contributed to their improper payments. For example, the Department of State (State) reported that there was insufficient documentation to support the grantee’s eligibility for an award, but did not cite a rate for this type of error. Similarly, the Federal Communications Commission (FCC) reported that lack of documentation was a significant concern in the auditors’ review of program payments, but did not report the affected portion of the error rate. Because 17 of the 25 agency programs that attributed some of their payment errors to insufficient or no documentation did not report the portion of payment errors attributable to these problems, we could not readily determine the extent to which such errors contributed to the total improper payment estimate of $55 billion. Yet we found that 25 of the 78 programs reporting improper payment estimates, or 32 percent, identified insufficient or no documentation as a cause of their improper payments. OMB anticipates that errors attributable to insufficient or no documentation will decrease significantly once agencies correct the root causes. From our review, we noted that 22 of the 25 agency programs reported corrective action plans to address errors due to insufficient or no documentation. Examples of these efforts included developing policies on documentation retention, updating processing procedures, and training providers on the importance of supporting documentation. While agencies have shown progress, major challenges remain in meeting the goals of IPIA and ultimately improving the integrity of payments. 
Specifically, some agencies have not yet reported estimates for all risk-susceptible programs, the total improper payment estimate does not yet reflect the full scope of improper payments across executive branch agencies, noncompliance issues continue to exist, reported statutory or regulatory barriers limit agencies’ ability to reduce improper payments, and agencies continue to face challenges in the implementation or design of internal controls to identify and prevent improper payments. IPIA requires agencies to annually review all of their programs and activities to identify those that may be susceptible to significant improper payments. Yet, in our review, we found that not all agencies reported conducting risk assessments. We also noted that four agencies reported that they did not conduct a risk assessment of all of their programs and activities because OMB guidance allows agency programs deemed not risk-susceptible to conduct a risk assessment generally every 3 years. As we have previously reported, this is inconsistent with the express terms of IPIA, which require that agencies annually review all of their programs and activities. However, OMB guidance does state that if a program experiences a significant change in legislation, a significant increase in funding level, or both, agencies are required to reassess the program’s risk susceptibility during the next annual cycle, even if it is less than 3 years from the last assessment. In its fiscal year 2007 PAR, the Department of the Interior (Interior) reported that it did not perform a risk assessment because the results of previous risk assessments demonstrated that Interior was at low risk for making improper payments. As a result, the agency reported that the next risk assessment would be completed in fiscal year 2009. HHS reported that it had last completed risk assessments in fiscal year 2006 and that it did not identify any new high-risk programs in that work. 
HHS reported that OMB’s implementing guidance requires risk assessments once every 3 years and that, as a result, HHS did not perform risk assessments during fiscal year 2007. We also identified three additional agencies that reported they were not required to conduct a risk assessment for specific programs that OMB had previously designated as risk-susceptible prior to IPIA implementation. These agencies determined that those programs had continued to demonstrate over a 2-year period a low-risk level for susceptibility to improper payments, and thus OMB had granted them relief from improper payments reporting. According to their PARs, the next risk assessments for the Environmental Protection Agency’s (EPA) Clean Water and Drinking Water State Revolving Funds and the Department of Veterans Affairs (VA) Insurance programs will be conducted in fiscal years 2010 and 2009, respectively. The Department of Housing and Urban Development (HUD) reported that it will conduct an annual risk assessment of its Community Development Block Grant (CDBG) program; however, because it reported improper payment estimates of less than $10 million for this program over 2 consecutive years, OMB granted it relief from annual improper payment reporting, and it did not report an estimate in its fiscal year 2007 PAR. OMB reported that, in aggregate, agencies have assessed risk and measured nearly 86 percent of all high-risk outlays and that agencies were focusing their resources on programs with the highest risk levels of improper payments. While we agree that, as a practical matter, a comprehensive risk assessment may not be warranted for programs with minimal outlays or potentially low-risk programs and activities, an appropriately designed risk assessment should be performed annually, as IPIA requires. 
As we previously reported, OMB guidance provides that agencies annually perform risk assessments of their programs and activities, but offers limited information on how to conduct an appropriately designed risk assessment, thus allowing agencies broad flexibility for determining a methodology to meet IPIA requirements. As such, the level and extent to which agencies conduct their risk assessments can vary. This is evident in our recent work on selected agencies’ IPIA implementation, in which we raised significant concerns regarding their risk assessment activities, as highlighted in the following examples: In September 2007, we reported that for fiscal year 2006, the Department of Homeland Security (DHS) did not perform a risk assessment on approximately $13 billion of its more than $29 billion in disbursements subject to IPIA. Also, DHS only tested programs with disbursements greater than $100 million and did not perform a qualitative risk assessment of all program operations, such as an assessment of internal controls, oversight and monitoring activities, and results from external audits. In November 2007, we reported that for fiscal years 2004 through 2006, neither the United States Agency for International Development (USAID) nor the National Aeronautics and Space Administration (NASA) had developed a systematic process to (1) identify risks that exist in their payment activities or (2) evaluate the results of their payment stream reviews, such as weighting and scoring the effectiveness of existing internal control over payments made and results from external audits. Furthermore, both USAID and NASA maintained insufficient or no risk assessment documentation to support their conclusions that no programs or activities were susceptible to significant improper payments. 
In December 2007, we reported that the Department of Defense’s (DOD) travel payment data used to assess the program’s risk of significant improper payments only included payments processed by the Defense Travel System (DTS)—approximately 10 percent of the $8.5 billion of the department’s travel obligations reported for fiscal year 2006. Further, the travel data excluded the largest user of DTS, the Army, which would likely have increased DOD’s travel improper payment estimate of $8 million by over $4 million. In its fiscal year 2007 PAR, DOD reported that the agency is implementing a sampling and review process for Army travel payments processed through its Integrated Automated Travel System in fiscal year 2008 to meet improper payment reporting requirements. Although we have identified significant deficiencies in the risk assessment methodologies used to address IPIA requirements at the four agencies mentioned above, not all agencies have been subjected to an independent review. Therefore, the extent to which the results of the agencies’ risk assessments can be relied on may not be fully known. We have previously recommended that OMB expand its implementing guidance to describe in greater detail factors that agencies should consider when conducting their annual risk assessments, such as program complexity, operational changes, findings from investigative reports, and financial statement and performance audit reports. OMB agreed with this recommendation and stated that it has taken steps to implement it. Specifically, OMB stated that it had included factors to be considered in agency risk assessments in its revised implementation guidance for IPIA. Our review found that not all agencies have developed improper payment estimates for all of the programs and activities they identified as susceptible to significant improper payments. 
As shown in table 1, the fiscal year 2007 total improper payment estimate of $55 billion did not include any amounts for 14 programs, with fiscal year 2007 outlays totaling about $170 billion. A majority of these programs represent newly identified risk-susceptible programs reported by DHS. The identification of these programs as risk-susceptible is a positive step toward addressing IPIA requirements. We also found, however, that three Department of Health and Human Services (HHS) programs had not reported improper payment estimates for fiscal year 2007, even though OMB had required these and other programs to report selected improper payment information for several years before passage of IPIA. After the enactment of IPIA, OMB’s implementing guidance required that these programs continue to report improper payment information under IPIA. Since IPIA implementation, HHS has reported on its various improper payment pilot activities to show that efforts were underway to fully address IPIA reporting requirements. For fiscal year 2007, HHS reported that pilot reviews were conducted in various states for the Temporary Assistance for Needy Families and Child Care and Development Fund programs and that estimated improper payment rates for these programs would be reported in fiscal year 2008. Further, HHS reported that it also expects to report a comprehensive improper payment estimate rate for the State Children’s Health Insurance Program that will encompass its fee-for-service, managed care, and eligibility components. We recognize that measuring improper payments for these state-administered programs and designing and implementing actions to reduce or eliminate them are not simple tasks, particularly for grant programs that rely on administration efforts at the state level. 
Consequently, as we previously reported in April 2006, communication, coordination, and cooperation among federal agencies and the states will be critical factors in estimating national improper payment rates and meeting IPIA reporting requirements for state-administered programs. Further, we found a few instances where estimates were not based on a 12-month reporting period. For example, HHS’s Medicaid program is the largest of the programs constituting the total improper payment estimate, with an estimate of about $13 billion for fiscal year 2007. The Medicaid program estimate, reported for the first time, is based on 6 months of fee-for-service claims processed by the states rather than a complete fiscal year. Generally, OMB guidance requires that a 12-month period be used to generate improper payment estimates as it more fully characterizes the extent of improper payments within a program for any given year. In its PAR, HHS reported that it is completing its review of the remaining 6 months and will report an annual Medicaid fee-for-service error rate, based on a full year of fiscal year 2006 fee-for-service claims, in its fiscal year 2008 PAR. We also found instances where agencies’ estimates encompassed only one component of a particular program. For example, USDA identified two types of errors related to its Supplemental Nutrition Program for Women, Infants, and Children—vendor payment errors and certification errors. However, as part of its IPIA reporting, USDA only reported on improper payments resulting from vendor payment errors. For certification errors, USDA reported that it plans to use results from the 2008 decennial income verification study to provide a nationally representative estimate and will report the error rate in fiscal year 2009. 
The extent to which other agencies used a review period of less than 12 months or estimated for only a component of their program is unknown, as most of the agencies reporting estimates did not provide this level of information in their PARs. As agencies continue to enhance their measurement processes and report on additional program components, it is likely the total improper payment estimate will increase. Lastly, we noted that while agencies reported improper payment estimates for their various programs and activities, only five agencies—covering nine programs—reported to some degree the amount of actual improper payments they expect to recover and how they will go about recovering them as part of their IPIA reporting. OMB guidance states that for program improper payment estimates exceeding $10 million, agencies must address this IPIA reporting requirement in their PARs. We would also point out that this separate reporting requirement is distinct from the recovery auditing reporting requirements OMB has outlined in its guidance for agencies to address in their PAR reporting. We discuss the Recovery Auditing Act and OMB reporting requirements later in this statement. We found that of the 78 programs with improper payment estimates, 47 reported improper payment estimates exceeding $10 million. Of this universe, only 9 agency programs reported on recovery of improper payments under IPIA. Of the 9, 6 programs reported on both aspects of the requirement—expected or actual recovery amount and how they will recover them. The remaining 3 programs reported a recovery amount but did not discuss how they recovered the amount, or their future plans for recovering the funds. For example, DHS reported that for its Individuals and Households program it had collected $18 million of Hurricane Katrina payments identified as improper during its payment sample testing, but did not report on its recovery method. 
In contrast, the Railroad Retirement Board (RRB) reported it had recovered $104.5 million for fiscal years 2003 to 2006 in Retirement and Survivors Benefits program receivables. RRB reported that its collection program is in full compliance with the Debt Collection Improvement Act of 1996 and recoveries are made through a variety of mechanisms. These include the offset of future benefits, reclamation from the financial institution of benefits erroneously paid after the death of a beneficiary, and direct payments from debtors. RRB also reported that fraudulent payments are referred to the OIG for prosecution through the Department of Justice (Justice). As agencies continue to enhance their IPIA reporting, full and reasonable disclosures regarding actual improper payments and actions to recover those payments will provide needed transparency of this issue and address the American public’s increasing demands for accountability over taxpayer funds. For fiscal year 2007, a limited number of agency auditors reported on compliance issues with IPIA as part of their financial statement audit, although such reporting is not specifically required by IPIA. Specifically, auditors for 5 of the 39 agencies included in our scope reported assessing the agencies’ compliance with IPIA. Of the 5, agency auditors for all except USAID reported noncompliance issues related to the key requirements of the act, including risk assessments, sampling methodologies, implementing corrective actions, recovering improper payments, and inadequate documentation. Fiscal year 2007 reflected the fourth year that auditors for HHS and DHS reported noncompliance issues with IPIA, including not estimating for all risk-susceptible programs and deficiencies related to sampling and testing of transactions. Agency auditors at the Department of Transportation (Transportation) and DOD reported noncompliance with IPIA for a second year. 
For fiscal year 2007, Transportation auditors reported that they had not received sufficient documentation by the time of PAR issuance to determine if the department’s sampling plan was statistically valid. The auditors for DOD reported that, for fiscal year 2007, the department was still in the process of developing procedures to identify improper payments and that its efforts to manage recovery audit contracts had been largely unsuccessful. As we previously testified before this Subcommittee, separate assessments conducted by agency auditors provided a valuable independent validation of agencies’ efforts to implement the act. Independent assessments would also enhance an agency’s ability to identify sound performance measures, monitor progress against those measures, and help establish performance and results expectations. Without this type of validation or other types of reviews performed by GAO and agency OIGs, it is difficult to determine the magnitude of deficiencies that may exist in agencies’ IPIA implementation efforts. As previously mentioned, 21 agencies reported improper payment estimates for 78 programs totaling $55 billion for fiscal year 2007. Of the 21 agencies, 16 reported improper payment estimates that exceeded $10 million for one or more programs and, therefore, under OMB guidance, were required to report on various elements as part of their plans to reduce improper payments, including any statutory or regulatory barrier that may limit the agencies’ corrective actions in reducing improper payments. Of the 16 agencies required to report on any statutory or regulatory barriers, 14 reported on whether they had such barriers that may limit corrective actions in reducing improper payments. The remaining 2 agencies did not address whether any statutory or regulatory barriers existed. 
We further noted that of the 14 agencies that addressed statutory or regulatory barriers, 9 identified such barriers that may limit corrective actions to reduce improper payments. The remaining 5 agencies reported that they either had no existing statutory or regulatory barriers or were unaware of any at this time. Agencies cited various barriers that restricted their ability to better manage their programs against improper payments. For example, the Office of Personnel Management’s (OPM) Retirement Program (Civil Service Retirement System and Federal Employees Retirement System) reported in its fiscal year 2007 PAR that it faces regulatory barriers that restrict its ability to recover improper payments. For instance, once OPM learns of the death of an annuitant, it requests that Treasury reclaim all posthumously issued payments from the deceased’s bank account. When there is insufficient money in the account, OPM would like to seek collection from the individual who last withdrew money from the account. According to OPM, based on current law and Treasury’s regulations, financial institutions are barred from providing OPM with the information necessary to recover these improper payments. The law and regulations have specifically exempted the Social Security Administration (SSA), RRB, and VA from this prohibition, but not OPM. Further, OPM reported that this situation has a substantial impact on its ability to prevent and recover improper payments. OPM has determined that the current law will need to be amended to overcome this prohibition and Treasury has drafted legislative language to address this issue. The Department of Education (Education) reported that the ability to perform data matching between Federal Student Aid applications and tax return data would substantially reduce improper payments in the Pell Grant program, as the large majority of errors are the result of misreporting of income and related data fields. 
However, according to OMB, Section 6103(c) of the Internal Revenue Code, concerning confidentiality of tax return information, precludes data matching with regard to grants by Education. In its January 2007 annual report on improper payments, OMB reported that the President’s Fiscal Year 2008 Budget contained a series of reforms that are necessary to achieve greater program integrity and payment accuracy, including a proposal to facilitate data matching of Pell grant program data. This report indicates that, through administrative changes, Education and the Internal Revenue Service intend to implement a process to verify students’ (and their parents’) income, tax, and certain household information appearing on the tax returns that they provided as part of their applications for federal student aid. Agencies continue to face challenges in the implementation or design of internal controls to identify and prevent improper payments. Over half of the agencies’ OIGs identified management or performance challenges that could increase the risk of improper payments, including challenges related to internal controls. In addition, several OIGs identified instances where agencies needed to improve their oversight of grantees receiving federal funds. For example, in its fiscal year 2007 PAR, Education’s OIG reported that its recent investigations continued to uncover problems, including inadequate attention to improper payments and failure to identify and take corrective action to detect and prevent fraudulent activities by grantees. The Small Business Administration’s (SBA) OIG included a management challenge related to the agency’s controls over the section 7(a) loan guaranty purchase process. The OIG reported that the majority of the loans made under the program are made with little or no review by SBA prior to loan approval because SBA has delegated most of the credit decisions to lenders originating these loans. 
SBA’s review of lender requests for guaranty purchases on defaulted loans is, therefore, the agency’s primary tool for assessing lender compliance on individual loans and protecting SBA from making erroneous purchase payments. However, OIG audits of early defaulted loans and SBA’s guaranty purchase process have shown that reviews made by the National Guaranty Purchase Center have not consistently detected lender failures to administer loans in full compliance with SBA requirements and prudent lending practices, resulting in improper payments. Management challenges were also found in agency programs that did not estimate improper payments in their fiscal year 2007 PAR. The National Science Foundation (NSF) OIG found that NSF did not have a comprehensive, risk-based system to oversee and monitor contract awards and ensure that the requirements of each contract were being met. In another example, Treasury’s OIG identified erroneous and improper payments as a major management challenge and reported that some tax credits, such as the Education Credit, provide opportunities for abuse in income tax claims. Related to this issue, Treasury’s independent auditor reported that weaknesses in controls over the collection of tax revenues owed to the federal government and over the issuance of tax refunds resulted in lost revenue to the federal government and potentially billions of dollars in improper payments, which the auditors classified as a material weakness. Section 831 of the National Defense Authorization Act for Fiscal Year 2002 provides an impetus for applicable agencies to systematically identify and recover contract overpayments. The act requires that agencies that enter into contracts with a total value in excess of $500 million in a fiscal year carry out a cost-effective program for identifying and recovering amounts erroneously paid to contractors. 
The law authorizes federal agencies to retain recovered funds to cover in-house administrative costs as well as to pay contractors, such as collection agencies. Any residual recoveries, net of these program costs, shall be credited back to the original appropriation from which the improper payment was made, subject to restrictions as described in the legislation. The techniques used in recovery auditing offer the opportunity for identifying weaknesses in agency internal controls, which can be modified or upgraded to be more effective in preventing improper payments before they occur for subsequent contract outlays. However, we would like to emphasize that effective internal control calls for a sound, ongoing invoice review and approval process as the first line of defense in preventing unallowable contract costs. Given the large volume and complexity of federal payments and historically low recovery rates for certain programs, it is much more efficient and effective to pay bills properly in the first place. Prevention is always preferred to detection and collection. Aside from minimizing overpayments, preventing improper payments increases public confidence in the administration of programs and avoids the difficulties associated with the “pay and chase” aspects of recovering improper payments. Without strong preventive controls, agencies’ internal control activities over payments to contractors will not be effective in reducing the risk of improper payments. Beginning in fiscal year 2004, OMB required that applicable agencies publicly report on their recovery auditing efforts as part of their PAR reporting of improper payment information. Agencies are required to discuss any contract types excluded from review and the justification for doing so. Agencies are also required to report, in table format, various amounts related to contracts subject to review and actually reviewed, contract amounts identified for recovery and actually recovered, and prior year amounts. 
In addition, agencies are to discuss the following: a general description and evaluation of the steps taken to carry out a recovery auditing program, a corrective action plan to address root causes of payment error, and a general description and evaluation of any management improvement program. For fiscal year 2007, agencies reported reviewing about $329 billion in contract payments to vendors under recovery audit programs. From these reviews, agencies reported identifying about $121 million in improper payments for recovery and actually recovering about $87 million, or an estimated overall rate of recovery of approximately 72 percent, as shown in table 2. We found that the number of agencies reporting recovery audit information remained the same when compared to the prior year. However, the fiscal year 2007 dollar amounts identified for recovery significantly decreased by about $217 million from fiscal year 2006. We noted that a significant decrease in DOD’s fiscal year 2007 reporting of amounts identified for recovery and amounts recovered from the prior year contributed to the overall decrease. For example, for fiscal year 2006 DOD reported $195.3 million for contract overpayments identified for recovery. This amount decreased sharply to $24.6 million for fiscal year 2007. Similarly, DOD reported recovering $137.9 million for fiscal year 2006 compared to just $19.6 million for fiscal year 2007. According to OMB, the significant decrease in DOD’s reported amounts resulted from the department’s exclusion of voluntary refunds of contract payments at the recommendation of a DOD OIG audit since the voluntary refunds did not originate from recovery audit efforts. In addition, we noted that agencies used different types of resources to carry out their recovery audit programs. 
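The overall recovery rate and DOD's year-over-year drop follow directly from the reported amounts; the following is an illustrative arithmetic sketch (dollars in millions), not part of the report's methodology.

```python
# Recovery auditing arithmetic from the reported FY 2007 amounts
# (dollars in millions).
identified = 121.0  # improper payments identified for recovery
recovered = 87.0    # amounts actually recovered
recovery_rate = recovered / identified  # ~0.719, approximately 72 percent

# DOD's decrease after excluding voluntary refunds of contract payments.
dod_identified_fy06, dod_identified_fy07 = 195.3, 24.6
dod_recovered_fy06, dod_recovered_fy07 = 137.9, 19.6
identified_drop = dod_identified_fy06 - dod_identified_fy07  # $170.7 million
recovered_drop = dod_recovered_fy06 - dod_recovered_fy07     # $118.3 million
```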
Of the 21 agencies reporting recovery auditing information for fiscal year 2007, 9 reported that they contracted out their recovery audit services, 3 conducted in-house recovery audits, 5 reported using both in-house staff and recovery audit contractors, and 2 were silent on the resources used. The remaining 2 agencies—HUD and Labor—did not conduct recovery audits, as they reported it was not cost beneficial. HUD reported in its fiscal year 2007 PAR that current internal controls over its contract payment and contract close-out processes were adequate to reduce the risks of overpayments. HUD further reported on continued initiatives such as strengthening its fund control processes. Therefore, HUD concluded that a recovery auditing program would not be cost beneficial and was not warranted. Likewise, Labor reported that its sampling and testing of nonpayroll costs (department expenses, including contract payments, related to the operation and administration of program and headquarters activities) for the current and prior fiscal years found no improper payments in its contract payments. Based on these results, Labor decided that a recovery auditing program was not warranted in fiscal year 2007. However, Labor reported that it plans to implement a recovery auditing program for contract payments in fiscal year 2008 and will report its recovery audit actions, costs, and amounts recovered on an annual basis. From our review of the PARs, we found that agencies’ reporting of the various recovery auditing reporting elements was limited. For example, agencies generally provided some information on steps taken to carry out a recovery audit program. However, fewer than half (8 agencies) reported on their corrective action plans to address root causes of contract payment errors.
For example, the Department of Energy (Energy) reported that it established a policy that prescribes requirements for identifying overpayments to contractors and establishes reporting standards to track the status of recoveries. However, Energy did not report on corrective actions to address the root causes of contract overpayments. We also found that three agencies—the Department of Commerce (Commerce), Justice, and SSA—reported justifications for certain contracts that were excluded from their recovery audit reviews. For example, Commerce reported that travel payments, bankcard/purchase card payments, all procurement vehicles with other federal agencies, and government bills of lading were excluded from its review because the costs of recovery audit activities would likely exceed the benefits. Justice reported that certain payments at foreign offices were excluded because they were processed by the Department of State. Lastly, SSA reported that it excluded cost-type contracts that either (1) had not been completed and for which payments were interim, provisional, or otherwise subject to further adjustment by the government in accordance with the terms and conditions of the contract, or (2) were completed and subjected to final contract audit, with all prior interim payments accounted for and reconciled before payment of the contractor’s final voucher. In closing, we recognize that measuring improper payments and designing and implementing actions to reduce them are not simple tasks or easily accomplished. Further, while internal control should be maintained as the first line of defense against improper payments, recovery auditing holds promise as a cost-effective means of identifying contractor overpayments. We are pleased that agencies are identifying and reporting on more risk-susceptible programs and have reported that overall program error rates have decreased since IPIA implementation.
Yet, we also note that deficiencies continued to be identified regarding agencies’ efforts to comply with IPIA based on independent assessments conducted by agency auditors or from past GAO reviews. As agencies continue to strengthen their program integrity efforts and recovery audit reviews, fulfilling the requirements of IPIA and the Recovery Auditing Act will require sustained attention to implementation and oversight to monitor whether desired results are being achieved. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have. For more information regarding this testimony, please contact McCoy Williams, Managing Director, Financial Management and Assurance, at (202) 512-2600 or by e-mail at williamsm1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony included Carla Lewis, Assistant Director; Gabrielle Fagan; Neeraj Goswami; Mary Osorno; Christina Quattrociocchi; Donell Ries; and Viny Talwar.

[Appendix: list of programs included in our review (table omitted), including the Community Development Block Grant (Entitlement Grants, States/Small Cities); Research and Education Grants and Cooperative Agreements; Federal Employees Group Life Insurance; Federal Employees Health Benefits Program; Retirement Program (Civil Service Retirement System and Federal Employees Retirement System); 504 Certified Development Companies (Debentures and Guaranties); 7(a) Business Loan Program (Guaranty Purchases and Guaranty Approvals); Old Age and Survivors’ Insurance; Business Class Travel and Sensitive Payments; International Information Program—U.S. Speaker and Specialist Program; International Narcotics and Law Enforcement Affairs—Narcotics Program; Highway Planning and Construction; Earned Income Tax Credit; and Dependency and Indemnity Compensation.]

Appendix II: Improper Payment Estimates Reported in Agency Fiscal Year 2006 and 2007 Performance and Accountability Reports or Annual Reports. [Table omitted. For each agency program, the table reports the fiscal year 2006 and 2007 total improper payment estimates (dollars in millions) and error rates (percent). Footnotes note, among other things, agencies first included in the scope of review in fiscal year 2007, fiscal year 2006 estimates updated to revised figures reported in the fiscal year 2007 PAR or annual report, error rates that were less than one percent or rounded to zero, and annual reports not available as of the end of fieldwork.]

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The federal government is accountable for how its agencies and grantees spend hundreds of billions of taxpayer dollars and is responsible for safeguarding those funds against improper payments and recouping those funds when improper payments occur. The Congress enacted the Improper Payments Information Act of 2002 (IPIA) and section 831 of the National Defense Authorization Act for Fiscal Year 2002, commonly known as the Recovery Auditing Act, to address these issues. GAO was asked to testify on agencies’ efforts to eliminate and recover improper payments.
Specifically, GAO focused on (1) progress made in agencies’ implementation and reporting under IPIA for fiscal year 2007, (2) major challenges that continue to hinder full reporting of improper payment information, and (3) agencies’ efforts to report on recovery auditing and recoup contract overpayments. This testimony is based in part on a recently issued report (GAO-08-377R) in addition to a further review and analysis of improper payment and recovery auditing information reported in agencies’ fiscal year 2007 performance and accountability reports (PAR) or annual reports. The Office of Management and Budget (OMB) provided technical comments, which GAO incorporated as appropriate. While agencies have made progress, GAO identified ongoing challenges in key areas related to IPIA and recovery auditing implementation and reporting. (1) Progress made in agencies’ implementation and reporting under IPIA: Agencies reported improper payment estimates of about $55 billion in their fiscal year 2007 PARs or annual reports, an increase from the almost $41 billion reported in fiscal year 2006. The reported increase was primarily attributable to a component of the Medicaid program reporting, for the first time, improper payments totaling about $13 billion, which GAO viewed as a positive step toward improving transparency over the full magnitude of improper payments. The $55 billion estimate consists of 21 agencies reporting for 78 programs, including 19 agency programs or activities reporting for the first time in fiscal year 2007. Further, agency programs that first reported error rates in fiscal year 2004 showed an overall decrease in those rates by fiscal year 2007. OMB noted that further reductions in error rates are expected as agencies take steps to address payment errors resulting from insufficient or no documentation.
(2) Challenges with IPIA implementation: Not all agencies reported conducting risk assessments of all of their programs and activities as required under IPIA. Further, agencies have not reported improper payment estimates for 14 risk-susceptible programs with outlays totaling about $170 billion. Additionally, in some instances, agencies did not measure improper payments for a 12-month period as generally required by OMB’s implementing guidance, nor did the estimates reflect improper payments for the entire program. Four agency auditors reported noncompliance issues with IPIA regarding risk assessments, sampling methodologies, corrective actions, recovery of improper payments, and inadequate documentation. Agencies also reported that statutory or regulatory barriers may limit corrective actions to reduce improper payments. Lastly, agencies continue to face challenges in the implementation or design of internal controls to identify and prevent improper payments. Specifically, over half of agencies’ Offices of Inspectors General identified management or performance challenges that could increase the risk of improper payments. (3) Agencies’ efforts to report recovery auditing information continue: In total, 21 agencies reported identifying about $121 million in improper payments in fiscal year 2007 for recovery and actually recovering about $87 million, a decrease of about $217 million in the amount identified for recovery compared to the prior year. Most of the decrease can be attributed to the Department of Defense’s decision to stop reporting voluntary refunds. GAO noted that few agencies reported on corrective action plans to address the root causes of contract payment errors. Also, two agencies reported that conducting recovery audits was not cost beneficial. All but two agencies reported they contracted out recovery audit services, conducted in-house recovery audits, or both. The other two were silent on this matter.
Under NCLBA, states are required to hold their Title I schools accountable for students’ performance by developing academic standards and tests, measuring student proficiency in certain grades and subjects, and determining whether schools are meeting proficiency goals. Schools that have not met state established goals for 2 or more consecutive years are identified as in need of improvement and must implement certain activities meant to improve student academic achievement. NCLBA also requires states to set aside Title I funds to assist schools in implementing improvement activities. Title I of the Elementary and Secondary Education Act (ESEA), as amended and reauthorized by NCLBA, authorizes federal funds to help elementary and secondary schools establish and maintain programs that will improve the educational opportunities of economically disadvantaged children. Title I is the largest federal program supporting education in kindergarten through 12th grade, supplying an estimated $12.8 billion in federal funds in fiscal year 2007. Appropriations for Title I grew rapidly in the years following the enactment of NCLBA, from about $8.8 billion in fiscal year 2001 to $12.3 billion in 2004. However, Title I funding growth slowed between 2004 and 2007. (See fig. 1.) Title I funds are allocated through state educational agencies to districts using statutory formulas based primarily on Census Bureau estimates of the number of students from families below the poverty line in each district. States retain a share for administration and school improvement activities before passing most of the funds on to school districts. In turn, districts are required to allocate Title I funds first to schools with poverty rates over 75 percent in rank order, with any remaining funds distributed at their discretion to schools in rank order of poverty either districtwide or within grade spans. 
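The within-district allocation rule just described (schools with poverty rates over 75 percent funded first in rank order, with any remainder distributed in rank order of poverty) can be sketched in simplified form. The school names, dollar amounts, and flat per-school grant below are hypothetical; actual Title I allocations are formula-driven rather than a fixed amount per school:

```python
# Simplified sketch of Title I rank ordering within a district. School names,
# funding totals, and the flat per-school grant are hypothetical.
def allocate(schools, total_funds, per_school_grant):
    """schools: list of (name, poverty_rate) tuples. Returns {name: grant}."""
    ranked = sorted(schools, key=lambda s: s[1], reverse=True)
    must_fund = [s for s in ranked if s[1] > 0.75]       # poverty over 75 percent
    discretionary = [s for s in ranked if s[1] <= 0.75]  # funded from any remainder
    grants, remaining = {}, total_funds
    for name, _ in must_fund + discretionary:
        if remaining <= 0:
            break
        grants[name] = min(per_school_grant, remaining)
        remaining -= grants[name]
    return grants

district = [("North", 0.82), ("South", 0.60), ("East", 0.91), ("West", 0.70)]
allocate(district, total_funds=250_000, per_school_grant=100_000)
# East and North (both above 75 percent poverty) are funded first in rank
# order; West receives what is left, and South goes unfunded.
```

The same rank ordering applies whether the discretionary remainder is distributed districtwide or within grade spans.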
A school’s Title I status can change from year to year because school enrollment numbers and demographics vary over time, and annual allocations to districts under Title I formulas can vary considerably. In 2002, NCLBA added several new provisions to the ESEA, as amended, to strengthen the accountability of schools identified for improvement. These provisions included requiring states to develop academic achievement standards and establish proficiency goals for making adequate yearly progress (AYP) that will lead to 100 percent of their students being proficient in reading, mathematics, and science by 2014. To measure their progress, states administer an annual assessment to students in most grade levels. In addition, each school’s assessment data must be disaggregated in order to compare the achievement levels of students within certain designated groups with the state’s performance goals. These student groups include the economically disadvantaged, major racial and ethnic groups, students with disabilities, and those with limited English proficiency, and each of these groups generally must make AYP in order for the school to make AYP. The last reauthorization of ESEA prior to NCLBA—the Improving America’s Schools Act of 1994 (IASA)—required that schools be identified for improvement if they did not make AYP for 2 consecutive years and that they take certain actions to improve student performance. NCLBA also includes a timeline for implementing specific interventions based on the number of years a school fails to make AYP and adds some interventions that were not required under IASA. (See table 1.) Under NCLBA, schools that fail to make AYP for 2 consecutive years are identified for improvement and must develop an improvement plan in consultation with the district, school staff, parents, and outside experts.
This plan, which is subject to district approval, must incorporate strategies to address the specific academic issues that caused the school to be identified for improvement. At this stage districts also must offer students in the school the opportunity to transfer to a higher-performing public school in the district—an option that is called offering public school choice. After the third year, districts must also offer supplemental educational services (SES), such as tutoring. Under NCLBA, if a school fails to make AYP for 4 consecutive years, it is required to implement one of the corrective actions identified in the law, such as implementing a new curriculum or extending the school year or day. Finally, if a school fails to make AYP for 5 or more years, it must make plans to restructure its governance and implement those plans. Schools exit improvement status if they make AYP for 2 consecutive years. In addition, all schools identified for improvement are required to spend at least 10 percent of their Title I funds on professional development for the school’s teachers and principal as appropriate. School districts bear the primary responsibility for ensuring that their schools in improvement receive technical assistance. Specifically, districts must ensure that each school identified for improvement receives assistance based on scientifically based research in three areas: analysis of student assessment data, identifying and implementing instructional strategies, and analysis of the school budget, as shown in table 2. States provide technical assistance to districts and schools through their statewide systems of support, with a priority given to those in improvement status.
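The escalating timeline of interventions described above can be summarized as a simple lookup keyed to the number of consecutive years a school has missed AYP. The function and phase labels below are our own shorthand, not statutory text:

```python
# Sketch of the NCLBA intervention timeline described above, keyed to the
# number of consecutive years a school has failed to make AYP.
def required_actions(consecutive_years_missed: int) -> list:
    actions = []
    if consecutive_years_missed >= 2:
        actions += ["identified for improvement; develop improvement plan",
                    "offer public school choice"]
    if consecutive_years_missed >= 3:
        actions.append("offer supplemental educational services (tutoring)")
    if consecutive_years_missed >= 4:
        actions.append("corrective action (e.g., new curriculum, extended school year or day)")
    if consecutive_years_missed >= 5:
        actions.append("plan and implement restructuring")
    return actions

required_actions(2)  # improvement plan and public school choice only
```

A school exits this progression once it makes AYP for 2 consecutive years.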
In developing their statewide system of support, the state educational agency must (1) establish school support teams that include individuals who are knowledgeable about scientifically based research and practice to assist schools throughout the state that are identified for improvement in areas such as strengthening instructional programs; (2) designate and use distinguished teachers and principals who are chosen from Title I schools and have been especially successful in improving academic achievement; and (3) devise additional approaches to improve student performance, for example, by drawing on the expertise of other entities such as institutions of higher education, educational service agencies, or private providers of scientifically based technical assistance. NCLBA requires states to set aside a portion of Title I funds to allocate to districts for use by and for schools for school improvement activities and to carry out the state’s responsibilities for school improvement. In fiscal years 2002 and 2003, states were required to reserve 2 percent of the Title I funds for school improvement, and in fiscal years 2004 to 2007, states were required to reserve 4 percent. However, states may not always be able to reserve the full amount for school improvement because of a hold-harmless provision that prevents states from reducing the amount of Title I funds any district receives from what it received the prior year. The hold-harmless provision is intended to protect school districts from declines in Title I funding from year to year by preventing the state from allocating them less funding than the year before. If the total increase in Title I funds from districts with increasing allocations is less than 4 percent of a state’s total Title I allocation, then that state would not be able to set aside the full 4 percent of Title I funds for school improvement.
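This constraint can be made concrete with a small sketch. The district allocations below are hypothetical; the point is that the set-aside can be drawn only from year-over-year growth, so the amount a state can reserve is the lesser of 4 percent of its total allocation and that growth:

```python
# Hypothetical sketch of the hold-harmless cap on the 4 percent set-aside.
# The set-aside may come only from districts whose Title I allocation grew,
# since no district may receive less than it received the prior year.
def max_set_aside(prior, current, target_share=0.04):
    """prior/current: {district: Title I allocation in dollars}."""
    state_total = sum(current.values())
    growth = sum(max(0, current[d] - prior[d]) for d in current)
    return min(target_share * state_total, growth)

prior = {"A": 10_000_000, "B": 5_000_000, "C": 8_000_000}
current = {"A": 10_400_000, "B": 4_800_000, "C": 8_100_000}
max_set_aside(prior, current)
# The full 4 percent of the $23.3 million state total would be $932,000,
# but only $500,000 of growth exists above the hold-harmless floor,
# so the set-aside is capped at $500,000.
```

In this example district B’s allocation shrank, so its shortfall must be absorbed entirely by the growth in districts A and C, which is why Education noted that districts slated for increases disproportionately contribute to the set-aside.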
States are generally required to allocate 95 percent of the 4 percent set-aside to districts for schools identified for improvement. States may use the remaining 5 percent of the 4 percent set-aside to carry out their responsibilities related to school improvement, including creating and maintaining their statewide system of support. (See fig. 2.) NCLBA establishes priorities and requirements for the distribution of school improvement funds to districts. Specifically, under NCLBA states must give funding preference to districts that serve the lowest-achieving schools, demonstrate the greatest need for assistance, and demonstrate the strongest commitment to using the funds to assist their lowest-performing schools with meeting progress goals. States may either allocate these funds directly to districts for schools identified for improvement to be used for activities required under the school improvement section of the law or, with the permission of districts, retain funds to provide for these activities for schools identified for improvement. While NCLBA directs 95 percent of improvement funds to schools through districts, some flexibility exists for funds to be used at the state or district level for improvement-related activities. For example, NCLBA gives states authority to use some of the 95 percent funds at the district level if the state determines that it has more funding than needed to provide assistance to schools in improvement. In addition, states may use some of their 5 percent funds generally retained at the state level for districts to support district-level activities. Among other requirements regarding the allocation of funds, states are required to make publicly available a list of the schools that have received funds or services from the school improvement set-aside and the percentage of students in each of these schools from families with incomes below the poverty line. 
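As a quick worked example of this 95/5 split (the state’s total Title I allocation here is hypothetical):

```python
# Worked example of the NCLBA split of a full 4 percent set-aside.
# The state's total Title I allocation is hypothetical; integer arithmetic
# keeps the dollar figures exact.
state_title_i = 500_000_000
set_aside = state_title_i * 4 // 100       # $20 million reserved for school improvement
to_districts = set_aside * 95 // 100       # $19 million for schools identified for improvement
state_retained = set_aside - to_districts  # $1 million for the statewide system of support
```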
In addition to the Title I set-aside, Education officials told us that states may use state funds for school improvement or incorporate other federal funds to support school improvement efforts, including the School Improvement Grant Program under NCLBA, Comprehensive School Reform, Reading First, and Title II teacher and principal quality programs (see table 3). These programs either establish funding priorities for schools identified for improvement or allow for state flexibility to establish such priorities. Education oversees how states allocate school improvement funds as part of its overall monitoring of state compliance with Title I and NCLBA. Education monitors states in two ways: (1) by routinely gathering and analyzing data collected from Web-based searches and documents, such as the Consolidated State Performance Reports, and (2) by conducting on-site visits to state educational agencies and selected districts and schools within each state to interview officials and review relevant documents. Education has a 3-year monitoring cycle for visiting each state. During these visits, Education reviews whether states provide guidance to districts related to the use of school improvement funds and activities and how the state monitors school improvement plans. Education’s monitoring guide includes specific questions about how the state allocated school improvement funds, whether all the funds have been spent, and what guidance the state provided to districts—and was recently updated to include some additional questions on whether states are monitoring expenditures of school improvement funds at the school level and assisting schools in effectively using their resources to make AYP and exit improvement status. The hold-harmless provision, which is designed to protect school districts from reductions in their Title I funding, prevented some states from being able to target school improvement funds to low-performing schools.
However, many states have used other federal and state funds for school improvement efforts. The hold-harmless provision prioritizes maintaining the Title I funding of all eligible districts over ensuring that states can set aside the full 4 percent for schools identified for improvement—the lowest-performing schools. Twenty-two states have been unable to set aside the full 4 percent of Title I funds for school improvement for 1 or more years since NCLBA was enacted because they did not have enough funds to do so after satisfying the hold-harmless provision. Schools identified for improvement are, by definition, performing worse than other schools—and may be among the neediest. When states cannot set aside the full 4 percent for school improvement, it is difficult for them to plan and provide consistent assistance to these schools. In addition to Title I funds for school improvement, many states have dedicated other federal funds and state funds to school improvement efforts. In the period since NCLBA was enacted, state funds used for this purpose totaled almost $2.6 billion, compared to $1.3 billion in federal Title I funds. While the hold-harmless provision is designed to protect school districts from reductions in their Title I funding, it has prevented some states from being able to set aside the full amount of funds for school improvement, which are intended for the lowest-performing schools. While the total amount of Title I funds a state receives does not decrease in any one year as a result of calculating the 4 percent set-aside, the hold-harmless provision can affect how those funds are allocated within a state. Specifically, when states set aside funds for school improvement, the hold-harmless provision prevents the state from reducing the Title I funding for any school district from the previous year.
Sometimes, after taking into consideration the hold-harmless provision, there are not enough funds available from those districts with increasing Title I allocations to cover the full 4 percent set-aside. Specifically, 22 states have been unable to set aside the full portion of Title I funds for school improvement for 1 or more years since NCLBA was enacted because they did not have enough left over after satisfying the hold-harmless provision. Six of these—Florida, Kansas, Kentucky, Maine, Massachusetts, and Michigan—have been unable to set aside the full amount for 3 or more years. (See fig. 3.) Title I allocations are distributed through states to school districts based on poverty levels, and the hold-harmless provision protects districts from receiving less than they received the previous year. In other words, if a district’s population of low-income students decreases, the hold-harmless provision ensures that a district does not receive less Title I funds than the previous year as a result of the school improvement set-aside. Consequently, states can only set aside funds for school improvement that would otherwise have been allocated to school districts slated for Title I increases. In addition to the 22 states affected by the hold-harmless provision, 4 states did not set aside the full portion of Title I school improvement funds for other reasons. For example, 1 state reported that it did not set aside the entire set-aside amount because it had few schools identified for improvement. In 2006, 12 states were unable to set aside the full 4 percent of Title I funds for school improvement due to the hold-harmless provision, with set-asides ranging from as little as 0.2 percent in Kansas to 3.75 percent in Florida. The lowest-performing schools—schools identified for improvement—are affected when states cannot set aside the full 4 percent for school improvement.
These schools—which are the targets of the school improvement funding—have failed to meet state performance goals and are, by definition, performing worse than other schools. Effectively, the hold-harmless provision prioritizes preserving the Title I funding of all eligible Title I districts over ensuring that the lowest-performing schools receive funds for school improvement. Furthermore, schools identified for improvement may be among the neediest. In fiscal year 2006, schools identified for improvement in the 12 states that were unable to set aside the full 4 percent had higher average percentages of students in poverty and minority students compared to other Title I schools that were in need of improvement in those states. (See table 4.) When states cannot set aside the full portion of Title I funds for school improvement, it is difficult for states to provide consistent assistance to schools identified for improvement. States that were unable to set aside the full 4 percent for school improvement experienced large decreases in their school improvement funds from year to year compared to all other states. (See table 5.) For example, Ohio officials told us that they experienced a decline of $14 million in Title I allocations to districts between fiscal years 2004 and 2005 due to a decrease in census estimates of the number of low-income students. Since the state still had to provide all districts with no less Title I funds than the year before, it set aside 58 percent, or $9.3 million, less in school improvement funds than it had in the previous year. An Ohio official said this variability made it difficult to commit school improvement assistance to districts. To address this issue, Ohio now retains a portion of its total Title I school improvement set-aside each year to help ensure that school improvement funds will be available if there are future decreases in school improvement funds as a result of the hold-harmless provision. 
There is also wide variation among states in the average amount of school improvement money available per school in improvement. (See fig. 4.) The average amount per school in improvement varies due to differences in overall Title I allocations as well as the number of schools identified for improvement in each state. For example, Massachusetts received over $200 million in Title I funds in fiscal year 2006 and set aside less than $780,000 for its 455 schools identified for improvement, for an average of approximately $1,700 available per school identified for improvement. In contrast, Texas received over $1 billion in Title I funds in fiscal year 2006 and set aside $47 million for its 291 schools identified for improvement, averaging approximately $163,000 per school identified for improvement. Education, recognizing challenges associated with the hold-harmless provision, has proposed eliminating the provision as part of its 2007 budget justification and again as part of its proposals for reauthorization of NCLBA. In its 2007 budget justification, Education estimated states’ ability to set aside the full 4 percent of Title I funds for school improvement for fiscal year 2005 and contended that the hold-harmless provision, in conjunction with Title I funding fluctuations, limited many states’ ability to reserve these funds. Additionally, the department pointed out that districts slated for Title I increases disproportionately contribute to the Title I school improvement set-aside. Congress has not repealed the hold-harmless provision and is currently deliberating the reauthorization of NCLBA. In addition to Title I funds for school improvement, many states have dedicated other federal funds to school improvement efforts.
To further support school improvement efforts, 38 states targeted funds from other federal programs intended to improve student achievement, including the Comprehensive School Reform Demonstration Program (CSR), Reading First, and teacher and principal quality programs under Title II of NCLBA. Several states we visited reported incorporating CSR funds and Reading First funds into their school improvement strategies. For example, in Ohio, CSR funds were prioritized toward school improvement purposes under NCLBA. Ohio’s school improvement funding scheme provided Title I set-aside funds to schools for up to 3 years, after which schools could obtain funds from the CSR program. Additionally, 17 states have contributed almost $2.6 billion in state funds for school improvement activities since NCLBA was enacted, nearly double the $1.3 billion in federal Title I school improvement funds provided over the same period. In 2006, 14 states contributed state funds for school improvement under NCLBA. (See table 6.) For example, in 2006, Georgia spent $9.5 million of its own funds on its statewide system of support, nearly as much as it expended in Title I school improvement funds. The 5 percent of the Title I school improvement set-aside that Georgia reserves under NCLBA for its own use supports 8 employees in its school improvement division, which implements its statewide system of support. The remaining 107 employees in the division are supported by Georgia’s own state funds. We found no relationship between the use of state funds for school improvement and whether a state reserved the full Title I set-aside amount required under NCLBA. States generally target improvement funds to the most persistently underperforming schools, but some states did not fulfill some NCLBA requirements for allocating or tracking funds.
On our survey, states generally reported that they provided more funds to the most persistently underperforming schools, and those schools had higher percentages of low-income and minority students than all other Title I schools. To allocate school improvement funds, 37 states used state-established criteria that included factors such as the number of years the school had been identified for improvement, 2 states used a competitive grant process, and 8 used some other method. However, 4 states reported that they allocated funds equally among schools in improvement, and may not have taken into consideration factors required by NCLBA, such as focusing on the lowest-achieving schools. In addition, 1 state allocated Title I improvement funds to districts without schools in improvement and did not take the required steps to do so. Education did not identify these potential compliance issues as part of its monitoring efforts. We referred these issues to Education, and the department is following up with relevant states. Also, 4 states were unable to provide complete information on which schools in their state received improvement funds, as required under NCLBA. Education has not provided guidance on how states should provide this information and does not monitor states’ compliance with this requirement. Generally, we found that states targeted school improvement funds to the most persistently underperforming schools—those that had failed to make AYP for several years—and states tended to provide more funds to these schools. For example, the median grant amount for schools in restructuring nationwide was about $40,000 more than for schools in corrective action in 2006. (See fig. 5.) Overall, schools receiving improvement funds differed from Title I schools not in improvement and schools in improvement that did not receive funds.
For example, schools receiving improvement funds had higher percentages of students in poverty and higher percentages of minority students compared to Title I schools not identified for improvement. (See table 7.) In addition, 54 percent of schools that received improvement funds were located in urban areas compared to 24 percent of all other Title I schools not identified for improvement. Nearly half of the schools that received funds were primary schools and nearly one-third were middle schools. While schools identified for improvement that received funds had poverty and minority percentages similar to those of all other schools identified for improvement, there were some differences between these two groups. For example, 26 percent of schools that received improvement funds were located in rural areas, compared to 12 percent of all other schools identified for improvement. In the 2005-2006 school year, approximately 71 percent of schools identified for improvement received school improvement funds. Thirty-seven states established criteria at the state level to determine which schools should receive Title I school improvement funds or services, and the remaining states used other allocation methods. (See fig. 6.) Of the 37 states, 27 used criteria that included the number of years the school failed to make AYP, and 21 states used criteria that included the number of students in each school. For example, Michigan officials told us that their allocation formula includes the year of school improvement as well as overall student enrollment. The state also differentiates between schools that failed to make AYP for academic reasons and those that missed AYP targets for other reasons, such as graduation rate or attendance. Of the 14 states that used methods other than state-established criteria to allocate funds, 2 states—Colorado and Idaho—distributed funds through a competitive grant process.
Eight states used other allocation methods such as distributing funds to districts by ranking the schools identified for improvement based on school performance and the number of low-income students. However, we found that Delaware, New Hampshire, Virginia, and the District of Columbia reported they required districts to provide each school receiving school improvement funds an equal amount of funding, and, thus, may not have prioritized the allocation of funds as required under NCLBA. In addition to their various allocation methods, 9 states gave districts flexibility in determining which schools received funds. For example, New York allocates funds to districts based on state-established criteria regarding schools in need of improvement. However, districts can choose which schools receive funds and the amount of funds those schools receive. In addition to criteria used to allocate funds, states also varied in the proportion of school improvement funds allocated to schools and retained by the state. In 2006, 38 states allocated 95 percent of the school improvement set-aside funds directly to local school districts for schools identified for improvement, as NCLBA requires, with the remaining 5 percent retained by states to carry out their responsibilities. In 2006, 1 state that we visited retained less than 5 percent for its statewide system of support and distributed more than 95 percent to districts for schools identified for improvement. In contrast, some states retained more than 5 percent of their school improvement set-aside, as permitted under NCLBA under certain circumstances. With the approval of districts, a state may retain more than 5 percent to directly provide school improvement services for schools or arrange for other entities to provide these services. In 2006, 10 states retained more than 5 percent. (See fig. 7.) 
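The set-aside division described above reduces to simple arithmetic: at least 95 percent of a state's school improvement set-aside goes to districts, and the state retains at most 5 percent unless districts approve a larger share. A minimal sketch, in which the function name and dollar figure are invented for illustration:

```python
# Sketch of the NCLBA school improvement set-aside split described above:
# at least 95 percent to districts, at most 5 percent retained by the
# state unless districts approve more. Names and amounts are illustrative.
def split_set_aside(set_aside, state_share=0.05, districts_approved=False):
    """Return (amount allocated to districts, amount retained by the state)."""
    if state_share > 0.05 and not districts_approved:
        raise ValueError("retaining more than 5 percent requires district approval")
    retained = set_aside * state_share
    return set_aside - retained, retained

# A hypothetical $1 million set-aside under the default 95/5 split:
to_districts, retained = split_set_aside(1_000_000)
print(f"to districts: ${to_districts:,.0f}, retained: ${retained:,.0f}")
```

With district approval, `state_share` can exceed 0.05, mirroring cases such as New Mexico's agreement with its districts to retain the entire set-aside.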
For example, New Mexico officials told us that eligible districts agreed that the state could retain the entire set-aside amount to support a systematic reform model for school years 2006-2007 and 2007-2008. For participating schools, state officials paid a contractor to provide leadership and instructional training, reading and math interventions, and materials needed to support the interventions for schools identified for improvement. While most states retained the allowed 5 percent of the Title I school improvement funds to carry out their state responsibilities, only 23 states reported that they had fully implemented their statewide system of support. Of the remaining 28 states, 18 reported that their system was mostly implemented and 10 reported they had partially implemented their system. Education officials offered several reasons why states may not have fully implemented these systems. For example, some states may not have had enough funds to fully implement their statewide system of support. In other states, statewide strategies may only reach a portion of the schools identified for improvement because services are prioritized for the lowest-achieving schools and districts. In addition, officials said some states have experienced large increases in the number of schools identified for improvement, necessitating significant changes to their statewide system of support. Additionally, 21 states allocated some of their Title I school improvement funds to districts for district-level activities, including at least 1 state that may not have met NCLBA requirements for doing this. Districts have a major responsibility for providing technical assistance to schools identified for improvement. According to Education officials, while NCLBA does not explicitly set aside funds for district-level activities, it does allow for districts to use improvement funds to provide services to these schools.
In addition, Education officials said that funds can be used for building district capacity if the funds are focused on providing services for schools identified for improvement. In Massachusetts, for example, some funds supported district-level specialists who provided direct assistance to schools identified for improvement in areas such as data analysis and implementing the school improvement plan. In addition, according to Education officials, states have authority to use some of the 95 percent funds for districts that are identified for improvement but have no schools in improvement if the state determines that the amount of 95 percent funds exceeds the amount needed to provide assistance to schools identified for improvement. In this situation, a state may take excess funds from one district and give those funds to other districts based on state-determined need. Education officials told us that states must consult with districts before claiming unused funds and have evidence that these discussions took place. However, we found that 1 state may have allocated Title I improvement funds to districts that were identified for improvement but had no schools identified for improvement, without first determining that it had excess funds. We identified this issue through our site visits and are uncertain whether other states may have also done this. Education officials said they did not identify this issue during their recent monitoring visit. We referred this matter to Education, which is following up on it. Most states collect and track information on the use of school improvement funds. Forty-eight states reported on our survey that they collect information on the expenditure of Title I school improvement funds at least annually from schools, districts, or other sources. Twenty-four states reported collecting expenditure information on each school receiving improvement funds.
Other states reported collecting expenditure information from districts that provide aggregate information for all schools that received improvement funds in the district, rather than for each school receiving improvement funds. Seventeen states reported that district officials monitor school improvement funds by comparing activities that were funded to those identified in school improvement plans. For example, some district officials we visited said they compare school improvement expenditures to the school improvement plan before approving disbursements. Forty-five states reported that state or district officials conduct visits or monitor through other means how school improvement funds were expended and what school improvement activities were funded. State officials from 14 states reported that monitoring was conducted in multiyear cycles rather than annually or that a portion of schools were monitored annually. For example, as part of Ohio’s monitoring and review process, officials said that district cohorts are reviewed every 3 years with on-site reviews conducted at a minimum of 10 percent of the cohort. While most states monitor funds, 4 states were unable to make publicly available the complete list of schools receiving improvement funds, as required under NCLBA, because these states do not collect information on each school receiving improvement funds, and Education has not provided guidance on this requirement. Almost all states were able to provide a list of schools receiving funds to us, but 3 states—Arkansas, Florida, and North Carolina—provided information on districts that received funds, but could not provide information on which schools received funds, and California provided a partial list of schools that received funds. In a few cases, we found that non-Title I schools had inappropriately received Title I school improvement funds. 
State officials said that they would take steps to address this issue, and we referred this matter to Education, which is following up on it. Though Education monitors the allocation of school improvement funds through its 3-year Title I monitoring cycle, Education officials told us they had not uncovered these issues. In addition, Education does not regularly check when and whether states have made the lists of schools receiving improvement funds publicly available, as required, and has not provided guidance on how states should make these lists publicly available. Both the schools that received funds and the states have employed a range of improvement activities, and most states assess these activities by reviewing trends in student achievement data and obtaining feedback from district and school officials. At least 45 states reported that schools that received school improvement funds were involved in professional development, reorganizing curriculum or instructional time, or data analysis. Nearly all states reported that they assisted schools identified for improvement with school improvement plans and professional development, and officials in 42 states consider this assistance key to helping schools improve. To assess school improvement activities, 42 states reported that they track student achievement data or school performance trends, and 36 of those states also use feedback from school and district officials. Nearly all states reported on our survey that schools that received improvement funds in school year 2006-2007 were engaged in activities such as professional development and data analysis, and districts and schools we visited also cited these and other activities.
Forty-seven states reported that schools receiving improvement funds were taking part in professional development, 46 states said schools were reorganizing curriculum or instructional time, and 45 states reported that schools were using data analysis from the state’s assessment system or other assessments. School officials in each state we visited also cited using school improvement funds for professional development activities. For example, at one school in California, staff received intensive training in instructional strategies and data analysis software, which was designed to help teachers analyze instructional practices and provided teachers with specific steps to increase student achievement. In addition, schools and districts in every state we visited mentioned using coaches who are generally former principals, teachers, or other subject area specialists who work with school administrators or teachers. School officials in Michigan noted that coaches had served as a key resource in the development of school improvement plans. In some of the schools and districts we visited, officials pointed to the importance of examining test scores and student data in helping schools improve. For example, a school district in Ohio provided school leaders and teachers immediate access to test scores and other information such as curriculum, professional development resources, and student records online to help track student achievement. While over 40 states reported that they assisted schools identified for improvement with the school improvement plan, professional development, and data analysis, or provided help from school support teams, states generally reported providing more assistance to schools in later stages of improvement. (See fig. 8.)
In New Mexico, for example, all schools identified for improvement are required to conduct certain activities such as short cycle assessments several times a year, while schools in restructuring are also required to send staff to training in areas such as principal leadership. As another example, 44 states reported that they provided assistance from school support teams to schools in corrective action and restructuring, compared to 34 states that reported providing this assistance to schools in earlier stages of improvement. The only area in which states said they provided slightly more assistance to schools in earlier stages of improvement was helping with the school improvement plan. Forty-two states considered helping schools identified for improvement with the school improvement plan and professional development to be somewhat to very effective forms of state support. For example, many states provided schools and districts a template for improvement plans, which can help ensure some consistency in plans across the state. In Ohio, state officials showed us an electronic tool that they developed for both district improvement plans and school improvement plans that they said have been useful in aligning district and school improvement plans. Forty-two states reported that they tracked changes in student achievement data or school performance trends, and 36 of those states also used feedback from district and school officials to assess improvement activities. For example, Michigan officials said they require schools to provide student achievement data annually and to describe which improvement activities were working as well as what changes they planned to make. We also found that some states we visited conduct more extensive reviews of schools in corrective action and restructuring that include site visits, assessments, and observation of staff and leadership. Most districts and schools we visited also focus on student achievement data to assess activities. 
One school in Georgia has students take interim practice tests using questions similar to those of the state’s annual assessment to track students’ progress. The school has a “data room” that has test scores and other data by grade level and subgroups displayed in lists, graphs, and charts to track progress and serve as a visual reminder of its overall goals. (See fig. 9.) At the district and school levels, officials in every state we visited emphasized the importance of using the school improvement plans to identify specific actions and goals, and many use the plan to monitor progress and make adjustments as needed. Twenty-four states reported that they conduct evaluations at the state, district, or school level to assess activities. Information provided by some states indicated that these assessments were not in line with Education’s definition of high-quality reviews of educational effectiveness but did include approaches to assess activities and track school improvement. In some cases, states we visited told us they are working with or plan to work with an independent evaluator or other entity to conduct a more formal evaluation of school improvement activities. Education directly supports states with school improvement through written guidance, staff assistance, policy letters, and information provided at national conferences. In July 2006, Education published nonregulatory guidance on district and school improvement that updated and expanded its earlier guidance in this area. Education staff also provide direct assistance by responding to states’ questions. In some cases, Education officials said they send policy letters to individual states to address state-specific questions and post the letters on Education’s Web site. For example, one state requested clarification from Education on allocating Title I school improvement funds to districts, and Education responded with a policy letter.
Education also provides guidance and disseminates information through national conferences such as the annual Title I National Conference. In addition to direct support, Education provides a number of technical assistance and research-related resources to assist states, districts, and schools in their school improvement efforts. These include the Comprehensive Centers Program, Regional Education Laboratories, the Center for Comprehensive School Reform and Improvement, the What Works Clearinghouse, and a new Doing What Works Web site. (See fig. 10.) Education provides a number of services to states through its Comprehensive Centers Program—consisting of 16 regional centers and 5 content centers. The regional centers are located across the country and provide training and technical assistance to address state needs and priority areas, which largely focus on school improvement. Each of the 5 content centers focuses on one of the following areas: accountability, instruction, teacher quality, innovation and improvement, or high schools. The content centers provide expertise, analysis, and research in the five content areas. According to an Education official, a key focus of the comprehensive centers is helping states build their statewide systems of support. Currently, the comprehensive centers have 8 regional initiatives and 35 individual state initiatives related to this topic. In addition, there are 2 regional initiatives and 26 individual state initiatives to address district and school improvement. One content center, the Center on Innovation and Improvement, provides a variety of services related to school improvement. The center gathers data and information on districts and schools making sustained gains to identify successful improvement strategies.
It has developed two guides on this topic, a Handbook on Restructuring and Substantial School Improvement and a Handbook on Statewide Systems of Support, which it has distributed to regional centers, state educational agencies, and other organizations. The center also facilitates information sharing on school improvement topics through its annual 2-day training for representatives of the regional centers and additional workshops throughout the year. In addition, the center collaborates with the Council of Chief State School Officers to issue monthly School Improvement e-newsletters, which focus on school improvement efforts at the state and district levels. Education also compiles and disseminates relevant research on effective educational interventions. Education operates 10 Regional Education Laboratories to provide research on a variety of topics, such as statewide systems of support and factors that have helped schools make AYP. The laboratories are also available to provide assistance to any entity, such as school districts or schools, if they request assistance. Education also funds the Center for Comprehensive School Reform and Improvement to assist schools and districts in implementing comprehensive school reform and improvement by providing information about research-based strategies and assistance in using that information to make changes. In addition, Education developed the What Works Clearinghouse to review studies of educational interventions to determine which studies were conducted with a sound methodology and to what extent the interventions are effective. In November 2007, Education implemented a Doing What Works Web site to help educators adapt and use the research-based practices identified by the What Works Clearinghouse. 
State officials reported that Education’s written guidance, national meetings or conferences, and comprehensive centers were the most helpful forms of assistance, and the What Works Clearinghouse was relatively less helpful. (See fig. 11.) For example, in several states we visited, state officials told us that comprehensive centers have been helpful in areas such as building state school improvement capacity and facilitating discussions with other states. Although 15 states reported that the What Works Clearinghouse was moderately to very helpful, 20 states reported that it provided some to no help. District officials said that it has not been useful, in part because it is difficult to translate the research on the What Works Clearinghouse into practical application at the classroom level. Almost all states also reported that they could benefit from additional assistance from Education. Fifty states reported that they could benefit from more tool kits and sample documents, and 48 states said they could benefit from more national or regional conferences to share lessons learned and promising practices. Education officials said that they have also heard that states want more opportunities to share information and are looking for ways to do this. Forty-three states also reported that they could use additional assistance in evaluating the effectiveness of school improvement activities, and 42 states said they could benefit from more help with monitoring and assessing school improvement activities. Education has taken steps to address concerns about the What Works Clearinghouse and to provide additional resources aimed at addressing areas in which states want more help. An Education official told us that the recently implemented Doing What Works Web site is aimed at helping educators adapt and use the research-based practices on the What Works Clearinghouse.
To do this, the Doing What Works Web site provides the following: (1) information to help make the research on effective practices more understandable for educators, (2) links to real-life examples such as interviews with teachers and pictures from classrooms to help show the practices in action, and (3) tools and resources that educators can use in their own planning and training efforts. With regard to providing more help with monitoring and evaluating school improvement activities, Education officials told us that they plan to collect additional information about successful school improvement practices as part of the new school improvement grants—authorized under NCLBA and funded for the first time in 2007. States receiving these grants will be required to track and report outcomes such as increased student proficiency and how school improvement activities helped schools improve. Education plans to compile this information and discuss this topic at a meeting of state Title I directors in early 2008. In addition, the Administration, as part of its proposed revision to the school improvement section of NCLBA, is recommending that Education be allowed to reserve up to 1 percent of Title I funds to conduct research, evaluation, and dissemination activities related to effective school and district improvement activities. While the hold-harmless provision is intended to shield districts from receiving less in Title I funds than in the previous year as a result of the school improvement set-aside, we found some evidence that it may be preventing some of the neediest schools, those facing the greatest challenges in improving their students’ academic achievement, from obtaining these funds. When states cannot set aside the full 4 percent of Title I for school improvement, their ability to target funds at the lowest-performing schools is diminished.
Effectively, the hold-harmless provision prioritizes preserving the Title I funding of all eligible Title I districts over ensuring that the lowest-performing schools receive funds for school improvement. Furthermore, the variability from year to year in state Title I funds can affect some states’ ability to sustain a steady stream of support for low-performing schools. Removing the hold-harmless provision, as Education has proposed, would clearly increase states’ ability to target improvement funds to the lowest-performing schools. However, while Education points out that set-aside funds come from districts with increasing Title I allocations, it is still not known how removing the hold-harmless provision would affect those districts protected by it. It would be helpful for Congress as it deliberates reauthorization of NCLBA to know the characteristics of districts that contribute to the set-aside compared to those that are protected by the hold-harmless provision, particularly in terms of student characteristics and school performance. Thousands of schools have received Title I school improvement funds intended to help schools raise student achievement, and states have generally targeted these funds to schools with the most persistent achievement problems. However, without additional monitoring steps by Education to ensure that states are appropriately allocating funds for district-level activities and prioritizing funds to the lowest-achieving schools, some schools most in need of assistance may not receive funding. While Education monitors every state’s improvement program every 3 years, it has not uncovered several compliance issues that we identified.
Further, because some states do not track which schools receive improvement funds and could not make this information publicly available, as required under NCLBA, Education and others have not been in the best position to ensure that school improvement funds are used only for Title I schools and targeted to the lowest-performing schools. Ensuring that states track which schools receive improvement funds and can make this information publicly available enhances transparency and accountability, and better enables the public, Education, and states to track compliance and progress. To enhance state efforts to target improvement funds to schools most in need of assistance, we are making the following three recommendations to the Secretary of Education: To further support the department’s proposal to eliminate the hold-harmless provision, develop an analysis comparing the characteristics of districts that contribute to the set-aside with those protected by the hold-harmless provision. Such an analysis could identify differences in school performance or student characteristics. Review the Title I monitoring process to ensure that steps are in place to verify that states comply with NCLBA requirements for allocating school improvement funds to districts for district-level activities and prioritizing funds to the lowest-performing schools. Ensure that states track which schools receive improvement funds and can comply with the requirement to make publicly available a list of all schools receiving Title I improvement funds, by providing guidance to clarify when and how this information is to be made available and by monitoring state compliance. We provided a draft of this report to the Department of Education for review and comment. In its written response, included as appendix II, Education agreed with our three recommendations.
Specifically, Education agreed to explore options to determine the types of analyses that would be helpful to inform the debate on eliminating the hold- harmless provision. Education also agreed to review its monitoring process and consider changes to gather additional evidence on whether school improvement funds are being allocated and prioritized as required by statute. In addition, Education agreed that it will explore options for providing guidance to states on the NCLBA requirement that states make publicly available a list of all schools receiving Title I school improvement funds. Education also identified some of the steps it has taken to collect additional information on the allocation and use of school improvement funds and to identify successful school improvement strategies. Copies of this report are being sent to the Secretary of Education, relevant congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be made available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other contacts and major contributions are listed in appendix IV. To address the objectives of this study, we used a variety of methods. To obtain nationally representative information on states’ school improvement funding, types of activities being funded, and federal assistance, we administered a survey to state education agency officials in all 50 states and the District of Columbia. To get a national perspective of schools in improvement, we conducted descriptive analyses of characteristics of schools that received improvement funds and compared them to all schools identified for improvement and all other Title I schools nationwide. We also conducted site visits during which we interviewed state, district, and school officials representing 5 states and 12 school districts within these states. 
We spoke with officials at Education involved in oversight and distribution of school improvement funds and reviewed Education’s data on schools identified for improvement. We also interviewed several experts in the field of school improvement. We reviewed relevant federal laws, regulations, and agency guidance. We conducted our work from January 2007 through February 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To better understand states’ school improvement efforts, particularly how states are allocating and tracking school improvement funds and activities, we designed and administered a survey to state education agency officials in all 50 states and the District of Columbia between July and October 2007 and had a 100 percent response rate. The survey included questions on the amount of Title I improvement funds states have reserved and expended, what other federal or state funds are being used, what types of improvement activities are being funded, how activities are being monitored and assessed, and assistance received from Education. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pretesting draft instruments and following up with states to discuss questionable responses. Specifically, during survey development, we pretested draft instruments with officials in Rhode Island, Ohio, Illinois, Montana, and Florida between May and June 2007. 
In the pretests, we were generally interested in the clarity of the questions and the flow and layout of the survey. For example, we wanted to ensure that definitions used in the surveys were clear and known to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section were appropriate. On the basis of the pretests, the survey instrument underwent some slight revision. A second step we took to minimize nonsampling errors was contacting state officials via phone and e-mail to follow up on obvious inconsistencies, errors, and incomplete answers. We also performed computer analyses to identify inconsistencies in responses and other indications of error. In addition, a second independent analyst verified that the computer programs used to analyze the data were written correctly. For our analysis, we used data from three sources—state-provided data on schools that received Title I improvement funds in each state, Education’s Common Core of Data (CCD), and Education’s Consolidated State Performance Reports (CSPR). For comparison, we created three discrete groups of schools: (1) schools identified for improvement that received funds and services, (2) schools identified but not receiving funds and services, and (3) all other Title I schools that were not identified for improvement for school year 2005-2006. To obtain information on the characteristics of schools receiving school improvement funds, we requested information from each state on the schools identified for school choice, supplemental educational services, corrective action, or restructuring in their respective state that received Title I set-aside school improvement funds or services pursuant to §1003(a) of the No Child Left Behind Act (NCLBA) during the 2004-2005, 2005-2006, and 2006-2007 school years. 
We also asked states to indicate the percentage of students from families with incomes below the poverty line for each school that received improvement funds during the 3-school-year time frame. In addition, we asked states to provide information on each school, including (1) the school’s full name; (2) the school’s address, city, and state; (3) the school’s district name; (4) the school’s National Center for Education Statistics (NCES) school identification number; (5) the school’s year of improvement under NCLBA: first year of improvement (school choice), second year of improvement (school choice and supplemental educational services), third year of improvement (school choice, supplemental educational services, and corrective action), fourth year of improvement (school choice, supplemental educational services, and plan for restructuring), or fifth year of improvement (school choice, supplemental educational services, and implementing a restructuring plan); and (6) where possible, the amount of Title I set-aside funds (and any other federal or state improvement funds, if applicable) that each school received. Three states were unable to provide this information, and 1 state provided partial information, so our data on school characteristics are presented only for those states that provided this information. We reviewed the lists of schools receiving improvement funds for obvious inconsistencies, errors, and completeness. When we found discrepancies, we brought them to the attention of state officials and worked with them to correct the discrepancies before conducting our analyses. On the basis of these efforts, we determined that the data were sufficiently reliable for the purposes of this report. Our other two data sources were Education’s CCD and CSPR.
The CCD is a program of Education’s National Center for Education Statistics that annually collects data from state education agencies about all public schools, public school districts, and state education agencies in the United States. At the time we began our analysis, the latest CCD data available were from the 2005-2006 school year. Although we based our analysis on schools in improvement in 2006-2007, the characteristics were based on those of the prior year. To compare the characteristics of schools that received improvement funds to those of all schools in improvement and all Title I schools for 2005-2006, we used data from CSPR, which is the required data tool for each state, the District of Columbia, and Puerto Rico and contains lists of schools identified for improvement by state. The CSPR also provides each school’s nationally unique identification number, allowing us to link data on these schools with data provided in the CCD. For our analysis, we excluded Puerto Rico, Arkansas, Florida, and North Carolina because they could not provide information on which schools received funds. In addition, we did not have complete information from California because state officials provided a partial list of schools that received funds. We compared schools in improvement from the CCD for school year 2005-2006 with all other Title I eligible schools not identified for improvement. We also compared states’ lists of schools in improvement that received funds or services from the 2005-2006 CSPR with lists of schools that were in improvement but did not receive funding or services. We performed a series of tests and took additional steps as needed to assess the reliability of the data used. Specifically, we assessed the reliability of the data by (1) examining the data for obvious inconsistencies, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. 
We determined that the data were sufficiently reliable for the purposes of this report. To understand school improvement funding and implementation at the local level, we conducted site visits to 5 states between April and October 2007, visiting 12 districts and 22 schools within these states. The states we chose were California, Georgia, Michigan, New Mexico, and Ohio, selected based on their high percentages of schools identified for improvement, variation in Title I set-aside funding allocation methods and administrative structures, and geographic diversity. We interviewed state officials about states’ efforts to allocate federal and state school improvement funds and provide assistance to schools identified for improvement, as well as about Education’s assistance to states. In 4 of the 5 states, we met with officials from 2 school districts each, and in Michigan, we met with officials from 4 school districts, for a total of 12 school districts, as shown in table 8. The 12 districts were selected to provide variety in demographics, geographic location, and stages of improvement. During the site visits, we interviewed state and district officials as well as officials representing 22 schools, including principals, teachers, and other school staff involved with school improvement activities, in order to provide in-depth information and illustrative examples of our more general findings. The selected schools represented varying stages of improvement, grade levels served, and locales. While, in many cases, district officials selected the schools we visited, we instructed state and district officials to consider each school’s stage of improvement and percentage of economically disadvantaged students, among other characteristics. Through our interviews with state, district, and school officials, we collected information on school improvement funding, school improvement activities being undertaken, and state and district assistance to schools identified for improvement.
To learn more about Education’s oversight of Title I school improvement funds and efforts to assist states in implementation of school improvement provisions, we conducted interviews with representatives of Education’s offices of Student Achievement and School Accountability Programs; Planning, Evaluation, and Policy Development; the Institute of Education Sciences; the Office of School Support and Technology Programs; and the Office of General Counsel. In addition, we interviewed experts on school improvement, including those at the American Institutes for Research, the Center on Education Policy, the Council of Chief State School Officers, and the National Governors Association. We also reviewed several studies on school improvement funding and activities.

Bryon Gordon, Assistant Director, and Laura Heald, Analyst-in-Charge, managed the assignment. Cheri Harrington, Cara Jackson, Charlene Johnson, and Nathan Myers made significant contributions to this report in all aspects of the work. Shannon Groff and Ayeke Messam provided assistance in data collection; Cathy Hurley, Stuart Kaufman, and Jean McSween provided analytical assistance; Charlie Willson provided assistance on report preparation; Sheila McCoy provided legal support; Tina Cheng and Mimi Nguyen developed the report’s graphics; and Lise Levie verified our findings.

Related GAO Products

No Child Left Behind Act: Education Should Clarify Guidance and Address Potential Compliance Issues for Schools in Corrective Action and Restructuring Status. GAO-07-1035. Washington, D.C.: September 5, 2007.
Teacher Quality: Approaches, Implementation, and Evaluation of Key Federal Efforts. GAO-07-861T. Washington, D.C.: May 17, 2007.
No Child Left Behind Act: Education Actions May Help Improve Implementation and Evaluation of Supplemental Educational Services. GAO-07-738T. Washington, D.C.: April 18, 2007.
No Child Left Behind Act: Education Assistance Could Help States Better Measure Progress of Students with Limited English Proficiency. GAO-07-646T. Washington, D.C.: March 23, 2007.
Reading First: States Report Improvements in Reading Instruction, but Additional Procedures Would Clarify Education’s Role in Ensuring Proper Implementation by States. GAO-07-161. Washington, D.C.: February 28, 2007.
No Child Left Behind Act: Education Actions Needed to Improve Implementation and Evaluation of Supplemental Educational Services. GAO-06-1121T. Washington, D.C.: September 21, 2006.
No Child Left Behind Act: Education Actions Needed to Improve Local Implementation and State Evaluation of Supplemental Educational Services. GAO-06-758. Washington, D.C.: August 4, 2006.
No Child Left Behind Act: States Face Challenges Measuring Academic Growth. GAO-06-948T. Washington, D.C.: July 27, 2006.
No Child Left Behind Act: Assistance from Education Could Help States Better Measure Progress of Students with Limited English Proficiency. GAO-06-815. Washington, D.C.: July 26, 2006.
No Child Left Behind Act: States Face Challenges Measuring Academic Growth That Education’s Initiatives May Help Address. GAO-06-661. Washington, D.C.: July 17, 2006.
No Child Left Behind Act: Improved Accessibility to Education’s Information Could Help States Further Implement Teacher Qualification Requirements. GAO-06-25. Washington, D.C.: November 21, 2005.
No Child Left Behind Act: Education Could Do More to Help States Better Define Graduation Rates and Improve Knowledge about Intervention Strategies. GAO-05-879. Washington, D.C.: September 20, 2005.
No Child Left Behind Act: Most Students with Disabilities Participated in Statewide Assessments, but Inclusion Options Could Be Improved. GAO-05-618. Washington, D.C.: July 20, 2005.
No Child Left Behind Act: Education Needs to Provide Additional Technical Assistance and Conduct Implementation Studies for School Choice Provision. GAO-05-7. Washington, D.C.: December 10, 2004.
No Child Left Behind Act: Improvements Needed in Education’s Process for Tracking States’ Implementation of Key Provisions. GAO-04-734. Washington, D.C.: September 30, 2004.
No Child Left Behind Act: Additional Assistance and Research on Effective Strategies Would Help Small Rural Districts. GAO-04-909. Washington, D.C.: September 23, 2004.
No Child Left Behind Act: More Information Would Help States Determine Which Teachers Are Highly Qualified. GAO-03-631. Washington, D.C.: July 17, 2003.

Under the No Child Left Behind Act (NCLBA), the federal government provides millions of dollars annually to assist schools that have not met state academic goals. In the 2006-2007 school year, over 10,000 such schools were identified for improvement. NCLBA requires states to set aside 4 percent of their Title I funds to pay for school improvement efforts. GAO was asked to determine (1) the extent to which states have set aside these funds and used other resources for school improvement, (2) which schools received improvement funds and the extent to which funds are tracked, (3) the activities states and schools have undertaken and how activities are assessed, and (4) how Education supports states' improvement efforts. GAO administered a survey to state education officials and received a 100 percent response rate, matched survey data to an Education database, and conducted site visits to five states. A statutory requirement, known as a hold-harmless provision, has limited some states' ability to target the full 4 percent of Title I funds for school improvement to low-performing schools. However, many states have used other federal and state funds for this purpose. While the hold-harmless provision is designed to protect school districts from reductions in their Title I funding, it has also kept 22 states from setting aside the full portion of Title I school improvement funds since 2002 because they did not have enough funds to do so after satisfying the hold-harmless provision. To address this, Education has proposed repealing the hold-harmless provision.
However, it is not known how removing this provision would affect districts protected by it. In addition to Title I funds, 38 states have dedicated other federal funds, and 17 have contributed state funds for school improvement. Though states generally target improvement funds to the most persistently underperforming schools, some states did not fulfill key NCLBA requirements. Specifically, 4 states did not follow all requirements to ensure that schools most in need of assistance received funds. Although Education monitors how states allocate improvement funds, it did not identify this issue. Also, 4 states were unable to provide a complete list of schools that received improvement funds, as required by law. Education has not provided guidance on this requirement and does not monitor compliance with it. Schools and states are engaged in a variety of improvement activities, and most states use student data and feedback to assess activities. Most states reported that schools receiving improvement funds used the funds for professional development and for reorganizing curriculum or instruction time. Nearly all states assisted schools with school improvement plans and professional development. Most states use student achievement data and feedback from schools and districts to assess improvement activities. Education provides a range of support for school improvement, including technical assistance and research results. Nearly all states want more help, such as more information on promising improvement practices. Education has a new Web site to provide additional resources and plans to collect more information on promising practices through a new grant program.
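The record linkage described in the methodology, matching each state's list of funded schools to CCD records through the school's NCES identification number and then sorting Title I schools into the three comparison groups, can be sketched in a few lines of Python. This is an illustrative sketch only: the IDs, school names, flags, and the classify helper are invented for the example and do not come from GAO's data or programs.

```python
# Sketch of the three-way grouping used in the analysis (illustrative data).
# Each CCD record is keyed by its NCES school ID; 'title1' and 'in_improvement'
# are hypothetical flags standing in for fields derived from the CCD and CSPR.
ccd = {
    "0600001": {"name": "School A", "title1": True,  "in_improvement": True},
    "0600002": {"name": "School B", "title1": True,  "in_improvement": True},
    "0600003": {"name": "School C", "title1": True,  "in_improvement": False},
    "0600004": {"name": "School D", "title1": False, "in_improvement": False},
}

# State-provided list of improvement schools that received section 1003(a) funds.
funded_ids = {"0600001"}

def classify(nces_id, rec):
    """Place a Title I school into one of the three comparison groups."""
    if not rec["title1"]:
        return None  # non-Title I schools fall outside the comparison
    if rec["in_improvement"]:
        return "funded" if nces_id in funded_ids else "identified_unfunded"
    return "other_title1"

groups = {}
for nces_id, rec in sorted(ccd.items()):
    g = classify(nces_id, rec)
    if g:
        groups.setdefault(g, []).append(rec["name"])

for g in ("funded", "identified_unfunded", "other_title1"):
    print(g, groups.get(g, []))
```

Joining on the NCES number rather than on school names avoids the ambiguity of name spellings that differ across state files, which is why the methodology asked states for that identifier.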
The services’ O&M appropriation gives them the funds to carry out day-to-day activities, such as recruiting and fielding a trained and ready force, equipment maintenance and repair, child care and family centers, transportation services, civilian personnel management and pay, and maintaining the infrastructure to support the forces. Table 1 shows the Army’s and the Air Force’s O&M budget requests, amounts received, and amounts obligated for fiscal years 1993 through 1995. The services’ annual O&M budget requests to Congress are presented in four broad categories referred to as budget activities: operating forces, mobilization, training and recruiting, and administrative and servicewide activities. Each budget activity is further broken down into activity groups which, in turn, are further broken down into subactivity groups. For example, the Army’s operating forces budget activity is divided into two activity groups (land forces and land operations support), and each of the activity groups is made up of various subactivity groups. To illustrate, the land forces activity group consists of eight subactivity groups: combat units, tactical support, theater defense forces, forces related training, force communications, depot maintenance, Joint Chiefs of Staff (JCS) exercises, and base support. Table 2 is an example of the budget presentation document for the Army’s operating forces budget activity, activity groups, and subactivity groups for fiscal year 1995. The budget subactivity groups are further broken down into program element codes. Although the codes are not part of the budget presentation to Congress, they are maintained as part of the services’ funds control process and provide more details about how the services plan to spend the funds received. For example, within the Army’s subactivity group—combat units—there are three program element codes that show how the Army plans to use the funds designated for combat units. 
There are similar program elements in the four budget activities. For example, base operations is a program element that is in most budget activities. Therefore, by grouping similar program elements, it can be determined how much was budgeted and obligated for a particular function. In performing our analysis, we placed all the program element codes into the following four categories: combat forces and support of the forces, training and recruitment, management and command activities, and base support. For each of these four categories, we listed the budgeted and obligated amounts. See appendixes II and III for a complete listing of the program element codes that comprise each of these four major categories. For fiscal years 1993 through 1995, funds requested for combat forces and support of the forces ranged from $3.6 billion to $4.5 billion for the Army. During the same period, the amount of O&M funds the Army obligated for these program element codes ranged from $3.2 billion to $4.2 billion. For the Air Force, funds requested for combat forces and support of the forces ranged from $4.7 billion to $5.1 billion and the amount obligated ranged from $5 billion to $5.3 billion. As shown in table 3, the amounts obligated have increased slightly during the 3-year period, but, at their highest level, still only represent 21.5 percent of the Army’s and 26.8 percent of the Air Force’s total O&M obligations. As shown above, over the past 3 years, the Air Force obligated about $700 million more for its combat forces than was requested. In contrast, the Army obligated about $900 million less than the amounts requested. The fact that the Army did not obligate all the funds that it requested for its combat forces suggests that funds initially requested for combat forces were used to fund other O&M programs. Training and recruiting are essential elements in maintaining a ready force.
Our analysis showed that the amounts obligated for these purposes were routinely less than the amounts the services requested. As shown in table 4, the Army’s and the Air Force’s requests for the program elements related to training and recruiting in fiscal years 1993 through 1995 ranged from $2 billion to $2.2 billion and from $1.5 billion to $1.9 billion, respectively. Although the amounts obligated have increased slightly during the 3-year period, at their highest level they still only represented about 10.8 percent of the Army’s and 9.1 percent of the Air Force’s O&M budgets. Table 5 shows the types of training and recruiting activities for which the Army and the Air Force obligated O&M funds in fiscal year 1995. In most cases, the amounts requested and obligated were fairly close. However, overall, the Army and the Air Force obligated less than they requested for their training and recruiting activities. For fiscal years 1993 through 1995, the Army obligated about 33 percent of its total O&M funds for base support and the Air Force about 30 percent. Table 6 shows the amounts requested and obligated for base support program elements during the 3-year period under review. Although the amounts have decreased slightly from the high point during the period under review, the services continue to devote a substantial portion of their O&M budgets to maintaining the infrastructure. Program elements in the base support category include base communications, environmental compliance, real property maintenance and real property services activities, child and family services, leases, maintenance of nontactical equipment, food service programs, and community and morale support activities. Table 7 shows the amounts requested and obligated for the above base operations program elements in fiscal year 1995.
Of particular interest is the amount the services obligated for base maintenance and repair—about $974 million obligated versus about $793 million requested for the Army and about $2.4 billion obligated versus $1.3 billion requested for the Air Force. In fiscal year 1995, Congress reduced the Army’s and the Air Force’s requests by $100 million each. Therefore, the fact that the services obligated more than requested cannot be explained by Congress appropriating more than was requested. The more likely explanation is that O&M funds from other programs were reprogrammed to the base maintenance and repair accounts. Another area where the services obligated more than was requested was environmental activities. Whereas the Army requested about $465 million for these activities, it obligated over $1.1 billion. The Air Force also obligated significantly more of its O&M funds for environmental activities than it requested—about $774 million versus $404 million. The reason for this is that the requested amounts do not include funds transferred from DOD to the services under DOD’s environmental account. From fiscal year 1993 through fiscal year 1995, the Army obligated approximately 37 percent of its O&M funds for management, command, and servicewide activities. The Air Force obligated about 35 percent of its O&M funds for the same purposes. Table 8 shows the amounts requested and obligated for management, command, and servicewide program elements during the 3-year period under review. Although the amounts obligated by the Army decreased from fiscal years 1993 to 1994, fiscal year 1995 obligations show a reversal of that trend. Air Force obligations rose steadily over this same time period. As shown in table 9, Army headquarters and command management activities accounted for $770 million, or 11 percent, of the total obligations in the management, command, and servicewide activities grouping.
The $770 million included obligations to operate the headquarters activities of the Army’s specified commands as well as the operations of the departmental headquarters located in Washington, D.C., and other headquarters activities, such as the Corps of Engineers, public affairs, and space activities. The Air Force obligated about $654 million, or about 9 percent, of its management, command, and servicewide activities funds for similar programs. Although it would not be expected that the amounts obligated would necessarily agree with the amounts requested, certain obligation trends seem to emerge. This is particularly true for the Army, which historically obligates less for its combat forces than it requests, even when appropriated more than requested. Conversely, the opposite trend emerges for the infrastructure and management accounts. One reason obligations exceed the amounts requested may be that Congress appropriated more than requested. However, another reason could be that funds initially requested for other O&M accounts were reprogrammed to the infrastructure and management accounts. We believe that this report will assist congressional decisionmakers by showing how obligation patterns differ from the budget estimates submitted to Congress. In commenting on a draft of this report, DOD said that all of the Army’s special activities programs (see app. II) should not have been included in the management, command, and servicewide activities category. DOD said that special activities include items such as the Army battle simulation center, Louisiana maneuvers, Army flying hour program, combat training centers, training range operations, soldier modernization, and contingency operations and that these items should have been included in the combat forces category.
DOD further stated that the only portion of special activities that was properly included in the management, command, and servicewide activities category was subsistence-in-kind, which has been about $250 million to $300 million a year. DOD contended that including all of these programs in the management, command, and servicewide activities category led us to conclude that this category was too large in comparison to the combat forces budget. DOD’s response included a level of detail beyond the program element level that was not available to us during our review of the activities included in the special activities category or other O&M subaccounts. For that reason, our analysis was based on the program element level that was available for all O&M subaccounts. We categorized special activities as management, command, and servicewide activities because of the Army’s definition of the special activities program element. The Army defines special activities as units with a predominantly peacetime mission. After analyzing obligation data and the description of the type of activities DOD said are included in the special activities program element, with minor exceptions, we do not agree with DOD’s assertion that these activities should have been categorized as combat forces activities. If we had had the detailed information during our review, we would have listed the Louisiana maneuvers, battle simulation center, and combat training center as training activities in the training and recruiting category. We would also have listed the cost of training range operations and soldier modernization in the base support category. The above activities accounted for about $181.3 million of the $1.5 billion of fiscal year 1995 obligations incurred in the special activities category. The Army flying hour program in the special activities program element accounted for $13.3 million out of a total Army flying hour program of $848.6 million in fiscal year 1995. 
Furthermore, the flying hour program activities included in the special activities program element consisted of aircraft operating costs associated with (1) transporting headquarters command personnel, (2) the multinational force observers in Egypt, and (3) training Army pilots in newly fielded aircraft. The only portion of the $13.3 million that possibly should be included in the combat forces category is the costs associated with the multinational force observers in Egypt. In our opinion, the flying hour program costs for transporting headquarters command personnel should remain in the management, command, and servicewide activities category, and the costs for training Army pilots in newly fielded aircraft should be shown in the training and recruiting category. The contingency operations that DOD said were in the special activities program element, and which it believed should be categorized as a combat force activity, incurred obligations totaling $633.2 million in fiscal year 1995. Of this total, $456.1 million was for operations to restore democracy in Haiti. In our opinion, such operations do not contribute to the day-to-day training of the combat forces to prepare them to carry out their national security mission. Therefore, we believe that these obligations were properly categorized as a management, command, and servicewide activity. There were other obligations, totaling about $177.1 million, for contingency operations that probably should be listed in the combat forces category. However, including these obligations in the combat forces category would not have a significant effect on the percentage of total obligations incurred for combat forces in fiscal year 1995—an increase from 21.5 percent to 22.4 percent. Finally, it should be noted that the report does not conclude, as suggested by DOD, that the Army budget for management, command, and servicewide activities was too large in comparison to the combat forces budget.
The report reaches no conclusions about the relative significance of the combat forces budget as compared to the budget for management, command, and servicewide activities. DOD also commented that it disagreed with our statement that only one-third of the Army’s and the Air Force’s O&M funds are obligated for combat forces, including training and recruitment. DOD said that some noteworthy examples of activities that we considered to be support activities, but that it believes contribute directly to combat readiness, include the national foreign intelligence program, national space assets, early warning systems, significant airlift costs, warfighting logistics, and command and control systems. We agree that the activities cited by DOD contribute to readiness, as do many other O&M funded activities. The point of our report was not that certain O&M funded activities do or do not contribute to readiness. Instead, the report was intended to identify those expenditures that could be directly related to combat forces as opposed to infrastructure activities. In categorizing the various costs, we used criteria developed by the Institute for Defense Analyses, the JCS, and the Office of Program Analysis and Evaluation for defining defense missions and infrastructure. In essence, the criteria provide that, if the activity deploys with the combat force, it is defense mission related. If, on the other hand, it operates from a fixed location, it is infrastructure. Based on these criteria, we believe that our categorizations of what is combat force related and what is infrastructure related are appropriate. DOD further commented that it did not agree with the report’s categorization of noncombat activities. DOD said that many of these activities enhance combat capability and are not merely “nice to have” programs that are funded at the expense of combat elements. We used the term noncombat to distinguish the “other than combat” activities from the category labeled “combat forces” activities.
The purpose of this delineation was to illustrate what portion of the O&M funds were used for those O&M activities that could be directly related to the combat forces as opposed to those O&M activities that were infrastructure related. This categorization was not, in any way, intended to imply that the infrastructure related activities do not contribute to combat capability. To avoid confusion, we deleted the word “noncombat” from the report section. DOD also provided suggested clarifications on other issues cited in the report. We considered the clarifying information and made changes where deemed appropriate. DOD’s comments are presented in their entirety in appendix IV. The scope and methodology of our review are discussed in appendix I. We are sending copies of this report to the Secretaries of the Army, the Navy, and the Air Force; the Director, Office of Management and Budget; and the Chairmen of the House Committee on Government Reform and Oversight, the Senate Committee on Governmental Affairs, the House and Senate Committees on Appropriations, the House Committee on National Security, the Senate Committee on Armed Services, and the Senate Committee on the Budget. Please contact me at (202) 512-5140 if you have any questions concerning this report. Major contributors to this report are listed in appendix V. To obtain the budget and obligation data and an explanation of program elements and how they are used in the budget process, we held discussions with Army, Navy, and Air Force financial management officials. We also reviewed the Army’s manual on financial management, which provided definitions of the program elements associated with the operation and maintenance (O&M) appropriation. The Air Force did not have a similar manual but did provide us with some program element definitions.
We also reviewed the Institute for Defense Analyses’ (IDA) recent paper, A Reference Manual for Defense Mission Categories, Infrastructure Categories, and Program Elements, to help us categorize the program element codes. To evaluate how the services are using their O&M funds, we asked them to provide us with their budget requests, appropriations, and obligations by program element code for fiscal years 1993 through 1995. The Navy was unable to provide any information at the program element level because that type of data is not maintained at the headquarters level. As a result, we did not include the Navy in our analysis. Also, the Army and the Air Force were unable to provide appropriation data by program element code because neither service allots the appropriation below the subactivity level. Using the Army and the Air Force databases, we divided the program element codes into four broad categories: combat forces and support of combat forces, training and recruiting, base support, and management, command, and servicewide activities. The rationale for these categorizations was the services’ program element definitions as well as definitions developed by IDA, the Office of Program Analysis and Evaluation, and the Joint Chiefs of Staff (JCS). The IDA and Office of Program Analysis and Evaluation divided the services’ program elements into two broad classes—defense missions and infrastructure. Defense missions were defined as those activities that produced the expected outputs of the Department of Defense as well as activities that directly support the mission by deploying with combat forces. Both the IDA and the JCS defined infrastructure as those activities that are operated from a fixed location. All program elements were then placed in the appropriate category even though the program element may have been categorized differently in the service’s budget document. 
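The categorization approach described above can be sketched as a simple rule-based classifier. The keyword rules and program element names below are purely illustrative assumptions; they are not the actual IDA, JCS, or Office of Program Analysis and Evaluation definitions, which rely on published criteria rather than keyword matching.

```python
# Illustrative sketch of assigning program elements to the four broad O&M
# categories used in this review. Keyword rules are hypothetical, not the
# actual IDA/JCS/PA&E definitions.

CATEGORY_KEYWORDS = {
    "combat forces and support of combat forces": ("combat", "tactical", "missile"),
    "training and recruiting": ("training", "recruiting"),
    "base support": ("base", "environmental", "real property", "minor construction"),
    "management, command, and servicewide activities": ("headquarters", "management"),
}

def categorize(program_element: str) -> str:
    """Place a program element in the first category whose keywords match."""
    name = program_element.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in name for keyword in keywords):
            return category
    # Elements matching no rule would need manual review against the definitions.
    return "uncategorized"

def percent_by_category(obligations: dict[str, float]) -> dict[str, float]:
    """Share of obligated funds falling in each category, as computed in the report."""
    total = sum(obligations.values())
    shares: dict[str, float] = {}
    for element, amount in obligations.items():
        category = categorize(element)
        shares[category] = shares.get(category, 0.0) + amount
    return {cat: round(100 * amt / total, 1) for cat, amt in shares.items()}
```

Note that rule order matters in a scheme like this (an element named "Management Headquarters (Training)" would match the training keywords first), which is one reason the actual review placed each program element individually using the services' and IDA's definitions.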
For example, all program elements related to environmental activities were included in the base support category. Similarly, all program elements for management of headquarters activities were included in the management, command, and servicewide activities category. We then calculated the percentage of both budgeted and obligated funds by category to determine the funding levels for the various O&M activities. Our review was performed between October 1995 and February 1996 at the Army, the Air Force, and the DOD headquarters level in accordance with generally accepted government auditing standards.

[The appendix listings of the individual Army and Air Force program elements assigned to each of the four categories are not reproduced here.]
1. The report was modified to reflect the fact that DOD has projected further decreases in the O&M account and increases in the procurement account as part of its force modernization efforts.

2. The report was modified to recognize that DOD is required to notify Congress when it reprograms $20 million or more among budget activities and if $20 million or more is moved from the predominant combat-related subactivity groups.

3. The report was clarified to point out that the Army obligated less than requested for combat forces and support of forces and that the Air Force obligated more than requested for combat forces and support of the forces.

4. The report was clarified to point out that the reason the Army and the Air Force obligated more than requested for environmental activities was that the requested amounts do not include funds transferred to the services from the Defense-wide environmental account.

Major contributors to this report: Sharon A. Cekala, Robert J. Lane, Carole F. Coffey, Donna M. Rogers.
GAO reviewed how the Army and Air Force obligated their annual operation and maintenance (O&M) funds. GAO found that: (1) the services have a great deal of flexibility as to how they use their O&M funds; (2) this flexibility is evident in the O&M obligation trends, which illustrate that the proposed uses of O&M identified in the budget request may not always reflect how the funds are obligated; (3) this is particularly true for the Army, which historically obligates less for its combat forces than it requests even when Congress appropriates more than was requested; (4) the Army and the Air Force obligate about one-third of their O&M funds for activities directly related to combat forces, including training and recruitment; (5) the remainder goes to support infrastructure-type activities such as base support and management activities; (6) from fiscal years 1993 through 1995, the Army obligated less than 20 percent of its O&M funds for combat forces and support of the forces and the Air Force obligated about 26 percent of its O&M funds for such purposes; (7) when training and recruiting funds are added, the Army's and the Air Force's obligations are about 30 and 34 percent, respectively; (8) the balance of the O&M funds was obligated for infrastructure-type functions such as base support and management activities. 
During the 3-year period, the Army obligated 33 percent of its O&M funds for base support and 37 percent for management activities; (9) the Air Force obligated 30 percent of its O&M funds for base support and 35 percent for management activities for that same period; (10) the fastest growing accounts were minor construction, maintenance and repair, administrative activities, and international activities; (11) when the amounts obligated for combat forces and support of the forces and training and recruitment are compared to the amounts requested for these categories, the Army historically requested more than it obligated; (12) conversely, it often obligates more than requested for infrastructure and management activities; (13) in part, this may be due to Congress appropriating more than requested; (14) however, it can also be the result of O&M funds requested for one purpose being obligated for another purpose; (15) GAO's comparison of the amounts obligated and budgeted by the Air Force for these same functions showed that the Air Force obligated slightly more than it requested for combat forces; (16) with regard to training and recruiting, the Air Force obligated less than the amounts requested; and (17) it obligated more than it requested for base support and slightly less than it requested for management activities.
The Bureau’s budget for real estate appraisals of Indian trust land is about $4.1 million for fiscal year 1999, and the agency estimates that approximately 27,000 appraisals will be completed this year. The Bureau does not maintain data on the number of appraisals that are prepared for leases, but appraisal logs from four area offices—Aberdeen, South Dakota; Muskogee, Oklahoma; Phoenix, Arizona; and Portland, Oregon—show that 43 percent of about 6,900 appraisals approved in those offices in calendar years 1997 and 1998 were for leases. Appraisers may be either Bureau employees or contractors, and all appraisals—regardless of who prepares them—must be reviewed and approved by Bureau review appraisers. Current Bureau guidance on appraisals requires that appraisers adhere to professional appraisal standards when preparing appraisals, regardless of whether they are for the sale, lease, exchange, or other disposition of the land. The standards that are the basis for the Bureau’s policies are the Uniform Standards of Professional Appraisal Practice (USPAP), the Uniform Appraisal Standards for Federal Land Acquisitions, and the standards set forth in the Uniform Relocation Assistance and Real Property Acquisition Policies Act. USPAP, which reflects the appraisal profession’s current standards for preparing and communicating the results of appraisals, is published by the Appraisal Standards Board of the Appraisal Foundation. The Uniform Appraisal Standards for Federal Land Acquisitions contain guidelines for determining fair market value and are intended to promote uniformity in the appraisal of real property among the various agencies acquiring property on behalf of the United States. The objectives of the Uniform Relocation Assistance and Real Property Acquisition Policies Act include promoting public confidence in federal and federally assisted land acquisition programs. 
Bureau officials are responsible for ensuring that leases of Indian trust land reflect a fair annual rental, and they rely primarily on appraisals to estimate that value. However, the Bureau has not defined fair annual rental and does not have a clear policy on how that amount should be estimated. The Bureau’s appraisal handbook, revised in October 1998, states that the policies it contains apply to all real estate transactions and makes no exception for leases, and Bureau officials have said they believe that fair annual rental can be determined only through an appraisal. In effect, fair annual rental has come to mean no less than “fair market rental” as estimated in an appraisal. However, we found no statutory or regulatory requirement that appraisals be used to estimate fair market rental, and, in fact, some area offices use other methods in addition to appraisals to establish lease values. Appraisals are opinions, or estimates, of the fair market value of property, and the Bureau uses them to estimate property values for such transactions as sales, exchanges, leases, gifts, or inheritances. The value may be estimated using one or more of three approaches—comparable sales, cost, or income capitalization. The approach the Bureau’s appraisers most often use is the comparable sales approach, in which a property’s value is inferred from recent transactions involving properties similar to the one being appraised. In the cost approach, the appraiser estimates the value of the property on the basis of costs that would be incurred to replace an existing structure or improvement. In the income capitalization approach, the appraiser estimates a property’s capacity to generate benefits (usually income) and uses these benefits to derive the property’s present value. 
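The three approaches just described can be summarized with simple formulas. The functions and figures below are illustrative only; actual appraisals involve market adjustments and professional judgment well beyond this sketch.

```python
# Illustrative formulas for the three valuation approaches; all numbers
# in the examples are hypothetical.

def comparable_sales_value(comparable_prices_per_acre: list[float], acres: float) -> float:
    """Comparable sales: infer value from recent sales of similar tracts."""
    average_price = sum(comparable_prices_per_acre) / len(comparable_prices_per_acre)
    return average_price * acres

def cost_approach_value(land_value: float, replacement_cost: float,
                        depreciation: float) -> float:
    """Cost approach: land value plus depreciated cost to replace improvements."""
    return land_value + replacement_cost - depreciation

def income_capitalization_value(annual_net_income: float,
                                capitalization_rate: float) -> float:
    """Income capitalization: present value of the benefits the property generates."""
    return annual_net_income / capitalization_rate

# A property yielding $10,000 a year, capitalized at 8 percent:
assert income_capitalization_value(10_000, 0.08) == 125_000
```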
The appraised value of real property is estimated on the basis of its “highest and best use.” The highest and best use is that which is legally permissible, physically possible, and financially feasible and results in the highest value consistent with the market. While an appraisal is a tool to estimate the value of a property, its actual value is established only when it is sold or leased. Because of such practical considerations as land uses and staffing levels, different approaches are sometimes used to establish lease values in some areas. Officials in the Aberdeen and Billings offices told us that they do not have enough appraisers to appraise all leases and that they sometimes use other methods to determine the lease value of land. Some expressed concern that if appraisals are indeed required for all lease transactions, they are out of compliance with the Bureau’s requirements by using these other methods. An official from the office of the Bureau’s Deputy Commissioner emphasized that it is not the Bureau’s policy that staffing levels should dictate the methods used to establish the fair annual rental for trust land. The official said that the Bureau needs to have consistent procedures that apply to all offices. The Bureau has identified three general types of Indian trust land leases: agriculture, business, and other. Figure 1 shows the percentage of leases of trust land by type of use and the percentage of total leased acreage by type of use as of December 31, 1997. It also shows the percentage of total rent revenue by type of use for the year ending December 31, 1997. The Bureau’s method for establishing the lease value of land for agriculture varied depending on the crops grown and, in some cases, on the number of appraisers employed in the area. 
For example, on the Yakama Reservation in Washington (served by the Portland Area Office) and along the Colorado River in California (served by the Phoenix Area Office), the crops are high in value and of many varieties, such as fruits and vegetables. Those area offices employ seven and eight appraisers, respectively, and each tract being leased receives an appraisal. In contrast, on reservations served by the Aberdeen and Billings area offices, the crops are lower in value and more homogeneous, such as wheat and grass for grazing livestock. Those area offices employ fewer appraisers—two in Aberdeen and four in Billings—and often establish lease values by such methods as market surveys, which provide a range of prevailing rents in an area, and competitive bidding, which allows parties interested in leasing the land to submit bids for the tracts they wish to rent. Establishing the value of leases for business use is more complex than for agricultural use, according to Bureau officials. In each of the Bureau’s areas where we contacted officials, business leases were valued by appraisal. In addition to using sales of comparable properties to estimate their value, appraisers may consider a business enterprise’s gross or net return on sales (combining elements of the sales comparison and income capitalization approaches) to establish a lease rate. The areas we visited had different levels of business leasing activity. For example, in the Phoenix area, business leases make up about 13 percent of all leases. Two of the tribes in the area—the Salt River and Gila River Pima-Maricopa Indian Communities—have properties with opportunities for business development because of their proximity to Phoenix, Arizona. The Salt River community leases its property for a 140-acre retail center (described as the nation’s largest business development ever built on Indian land), two golf courses, and a solid waste disposal operation that serves the community and nearby cities. 
The Gila River community leases property for several tribal enterprises, including three industrial parks, a retail store, a billboard company, an airfield, a telephone company, and a marina. In contrast, there are comparatively fewer business opportunities for trust land in the Billings area: Only about 3 percent of all leases are issued for business use. Leases are also issued for uses of trust land other than agriculture and business. These leases typically have nominal rents; they include leases of homesites for tribal members (which can be leased to tribal members for as little as $1 per year) and special-use permits for temporary uses, such as fireworks stands. According to the Portland Area Office’s realty officer, the values for these leases are sometimes established by appraisals (especially when the landowner and lessee are unrelated individuals), and sometimes the values are more arbitrary (when the landowner and lessee are related or when the tribe owns the land and the lessee is a tribal member). This category can also include leases for residential use, when lessees rent property under a long-term lease (generally up to 25 years) and build a home on the land. The rent for these leases is established by appraisal, and appraisers use market data for comparable residential ground leases when such data are available. However, when comparable lease information is not available—as in the Portland Area Office—appraisers first estimate the market value of the land on the basis of sales of comparable residential properties, after adjusting the value to reflect that of the land only (without buildings or other improvements). Once that value has been determined, a rate of return is applied to the property’s estimated market value to arrive at the annual rental. In real estate markets where land values are rising, this method can result in increasing rental rates. 
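The ground-rent method described above reduces to two steps: estimate the land-only market value from comparable sales, then apply an annual rate of return. A minimal sketch follows; the comparable values are hypothetical, and the 5 percent rate is chosen only for illustration.

```python
# Sketch of the residential ground-rent method: market value estimated from
# comparable land sales, times an annual rate of return. All values and the
# 5 percent rate are hypothetical.

def annual_ground_rent(comparable_land_values: list[float],
                       rate_of_return: float = 0.05) -> float:
    market_value = sum(comparable_land_values) / len(comparable_land_values)
    return market_value * rate_of_return

# Rising comparables raise the rent at each periodic adjustment.
rent_earlier = annual_ground_rent([90_000, 100_000, 110_000])   # 5,000.0
rent_later = annual_ground_rent([120_000, 130_000, 140_000])    # 6,500.0
```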
These changes in rents are reflected in adjustments to the leases that, under the Bureau’s regulations, must occur at least once every 5 years. We provide information in appendix II on issues surrounding residential leases. The Bureau’s appraisers are held to the same general standards and use similar appraisal techniques as other federal appraisers, state appraisers, and private appraisers. However, these land managers also use other methods to establish lease values. For example, while Interior’s Bureau of Land Management (BLM) primarily uses appraisals to estimate the value of public land, it also uses administrative fee schedules to establish the price for such land uses as linear rights-of-way (e.g., for oil and gas pipelines or power lines) and communication sites (e.g., for broadcasting and transmitting television and radio signals). Managers of state-owned land—held in trust for such public institutions as schools—use a range of methods including market surveys and competitive bidding for cropland and appraisals for residential and business uses. Private farmers usually do not use appraisals to establish rent values but rely, instead, on their knowledge of the local market and on common practices in the area. BLM and the Forest Service are required to obtain fair market value for real estate transactions and use appraisals in many—but not all—cases. While appraisers for both agencies are governed by the profession’s standards and by the Uniform Appraisal Standards for Federal Land Acquisitions and the Uniform Relocation Assistance and Real Property Acquisition Policies Act, both agencies are also bound by land-use authorizations and requirements in the Federal Land Policy and Management Act and by other statutes authorizing uses of federal land. For example, grazing fees on federal land (whether managed by BLM or the Forest Service) are established by a statutorily defined formula. 
Leasing is a small part of BLM’s appraisal workload—officials’ estimates ranged from 5 to 30 percent in the BLM state offices we contacted. BLM officials said that the agency rarely leases land for agricultural or residential use. These leases usually occur only when farms or residences have inadvertently extended onto public land and BLM allows the use to continue pending an exchange or sale of the land. The Forest Service issues special-use authorizations, including leases, for a variety of uses, including vacation homes and such business activities as ski areas and guide services. It is generally required to obtain fees that reflect the fair market value, as determined by appraisal or “other sound business management principles,” for the rights and privileges authorized. In 1996, we reported that most of these permits—about 15,200—were for lots where individuals could build recreation homes or cabins. The Forest Service’s method of establishing the value of land leased for vacation homes is similar to that used for Indian land—the market sales value is estimated by an appraisal, and the fees are computed by applying an annual rate-of-return to the market sales value. However, we reported that, in many instances, the fees the Forest Service charged did not reflect fair market value because, while the fees were adjusted annually for inflation, the appraisals on which the fees were based had not been updated in nearly 20 years. Both BLM and the Forest Service use fee schedules to determine the rent amounts for communication sites (for television and radio, for example) and certain rights-of-way (for oil and gas pipelines and power lines). We have reported on weaknesses in BLM’s and the Forest Service’s use of fee schedules in cases where they did not reflect fair market value. 
Specifically, in July 1994, we reported that many of BLM’s fees for communication sites were established on the basis of out-of-date appraisals and that the Forest Service’s fees were established on the basis of a 40-year-old, outdated formula. In April 1996, we reported that although the fee schedules for rights-of-way were established on the basis of rates for those uses on private land, they were subsequently adjusted downward because the industry and the agency’s management viewed the rates as too high. In both reports, we stated that the fee schedules could be updated to reflect fair market value through periodic appraisals or market surveys. The four states we contacted—Colorado, Minnesota, Montana, and Washington—use various methods to establish the value of leases on their trust land, depending on the use. For example, in Washington, agricultural leases are offered through a competitive public auction. Minimum rents for land used for crops, whether irrigated or not, are established on the basis of a “fair market value assessment” that considers such factors as crop options, soil type, and water availability. The rents for the state leases in Washington reflect the private lease terms identified in the market value assessment. However, they may be lower than the rents for private leases because, unlike private landowners, the state does not provide such improvements as fences and water, and lessees pay certain state taxes on operations on state land. Rents for crops that are not irrigated, such as wheat and other small grains, are generally paid by crop-share; that is, the state takes possession of a percentage of the crop harvested and sells it at market. Rents for irrigated crops, such as corn, potatoes, and alfalfa, are generally paid in cash. Both the crop-share percentages and the cash rent amounts are established on the basis of market surveys of private leasing practices. 
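The two rent forms Washington uses can be expressed directly. The share percentage, yields, and prices below are hypothetical and are not the state's actual terms:

```python
# Illustrative crop-share versus cash rent computations; all figures
# are hypothetical.

def crop_share_rent(bushels_harvested: float, market_price: float,
                    share: float = 0.25) -> float:
    """State takes a share of the harvest and sells it at market."""
    return bushels_harvested * share * market_price

def cash_rent(acres: float, survey_rate_per_acre: float) -> float:
    """Cash rent per acre, set from market surveys of private leases."""
    return acres * survey_rate_per_acre

assert crop_share_rent(4_000, 5.00) == 5_000.0   # dryland wheat example
assert cash_rent(160, 40.00) == 6_400.0          # irrigated quarter section
```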
Taking a different approach, Colorado establishes rents for agricultural leases by using income-based formulas—that is, the rents reflect the amount of income the land is expected to generate. The states we contacted that lease land for residential use generally establish a minimum acceptable rent by applying a rate of return to the property’s estimated market value. For example, Minnesota leases lakeshore property for residential purposes, establishing leases for 10 years with rents of 5 percent of the land’s appraised fair market sales value. Rents for business leases most often are established by using an appraisal to estimate the land’s sales value and then applying an annual rate of return to that value, although, according to a Colorado official, business rents sometimes also assess lessees a percentage of the business’s revenues. Private landowners may or may not use appraisals to value land leases, depending on the intended use of the land. According to several private appraisers we spoke to, rents for agricultural land are rarely set by appraisal: Landowners and lessees are generally familiar with prevailing lease rates and may informally negotiate the rent to be paid for a tract of land. The rent for a tract of land may be affected by the presence of such improvements as fences or water delivery systems, which could increase the market rent (if the landowner pays for them) or result in a rent credit (if the lessee pays for them). For business uses, lease rates are more likely to be estimated by appraisal. In those cases, appraisers often estimate the sales value of the property on the basis of recent sales of comparable properties and then apply a rate of return that reflects the risk inherent in the lease agreement. There are several reasons that any land—including Indian land—might not be leased. 
The landowners may choose not to lease the land, or there may be no demand for the land because of poor soil quality, a slow farming economy, inaccessibility, or lack of water. However, in cases where trust land is in demand because, for example, it is near other valuable land (such as in Phoenix) or it can support valuable crops, other impediments may keep the land from being leased. Bureau officials, tribal representatives, and lessees cited as impediments the appraisal amounts, the time taken to prepare and review appraisals, and the Bureau’s cumbersome bureaucracy. Some lessees and Bureau realty officials asserted that Indian trust land remains unleased in some areas because the land is appraised at values higher than lessees want to pay. Bureau officials often will not approve a lease if the negotiated or offered rent is less than the appraised value. These officials interpret the requirement to obtain a fair annual rental to mean that the appraised amount is the minimum acceptable lease amount and told us they fear approving leases for less would cause the Indian landowners to submit appeals or file lawsuits challenging their decisions. However, Bureau officials told us that they can and do approve leases for less than the appraised value if the Indian landowners agree to accept less. According to Bureau and other appraisers, appraisals are estimates of a property’s value and should be used as a management tool for making informed leasing decisions. In our opinion, the estimates are not intended to be a “floor price” any more than a “ceiling price.” Concern over this issue is not new. 
According to a December 1987 report of the National Indian Agriculture Working Group, “the unswerving application of the appraised market rental rates has frequently resulted in the complete loss of income to Indian landowners when their land sits unleased due to the lack of flexibility in determining rental rates.” While a prospective lessee may believe that the appraised value of a tract of land is too high, the owner of that same tract of land may believe that it is too low. In the words of an administrative judge with the Interior Board of Indian Appeals (IBIA), “the determination of ‘fair annual rental’ requires the exercise of judgment and . . . reasonable people may differ in their calculation of ‘fair annual rental.’” Timing is an important factor affecting the accuracy of appraisals because, as land values increase or decrease over time, appraisals become outdated. For this reason, according to Bureau and other appraisers, appraisals have a limited useful life. The longer it takes to prepare and review an appraisal, the more likely it is that the data used in it to estimate a property’s value are too old to accurately reflect the current market. Representatives of the Salt River Pima-Maricopa Indian Community expressed frustration about the slowness of the Bureau’s Phoenix Area Office in reviewing and approving appraisals prepared by appraisers under contract to the community. According to Bureau officials, all appraisals— whether prepared by a Bureau appraiser or a contract appraiser—must be reviewed and approved by a Bureau review appraiser to ensure that they are consistent with USPAP. Community representatives said that it sometimes takes months to hear back from the area office when the review appraiser has a problem with their appraisals and that the Bureau’s slowness jeopardizes the Community’s business deals. 
We analyzed records from the Phoenix Area Office’s appraisal tracking system for the period from January 1, 1997, to December 3, 1998, to see how long it took to review or prepare appraisals. From the tracking system, we were able to compute review times for 30 contractor-prepared appraisals submitted by the Salt River Pima-Maricopa Community to the area office for review. Review time is defined as the number of days between the date a contractor appraisal was received by the Bureau for review and the date the Bureau completed the review. We calculated that the 30 Salt River appraisals had an average review time of 146 days: 1 was reviewed and approved the same day, 16 were reviewed and approved in between 4 and 77 days, 8 were pending approval after between 297 and 512 days, and 5 were rejected after review periods ranging from 13 to 40 days.

We were also able to compute the preparation time for nine Bureau-prepared appraisals of tribal land on the Gila River Reservation. Preparation time is defined as the number of days between the date the Bureau received an appraisal request and the date it returned the reviewed appraisal to the requester. We calculated that the nine Gila River appraisals had an average processing time of 82 days: All were prepared, reviewed, and approved in between 40 and 126 days. The Gila River community recently hired an appraiser to estimate the value of some properties because the Bureau was taking too long.

While we did not specifically determine the reasons for the time required to prepare and review appraisals, the former chief review appraiser at the Phoenix Area Office cited workload issues and concern about the quality of contractor-prepared appraisals as reasons for some delays. He emphasized that USPAP requires a review appraiser to do sufficient work to be satisfied that an appraisal meets the standards and does not limit the time allowed for review.
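The review- and preparation-time measures described above are simple date differences averaged over the tracked appraisals; a minimal sketch of that computation (the dates below are hypothetical illustrations, not taken from the Bureau’s tracking systems):

```python
from datetime import date

# Hypothetical tracking records: (date received for review, date review completed)
records = [
    (date(1997, 3, 4), date(1997, 3, 4)),     # reviewed and approved the same day
    (date(1997, 5, 1), date(1997, 7, 17)),    # 77 days under review
    (date(1997, 9, 10), date(1997, 10, 22)),  # 42 days under review
]

# Review time = days between receipt of the appraisal and completion of the review
review_times = [(completed - received).days for received, completed in records]
average_review_time = sum(review_times) / len(review_times)

print(review_times)                   # [0, 77, 42]
print(round(average_review_time, 1))  # 39.7
```

Preparation time for Bureau-prepared appraisals would be computed the same way, substituting the date the appraisal request was received and the date the reviewed appraisal was returned.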
Some lessees and Bureau officials identified a variety of problems with the Bureau’s bureaucracy: the Bureau’s processes were characterized as more cumbersome than the private sector’s (the Bureau takes more time, requires more paperwork, and is less flexible). Some said Bureau staff show a lack of initiative and accountability for such things as being responsive to lessees and for leasing land on behalf of landowners. One lessee complained to us that a Bureau agency office closes its realty office on Mondays and Fridays. We also found a related situation when we attempted to contact a realty officer at one of the agency offices—twice in one week, we were told that he was not accepting any calls while he worked on a report.

According to Bureau officials, the primary factor affecting the speed with which they can approve leases is the prevalence of tracts of land with multiple owners. This occurs when an Indian landowner dies without a will, and the property is divided among the landowner’s heirs in accordance with the Indian General Allotment Act of 1887, as amended. Over time, the number of owners of some tracts of land has increased as the ownership interests have passed through several generations of multiple heirs. The landowners may all be individual Indians; sometimes the tribe or non-Indians also own an interest. For example, in 1992 we reported that over one-third of the trust land tracts on the Yakama Reservation had multiple owners and that 19 percent of these tracts had more than 25 owners. Under the Bureau’s regulations, officials must notify and obtain concurrence from landowners owning a majority interest before leasing land; therefore, the more owners a tract of land has, the longer it may take for the Bureau to obtain their concurrence.

In addition to appraisals, a number of other methods may be used to establish lease values.
These methods do not preclude the use of appraisals, but appraisals would not necessarily be prepared for every lease transaction. These other methods include advertising for competitive bids, conducting market surveys, and applying fee schedules or formulas. We did not analyze the costs and benefits of these methods, but they are used to varying degrees by other federal and state land management agencies and by private landowners. While we recognize that the Bureau’s trust responsibility to Indian landowners is unique and differs from the relationships of other federal agencies to federal taxpayers and of state land managers to school trust funds, these other methods may be appropriate in some circumstances. In fact, some area offices currently use some of these methods, in some cases because they do not have enough appraisers to appraise all tracts of land before leasing them. Alternative approaches that are already in use in some Bureau offices include competitive bidding and the use of market surveys.

Under its regulations, the Bureau is allowed to advertise tracts of unleased trust land for competitive lease bids if the landowner wishes to explore the market and is required to do so for leases that are not negotiated or for which a fair annual rental cannot be obtained through negotiations. When there is a competitive market, the high bid received in a competitive auction would establish the market rental value. The Bureau would then approve the granting of a lease to the highest bidder. Two of the agencies we visited (in the Billings and Portland areas) have advertised unleased trust land for competitive lease bids with mixed success (see app. I). Competitive bidding is also used to lease state-owned trust land in some states, such as Montana (for cabin and homesite leases) and Washington (for agricultural leases).
In addition, when the demand for land is high, private landowners may use competitive bidding techniques by soliciting sealed bids from potential renters.

Market surveys may be used to identify the range of prevailing lease rates for land in a specified area, particularly where the land use is homogeneous. Some Bureau offices—such as those in Aberdeen and Billings (for agricultural leases)—already use this method. Market surveys result in generalized statements of what rents should be, or parameters that decisionmakers can use in negotiating leases. The lease rate for a specific tract of land is compared with the range of rates identified in the market survey to determine if the lease rate is within that range. The market survey approach differs from an appraisal in that an appraisal is done for a specific property and is used to estimate the market value of that property. Some states also use this method for determining whether the rent for their trust land is consistent with prevailing rents in an area.

Other approaches, which are not currently being used for Indian trust land, include the use of fee schedules and formulas. As with market surveys, fee schedules could be used where land is used for homogeneous purposes, such as grazing or the cultivation of some crops. Instead of appraising each site, land managers would refer to a fee schedule to establish the rent. We have reported on BLM’s and the Forest Service’s use of fee schedules for communication sites (for television and radio, for example) and rights-of-way (for oil and gas pipelines and power lines). While we support the concept of fee schedules, we have reported on weaknesses in their implementation in cases where they did not reflect fair market value.

Some states, such as Colorado, use formulas to determine the appropriate lease value for cropland. Formulas can be used where information is available on expected income and costs associated with the land.
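A rent formula of this kind typically multiplies the expected per-acre income by the landowner’s share and subtracts an allowance for the farmer’s costs; a minimal sketch with hypothetical figures (the actual shares and cost treatment vary by state, crop, and practice):

```python
# All figures below are hypothetical illustrations, not actual state parameters.
expected_income_per_acre = 300.00  # farmer's expected per-acre income ($)
landowner_share = 0.25             # landowner's (e.g., the state's) share of that income
cost_allowance_per_acre = 20.00    # per-acre reduction reflecting the farmer's costs

rent_per_acre = expected_income_per_acre * landowner_share - cost_allowance_per_acre
print(rent_per_acre)  # 55.0
```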
For example, the Colorado State Land Board determines the per-acre rent value of irrigated cropland on the basis of the farmer’s expected per-acre income for the parcel of land. The board multiplies the state’s share by the per-acre income (the state’s share varies by agricultural crop and practices) and reduces the total to reflect the farmer’s irrigation costs. Washington also uses formulas to set rates for grazing permits on its trust land.

Changes in current laws or regulations would not be necessary for the Bureau to adopt these or other alternative methods. Consistent with this view, in December 1998, a workgroup studying appraisal issues reported to the Deputy Commissioner for Indian Affairs that it found no statutes that specifically require the Bureau to conduct appraisals. A representative of the Deputy Commissioner’s office emphasized that this position has not been adopted by the Bureau and that a legal review that examines laws and court cases that apply Bureau-wide would be required before it would consider doing so. We discuss this workgroup and its results in greater detail below.

The Department of the Interior is currently reviewing the Bureau’s appraisal process as part of an improvement project begun in 1997 by the Office of the Special Trustee for American Indians and the Bureau. The appraisal program was included in the project because of a lack of consistency in preparing appraisals across the Bureau’s area offices and because of a backlog of appraisals requested by agency officials but not yet completed. As of June 1998, the Bureau’s area offices reported a total of almost 1,500 appraisal requests that were more than 60 days old. The improvement plan included several proposed changes to the Bureau’s appraisal program at the time we began our review (July 1998); the plan was updated in the fall of 1998.
Specific initiatives in the improvement plan, together with their status, follow:

- Appraisers must be certified in accordance with title XI of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA). By the fall of 1998, 28 of the Bureau’s 43 appraisers were certified, including all of the area review appraisers, and the remaining appraisers were completing the certification requirements.

- The Bureau was to update its real estate appraisal handbook (issued in 1970), which it did in October 1998.

- The Bureau was to hire a Bureau-wide chief appraiser; the position was filled in April 1999.

- The Bureau was to identify the extent of the appraisal backlog. The backlog was identified as of June 1, 1998.

- The Bureau was to increase funding for the appraisal program. Funding is being requested under the Office of the Special Trustee’s budget to implement improvements in the appraisal program and to eliminate appraisal backlogs.

The improvement plan was updated in the fall of 1998 to include two additional initiatives. The first directed the Bureau’s Office of Trust Responsibilities, with assistance from Interior’s Office of the Solicitor, to determine whether and to what extent existing laws, regulations, and court rulings require appraisals of trust land. The second directed Bureau offices to develop and maintain a database for tracking appraisals.

In November 1998, the Bureau convened a workgroup to consider and recommend ways to reduce the backlog of appraisal requests, which are made for many types of land transactions, including sales, exchanges, rights-of-way, and leases of property. In December 1998, the workgroup made its recommendations to the Deputy Commissioner for Indian Affairs. The field solicitor in the Minneapolis Area Field Office reviewed the legal requirements for appraisals.
He concluded that no laws specifically require the Bureau to conduct appraisals of property or interests in that property and that the statutes give the Secretary of the Interior discretion in determining the fair value of property. However, Bureau officials stated that the review was preliminary and that a comprehensive legal review by Interior’s Office of the Solicitor would be required before the Bureau would consider making changes to the program on that basis.

The appraisal workgroup also reported that each area or agency office maintains its own tracking system and that all systems are adequate to monitor, or track, appraisal requests. According to the improvement plan, these tracking systems are designed to provide the Bureau with information on when most of the appraisals are needed and to enable Bureau management to use appraisal resources (funding and staff) more effectively. However, we obtained appraisal tracking data from four offices and found wide variability in the usability of the data; in some cases, data on an individual appraisal were virtually unusable for analyzing the status of the appraisal. We requested appraisal tracking data from five area offices (Aberdeen, Billings, Muskogee, Phoenix, and Portland) and obtained such data from four (Billings did not have an areawide system). Specifically:

- In Aberdeen’s system, 100 percent of the tracking records for 54 lease appraisals were usable for determining the status of the appraisals.

- In Portland’s system, 99 percent of the tracking records for 1,781 lease appraisals were usable.

- In Phoenix’s system, 66 percent of the tracking records for 545 lease appraisals were usable.

- In Muskogee’s system, 39 percent of the tracking records for 585 lease appraisals were usable.

The workgroup also recommended several short-term and long-term changes to the appraisal program, both in how appraisals are prepared and in how they are requested.
Short-term changes include establishing reservation- or neighborhood-specific computer-generated models for determining the value of multiple ownership interests in land and training realty specialists on when to request appraisals and what type of report is sufficient for the realty action to be taken.

According to Bureau officials, a great deal of their time is spent estimating the value of each of the multiple ownership interests in tracts of land. For homogeneous land such as cropland, grassland, or hayland within a reservation or neighborhood, the appraisal workgroup has recommended that Bureau offices use computer-generated models—similar to those used by tax assessors—to estimate the market value of these multiple interests. The appraisers would be responsible for collecting and entering capitalization and market rental rates for the land into the computer models on a regular basis. In a separate initiative, the Department of the Interior has proposed legislation that would provide a way to consolidate very small ownership interests in Indian-owned land. It has requested a budget increase of $10 million in fiscal year 2000 to expand an ongoing pilot project to consolidate land ownership interests of 2 percent or less.

According to the workgroup, many appraisals are prepared for transactions that are never completed (if, for example, the landowner or tribe decides not to lease the land). Although the exact number is not known, these unnecessary appraisals could be canceled—or never requested—if realty clerks were better trained in evaluating the need for appraisals. Also, the workgroup noted, the type and format of appraisal report has a significant impact on the cost and time required to complete the appraisal, and realty clerks often request more extensive reports than are called for by the type of transaction being considered.
Long-term actions the workgroup recommended that the Bureau take include, among other things, creating appraisal guidelines that address specific circumstances in different geographic areas. These guidelines would give officials the flexibility to request limited—and, thus, less expensive and time-consuming—appraisal reports when appropriate. Under USPAP’s “departure” provision, appraisers may agree to prepare an appraisal that is less detailed or different from the work that would otherwise be required by USPAP’s guidelines. The appraiser must be certain that the resulting report would not be misleading and must clearly identify and explain the departure, and the client must agree that a limited appraisal is appropriate. Under this long-term action, the Bureau’s area or agency offices would be allowed to create guidelines on when different formats may be used for appraisal reports.

Under its regulations, the Bureau of Indian Affairs is required to ensure that Indian land is leased for a fair annual rental. The Bureau often relies on appraisals, which must be prepared in conformance with professional appraisal standards—the same standards that apply to all professional appraisers, including other federal, state, and private appraisers. However, fair annual rental has not been defined and the Bureau does not have a clearly stated policy on how it should be determined. In some Bureau offices, methods other than appraisals are used when land uses and staffing levels make appraisals impractical, but officials have expressed concern about whether they are complying with the Bureau’s requirements in using these other methods. Consistent policies and procedures for deciding how lease values should be determined would alleviate these concerns and clarify for realty officials what methods they may rely on for valuing leases.
Appraisals were cited as an impediment to leasing, both because officials adhere to the appraised value as a minimum lease value and because the processes are considered by some to be too time-consuming. However, we believe that, in addition to appraisals, other methods are available to Bureau officials for estimating a fair annual rental for Indian land and could be used under certain circumstances. Furthermore, we believe that these methods could be implemented without legislative or regulatory changes. This view is consistent with the results of a preliminary legal review conducted by the Minneapolis Area Office’s field solicitor. However, before the Bureau will consider adopting those findings Bureau-wide, officials say a Bureau-wide review of laws, regulations, and court cases must be conducted.

The Department of the Interior has begun to review its use of appraisals and is considering alternatives to the current processes. One proposed improvement to the current system included making sure that Bureau offices have systems for tracking the status of appraisals. While a Bureau workgroup found that Bureau offices have adequate tracking systems, we found that the appraisal tracking records were not consistently usable. Because these tracking systems could provide the Bureau with information on when most appraisals are needed and could allow Bureau management to use appraisal funding and staff more effectively, the data in these systems should be more consistent and complete.

In addition to concurring with the Department of the Interior’s ongoing efforts to review and revise the Bureau’s appraisal program, we recommend that the Secretary of the Interior direct the Commissioner of the Bureau of Indian Affairs to do the following:

- Develop a clear policy on how fair annual rental can be estimated using other methods in addition to appraisals, such as market surveys, fee schedules, and formulas, where appropriate.
- Establish consistent standards and guidelines for applying lease valuation methods.

- Review the area offices’ appraisal tracking data and ensure that the data are consistent and complete so that the Bureau can monitor and make the most effective use of its appraisal resources.

We provided a copy of a draft of this report to the Department of the Interior for its review and comment. Interior agreed with our recommendations that the Bureau evaluate alternatives to appraisals for estimating fair annual rental, establish consistent standards for applying lease valuation methods, and ensure that appraisal tracking data are complete and consistent. Furthermore, Interior commented that work has begun to address the recommendations, and the Assistant Secretary for Indian Affairs stated that he is confident that they will be fulfilled. Interior provided technical clarifications on funding for the appraisal program, which we incorporated as appropriate. Interior’s comments appear in appendix IV.

We conducted our review from July 1998 through June 1999 in accordance with generally accepted government auditing standards. We did not independently verify or test the reliability of the data provided by the Bureau’s offices. Details of our scope and methodology are discussed in appendix V.

We will send copies of this report to the Honorable Bruce Babbitt, Secretary of the Interior; the Honorable Hilda Manuel, Deputy Commissioner, Bureau of Indian Affairs; and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-3841. Key contributors to this report were Jennifer Duncan, Sue Naiberk, Cynthia Rasmussen, and Victor Rezendes.

The Bureau of Indian Affairs has jurisdiction over roughly 56 million acres—about 87,500 square miles—which are held in trust by the Secretary of the Interior for Indian tribes and individuals.
Indian trust land represents less than 3 percent of the total land base of the United States (3.5 million square miles) but is, in total, equal to almost twice the area of Pennsylvania or more than half the area of California. Over 95 percent of this trust land is located in states west of the Mississippi River, and much of it lies within the boundaries of about 280 Indian reservations. Indian tribes own the majority of the trust land—about 46 million acres, or 82 percent of the total—and individual Indians own the remaining 10 million acres, or 18 percent of the total.

According to the Bureau’s most recent published data on land use, about 102,000 surface leases were in effect at the end of 1997. These leases covered almost 8 million acres (12,000 square miles) and generated over $104 million in rental income for the landowners. About 70 percent of the leased acreage was used for agricultural purposes, but about 65 percent of the leases were for other, nonbusiness purposes with nominal rents, including temporary special uses (such as a fireworks stand) and homesites for tribal members. Table I.1 presents data on leases and leased acreage reported by the Bureau as of December 31, 1997, for agricultural, business, and other surface uses. Table I.2 shows the revenue these leases generated in 1997.

Neither the Bureau nor landowners are required to lease trust land. For land that is unleased, the process usually begins with an expression of interest by either the landowner or a potential lessee. For land that is already leased, Bureau realty staff identify which leases will expire within the next year or so and send a “90 day notice” to the owners to provide 3 months for them to negotiate leases with lessees. In either case, realty staff request appraisals for the tracts of land. If a lease agreement is successfully negotiated, the prospective lessee and at least one landowner sign and submit a lease application to the responsible Bureau agency office.
The application is routed to various Bureau departments for review, including a determination as to whether the negotiated amount is at least equal to the appraised amount. For land with more than one owner, landowners owning a majority interest must consent to the lease. The application is then sent to the agency office superintendent for approval. If approved, a lease is prepared for the tract of land and signed by the landowner(s) and lessee; it is then returned to the agency office and reviewed for signatures, bonding, insurance, and rent and fee payments and is presented to the superintendent for approval.

If no satisfactory lease agreement has been negotiated for expiring leases or if landowners wish to advertise their land for competitive bid, Bureau realty staff prepare, mail, and post lease advertisements. If sealed bids are received for the land, the bid amounts are compared with the appraised amounts. If the bid is acceptable—that is, if it equals or exceeds the appraised amount—a lease is prepared. If it does not equal or exceed the appraised amount, Bureau officials may either reject the bid or—as we found at the Fort Peck and Yakama reservations—begin negotiations with the prospective lessee to reach an acceptable rent amount. When the signed lease is returned to the agency office, it is reviewed for completeness, submitted to tribal officials for action if tribal land is involved, and submitted to the superintendent for approval.

Most Indian trust land—more than 48 million acres (75,000 square miles) at the end of 1997—is not leased. This unleased land may be occupied and/or otherwise used by the various landowners (e.g., for residences or tribal enterprises such as agricultural operations), or it may be unused. The Bureau does not maintain statistics on the use or condition of all the unleased trust land.
For this reason, the Bureau could not provide us with information for unleased trust land on (1) the number of acres that are currently used and the number of acres that are currently unused, (2) the number of acres of unused land that could be economically productive, or (3) the number of acres of potentially productive unused land that could be leased and could generate revenue for the landowners. These data are not available for at least two reasons. First, much of the trust land is not considered economically productive and there is therefore little or no interest in leasing it. While there are exceptions to this generalization, Bureau officials said they believe that the trust land that can support economic production is already leased. Second, the Bureau has limited staff resources to manage trust land, and these staff rely mostly on landowners or potential lessees to express interest and thereby initiate the Bureau’s leasing process. Officials said they do not believe that the Bureau has sufficient staff resources to identify unused and unleased trust land and actively market it to potential lessees. However, they also said that a computer system that would allow the Bureau to have this information is being developed and will be piloted in the Billings, Montana, area in the summer of 1999.

The Bureau does not have good information on the interest or lack thereof in leasing trust land. We obtained data from two of the Bureau offices we visited that advertised tracts of unleased trust land for competitive bids in 1998: the Fort Peck agency office in Montana advertised 251 tracts, and the Yakama agency office in Washington advertised 1,425 tracts. In both cases, the tracts offered for lease had generally been leased, but the leases were due to expire. Responses to the advertisements varied widely between the two offices, indicating that interest in leasing trust land may also vary according to local conditions.
The Fort Peck office received bids on 69 percent of its advertised tracts; in contrast, the Yakama office received bids on only 7 percent of its advertised tracts. Anecdotal information suggests that land without a history of being leased tends to remain unleased even when it is offered for competitive bid.

Residential leases can present a variety of issues for the Bureau of Indian Affairs and for other land managers. These include controversies over rent adjustments, which we found at the Swinomish Reservation in Washington and on state trust land in Montana. We were also told of other problems with residential leases in some places, such as confusion among lessees over who owns the land.

The rent adjustment controversy on the Swinomish Reservation focuses on five allotments of trust land located on Puget Sound (about 75 miles from Seattle) that are divided into about 250 lots on or near the water, many of which are leased for residential use. At one time, these lots were very primitive—they were considered “camping lots”—and lessees made only small investments in putting houses or other structures on the lots. Given the small amounts they invested, lessees could choose with relative ease not to renew their leases (thereby losing their investments) if their rents increased over time. However, as these lots became more attractive for permanent residences, lessees built increasingly expensive homes on them and increased their investments. In the early 1990s, two events dramatically increased lessees’ costs: The lots were reappraised and the rents were increased to reflect the increased land value, and the Swinomish Community improved the water and sewer systems in the area.
Annual rents increased from an average of about $1,200 to between $5,000 and $6,000, and the improvements resulted in utility assessments that ranged from $8,000 to $11,500 for each of the lots and, in many cases, were charged to the lessees. The Community arranged for funds from Skagit County (under a state block grant) of up to $8,000 per lot to defray the utility assessment costs for low-income lessees; 30 lessees—about one-third—qualified for the grant. Lessees asserted that the new appraisals overstated the value of the lots and that the resulting lease increases were inappropriately high; however, landowners asserted that the appraisals might have understated the value of the lots. Lessees appealed the increased rents, which were upheld by the Interior Board of Indian Appeals. The lessees filed suit in the U.S. District Court for the Western District of Washington, which dismissed the case in March 1999. Following this dismissal, the Community—which has an ownership interest in two of the allotments—plans to meet with other landowners and Bureau officials to discuss possible changes that may relieve some of the lessees’ concerns. These include changes in the lease term (such as increasing the term from the current 25 years to 50 years) and alternative methods for adjusting rents (such as allowing the prepayment of rents or linking rent increases to a federal Treasury index).

A similar rent adjustment controversy occurred in Montana where, according to one official, the state leases over 1,000 sites for cabins and homes. The controversy began in the late 1980s, when Montana began setting rental rates at 5 percent of the respective property’s appraised market value—a 5-percent rate of return.
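A rate-of-return approach like Montana’s reduces rent-setting to a single multiplication; a minimal sketch with a hypothetical appraised value:

```python
appraised_value = 100_000.00  # hypothetical appraised market value of a cabin site ($)
rate_of_return = 0.05         # rent set at 5 percent of the appraised value

annual_rent = appraised_value * rate_of_return
print(round(annual_rent, 2))  # 5000.0
```

Adjusting the rate of return changes only the multiplier, not the mechanics of the calculation.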
In response to the change, there was such an outcry from lessees that, according to the official, the Montana legislature intervened and directed the state agency to reduce the rate of return on which the rent was based by 30 percent, to 3.5 percent of the property’s appraised sales value.

Leases of residential properties can pose other problems. For example, Bureau officials in the Phoenix Area Office told us of a situation on one reservation where lessees are confused about who actually owns the land. According to these officials, the Colorado River Tribe in Arizona and California leased land to a non-Indian for use as a trailer park; the lessee then sublet parcels for trailer-home use. Because these subleases have tended to be longer-term and, in some cases, were even transferred to a sublessee’s heirs, some sublessees are confused over who actually owns the land. In addition, Bureau officials in the Portland Area Office told us about a controversy with lessees of oceanfront property on the Tulalip Reservation in Washington. Some of the land is eroding, and some lessees believe the Bureau should reduce their lease rents to cover the costs of moving their homes away from the eroding banks. The Bureau disagrees; it will instead measure each lot, appraise the land, and reduce the rent accordingly if the lot size has decreased through erosion. According to Bureau officials, the existing lease documents include a provision that warned lessees of the erosion problem and made lessees responsible for maintaining the banks.

Other land managers told us they avoid leasing property for residential purposes. For example, a Minnesota state official told us his agency was disposing of its residential properties, which are primarily lakeshore properties. A Washington state official said his agency has four residential properties and will sell them if there is an opportunity to do so.
About 527,000 acres of trust land (and about 222,000 acres of nontrust land, primarily owned by non-Indians) lie within the boundaries of 16 Indian irrigation projects administered by the Bureau. The costs of operating and maintaining these projects are supposed to be paid through assessments that are levied annually against the acres that can be irrigated within each project (called “assessable” acres). Landowners are responsible for paying these assessments (or their lessees may agree to do so), whether the land is being leased or is being used by the landowner to produce crops. In January 1999, the Bureau reported that, in total, about 543,000 acres were considered to be assessable and about 231,000 of these acres were leased. Table III.1 provides additional information on the status of trust and nontrust land within the Indian irrigation projects. While roughly three-quarters of the total acreage within the irrigation projects was considered to be assessable, there are striking differences in the percentages of trust and nontrust assessable acres that were reported as leased: 61 percent of the assessable trust land was leased, whereas only 1 percent of the assessable nontrust land was leased. Bureau officials told us that most of the non-Indian landowners farm their land rather than renting it out and that the trust land is generally not being farmed unless the acres are leased. However, the Bureau does not have data on unleased trust acreage that is or is not in agricultural production. In January 1999, the Bureau reported that unpaid assessments totaled more than $22 million ($15 million in unpaid principal and $7 million in unpaid interest and penalties). Whereas 93 percent of the unpaid assessments related to trust land, trust land represents only 70 percent of the total assessable acreage. One project, the Wapato Irrigation Project in Washington, accounts for about two-thirds of the total unpaid assessments. 
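The imbalance in the figures above can be verified with a quick calculation from the reported percentages. The per-category acre counts derived here are approximations implied by those percentages, not figures from the Bureau's data.

```python
# Figures reported by the Bureau in January 1999
total_assessable = 543_000          # assessable acres within the 16 projects
trust_share_of_assessable = 0.70    # trust land's share of assessable acres
trust_leased_rate = 0.61            # share of assessable trust acres leased
nontrust_leased_rate = 0.01         # share of assessable nontrust acres leased
unpaid_total = 22_000_000           # unpaid assessments, principal plus interest and penalties
trust_share_of_unpaid = 0.93

trust_assessable = total_assessable * trust_share_of_assessable    # roughly 380,000 acres
nontrust_assessable = total_assessable - trust_assessable          # roughly 163,000 acres
leased = (trust_assessable * trust_leased_rate
          + nontrust_assessable * nontrust_leased_rate)
print(round(leased))  # roughly matches the ~231,000 leased acres reported

# Trust land carries a disproportionate burden: 93 percent of the unpaid
# dollars on 70 percent of the assessable acreage.
unpaid_per_trust_acre = unpaid_total * trust_share_of_unpaid / trust_assessable
unpaid_per_nontrust_acre = unpaid_total * (1 - trust_share_of_unpaid) / nontrust_assessable
print(round(unpaid_per_trust_acre / unpaid_per_nontrust_acre, 1))  # several times higher per acre
```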
Trust land represents 56 percent of the Wapato project’s 146,000 total acres and 145,000 assessable acres and 99 percent of its 44,000 leased acres. However, 92 percent of the almost $15 million in unpaid assessments (including interest and penalties) for the Wapato project relate to trust land. In 1997, we reported that the main reason for past due assessments was the Bureau’s practice of deferring the collection of assessments from owners of trust land that was not in agricultural production. Specifically, we found that the Bureau had sometimes declined to mail assessment bills, had failed to collect assessments from some lessees, and did not aggressively collect past due assessments. We reported that changing farm economics and poor soil conditions were among the reasons that land within the project area was out of production. In addition, we reported that the Bureau had not often exercised its authority to grant leases of trust lands on behalf of landowners but that the superintendent had decided to do so. For example, in leasing parcels that have multiple owners, the superintendent of the Yakama agency had decided to approve the leases on behalf of the owners rather than letting the land remain idle because the Bureau was unable to locate enough of the landowners to consent to lease the land. We also reported that the Yakama agency had begun marketing unleased trust land more extensively, expanding its advertising of trust land available for lease to newspapers in major cities such as Seattle and planning to do more. We obtained information on the Bureau of Indian Affairs’ methods of establishing the lease value of Indian land through discussions with officials from the Department of the Interior and the Bureau at their headquarters offices in Washington, D.C., and at Interior’s Office of Audit and Evaluation in Denver, Colorado. 
We also met with officials at the Bureau’s Portland (Oregon) Area Office and its Puget Sound and Yakama agencies, the Billings (Montana) Area Office and its Northern Cheyenne and Fort Peck agencies, and the Phoenix (Arizona) Area Office and its Salt River and Pima agencies. We spoke by telephone with Bureau officials at the Aberdeen (South Dakota) and Muskogee (Oklahoma) area offices. We obtained and reviewed the Bureau’s guidance on appraising trust land, including the area offices’ specific guidance, and obtained and examined examples of appraisals and leases. We examined various reports on appraisals and leasing, including reports by Interior’s Office of Inspector General, the National Indian Agriculture Working Group, and GAO. Through discussions with officials from Interior’s Bureau of Land Management (BLM) and the Department of Agriculture’s Forest Service at their headquarters offices in Washington, D.C.; at BLM field offices in Colorado, Oregon, Washington, and Idaho; and Forest Service offices in Colorado and Oregon, we obtained information on how BLM and the Forest Service value surface leases on the land they manage. We also obtained and reviewed documents containing appraisal guidance for BLM and the Forest Service. To identify methods used to establish rents for leases on trust land in various states, we contacted officials in Colorado, Montana, Minnesota, and Washington, either in person or by telephone. We also interviewed, either in person or by telephone, private appraisers and representatives of the American Society of Farm Managers and Rural Appraisers and the Colorado chapter of the Appraisal Institute. To identify impediments to leasing Indian trust land, we met with representatives of the Swinomish, Yakama, Northern Cheyenne, Sioux and Assiniboine, Salt River Pima-Maricopa, and Gila River Pima-Maricopa tribes and spoke with lessees of Indian land at the Yakama and Fort Peck reservations. 
We obtained and examined documents related to the Bureau’s analysis of its appraisal backlog and obtained appraisal workload logs from the Portland, Phoenix, Aberdeen, and Muskogee area offices. We used the appraisal workload logs to determine the time it takes to prepare and review appraisals at the various area offices. To determine whether the Bureau has a legal or regulatory requirement to appraise trust land for leasing, we reviewed laws and regulations relevant to the leasing of Indian land. We identified alternative methods of establishing the rent value of land through discussions with Bureau and other land-management officials and with private appraisers and landowners, as well as through a review of prior GAO reports on land-management practices. We identified the Department of the Interior’s efforts to improve the appraisal process through discussions with Interior and Bureau officials. We obtained and examined documents describing the ongoing Trust Management Improvement Project and recommendations of the appraisal workgroup. To provide information on the leasing of trust land, we obtained statistics on leasing and owning Indian land from the Bureau’s headquarters. We also obtained information on unleased land within irrigation districts from the Bureau’s National Irrigation Information Management System in Albuquerque, New Mexico, and the results of competitive lease auctions in two Bureau area offices. We conducted our review from July 1998 through June 1999 in accordance with generally accepted government auditing standards. We did not independently verify or test the reliability of the data provided by the Bureau’s offices. Indian Trust Funds: Interior Lacks Assurance That Trust Improvement Plan Will Be Effective (GAO/AIMD-99-53, Apr. 28, 1999). Forest Service: Barriers to and Opportunities for Generating Revenue (GAO/T-RCED-99-81, Feb. 10, 1999). Indian Programs: BIA’s Management of the Wapato Irrigation Project (GAO/RCED-97-124, May 28, 1997). 
U.S. Forest Service: Fees for Recreation Special-Use Permits Do Not Reflect Fair Market Value (GAO/RCED-97-16, Dec. 20, 1996). Military Bases: Update on the Status of Bases Closed in 1988, 1991, and 1993 (GAO/NSIAD-96-149, Aug. 6, 1996). U.S. Forest Service: Fee System for Rights-of-Way Program Needs Revision (GAO/RCED-96-84, Apr. 22, 1996). Federal Office Space: More Businesslike Leasing Approach Could Reduce Costs and Improve Performance (GAO/GGD-95-48, Feb. 27, 1995). Federal Lands: Fees for Communications Sites Are Below Fair Market Value (GAO/RCED-94-248, July 12, 1994). Hawaiian Homelands: Hawaii’s Efforts to Address Land Use Issues (GAO/RCED-94-24, May 26, 1994). Bank and Thrift Regulation: Better Guidance Is Needed for Real Estate Evaluations (GAO/GGD-94-144, May 24, 1994). Forest Service: Little Assurance That Fair Market Value Fees Are Collected From Ski Areas (GAO/RCED-93-107, Apr. 16, 1993). Appraisal Reform: Implementation Status and Unresolved Issues (GAO/GGD-93-19, Oct. 30, 1992). Resolution Trust Corporation: Better Qualified Review Appraisers Needed (GAO/GGD-92-40BR, Apr. 23, 1992). Indian Programs: Profile of Land Ownership at 12 Reservations (GAO/RCED-92-96BR, Feb. 10, 1992). Rangeland Management: Current Formula Keeps Grazing Fees Low (GAO/RCED-91-185BR, June 11, 1991). Farm Programs: Conservation Reserve Program Could Be Less Costly and More Effective (GAO/RCED-90-13, Nov. 15, 1989).
Pursuant to a legislative requirement, GAO reviewed the Bureau of Indian Affairs’ methods of establishing the lease value of Indian land, focusing on: (1) how the Bureau uses appraisals and other methods to establish the lease value of Indian land; (2) how its appraisal methods compare to those of other federal and state agencies and of private appraisers and what other methods are used to value federal, state, and private leases; (3) what impediments to leasing Indian trust land have been identified; (4) what alternatives to appraisals could be used to establish the lease value of Indian land, including any changes in federal laws and regulations that would be required; and (5) what efforts the Bureau has made to improve its appraisal methods.
GAO noted that: (1) the Bureau relies mostly on appraisals to ensure that Indian land is leased for a fair annual rental; (2) however, the Bureau has not defined fair annual rental and does not have a clear policy on how that amount should be determined; (3) GAO found no statutory or regulatory requirement that appraisals be used to establish lease values; (4) under certain circumstances, some Bureau offices use other methods in addition to appraisals; (5) the standards and methods that apply to Bureau appraisers also apply to other appraisers, including other federal, state, and private appraisers; (6) however, managers of other lands also use other methods to establish lease values; (7) according to several private appraisers GAO spoke to, the rents for agricultural leases on private land are often not set by appraisal; (8) however, leases for other uses on private land, such as business uses, may be valued by appraisal; (9) appraisal amounts were considered a particular problem because of Bureau officials' reluctance to approve leases for less than the appraised value; (10) in addition, while Bureau and other appraisers stated that there is no standard for the amount of time it should take to prepare or review an appraisal, some Indian communities expressed frustration with the time taken by the Bureau's processes; (11) in addition to appraisals, other methods are available for establishing lease values in some circumstances; (12) such other methods include advertising for competitive lease bids, conducting market surveys, and applying fee schedules or formulas; (13) laws and regulations do not require the use of appraisals to establish lease values and would not need to be changed for the Bureau to adopt these or other alternative methods to establish rents for leases; (14) Bureau officials said a more comprehensive review of laws, regulations, and court cases would need to be conducted before Bureau-wide changes would be considered; (15) the Department of the 
Interior is reviewing the Bureau's use of appraisals and is considering improvements to the Bureau's processes; (16) proposed improvements include training realty staff on the circumstances under which appraisals should be requested to limit the number of unnecessary appraisals and automating and thus streamlining the valuation processes for certain types of real estate transactions; and (17) the improvement plan also includes a recommendation that the Bureau develop a system for tracking appraisals to allow more effective use of appraisal resources.
The speed, functionality, and accessibility that create the enormous benefits of the computer age can, if not properly controlled, allow individuals and organizations to easily eavesdrop on or interfere with computer operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. As public and private organizations use computer systems to transfer more and greater amounts of money, sensitive economic and commercial information, and critical defense and intelligence information, the likelihood increases that malicious individuals will attempt to penetrate current security technologies, disrupt or disable our nation’s critical infrastructures, and use sensitive and critical information for malicious purposes. Because the threats have persisted and grown, in January 2008, the President began implementing a series of initiatives—commonly referred to as the Comprehensive National Cybersecurity Initiative (CNCI)—aimed primarily at improving DHS and other federal agencies’ efforts to protect against intrusion attempts and anticipate future threats. Two of these initiatives are related to improving cybersecurity R&D—one is aimed at improving the coordination of federal cybersecurity R&D, and the other is aimed at developing a plan for advancing the United States’ R&D in high-risk, high-return areas. We recently reported that CNCI faces significant challenges, including defining roles and responsibilities and coordinating efforts. Several federal entities oversee and aim to coordinate federal cybersecurity research; private entities have structures in place aimed at coordinating research; and numerous federal agencies and private companies fund or conduct this research. OSTP and OMB, both in the Executive Office of the President, are responsible for providing high-level oversight of federal R&D, including cybersecurity.
OSTP promotes the work of the National Science and Technology Council, which prepares R&D strategies that are intended to be coordinated across federal agencies. The council operates through its committees, subcommittees, and interagency working groups, which coordinate activities related to specific science and technology disciplines. Table 1 contains a brief description of the roles and responsibilities of the federal organizations and groups involved in the oversight and coordination of cybersecurity research. The private sector also has cybersecurity R&D working groups aimed at better coordinating R&D. Under an existing information-sharing framework within a plan referred to as the National Infrastructure Protection Plan, two Sector Coordinating Councils—Financial Services and Information Technology—have R&D working groups. These groups are composed of representatives from companies, associations, and other key sector participants to coordinate strategic activities and communicate broad sector member views associated with cybersecurity R&D throughout their sectors. Specifically, these working groups are charged with conducting annual reviews of R&D initiatives in their sectors and recommending updates to those priorities based on changes in technology, threats, vulnerabilities, and risk. Five agencies—NSF, DHS, DOD, DOE, and NIST—fund and conduct much of the government’s cybersecurity R&D. According to agency officials, NSF’s main cybersecurity R&D program is the Trustworthy Computing Program. This program is to support research and education activities that explore novel frameworks, theories, and approaches toward secure and privacy-preserving systems. According to the Subcommittee on Networking and Information Technology Research and Development’s (NITRD) supplement to the 2011 budget, NSF’s budget was approximately $71.4 million for cybersecurity R&D. 
DHS’s R&D efforts are aimed at countering threats to the homeland by making evolutionary improvements to current capabilities and developing revolutionary new capabilities. DHS’s cybersecurity R&D program resides in the agency’s Science and Technology Directorate. DHS has created R&D tools and made them accessible to the broader research community, such as an experimental research testing environment and a research data repository. In November 2009, DHS issued A Roadmap for Cybersecurity Research, which was an attempt to establish a foundation on which a national R&D agenda could be built. Furthermore, it was intended to provide detailed R&D agendas related to specific cybersecurity problems. Several agencies within DOD have cybersecurity R&D programs. The department’s Defense Research and Engineering organization within the Office of the Director provides coordination and oversight and supports certain cybersecurity research activities directly. The office is responsible for DOD’s science and technology activities as well as for oversight of research and engineering. Although the department’s research organizations (e.g., the Office of Naval Research, the Army Research Laboratory, and the Air Force Research Laboratory) have cybersecurity programs, the largest investments within its cybersecurity R&D are with the Defense Advanced Research Projects Agency (DARPA) and the National Security Agency (NSA). DARPA is the central R&D organization for the department, and its cybersecurity R&D budget for fiscal year 2010 is approximately $144 million. Its mission is to identify revolutionary, high-risk, high-payoff technologies of interest to the military, then to support the development of these technologies through transition. NSA also performs extensive cybersecurity research. Its research programs focus on high-speed encryption and certain defense capabilities, among other things. 
For fiscal year 2010, the agency’s budget was approximately $29 million for cybersecurity R&D. The research is conducted and supported by its National Information Assurance Research Group. In addition to DARPA and NSA, approximately $70 million was budgeted for fiscal year 2010 to the Office of the Secretary of Defense and other research organizations within DOD for additional cybersecurity R&D. DOE also conducts and funds cybersecurity R&D. Nearly all of DOE’s cybersecurity R&D investments are directed toward short-term applications. This work is conducted principally at the national laboratories. DOE reported to NITRD that it had spent $3.5 million on cybersecurity R&D for fiscal year 2010, and requested the same amount for fiscal year 2011. Additionally, DOE conducts cybersecurity R&D for other departments, such as DOD. NIST’s cybersecurity research program is multidisciplinary and focuses on a range of long-term and applied R&D. NIST also conducts security research in support of future standards and guidelines. NIST’s fiscal year 2010 budget for cybersecurity was about $29 million. The agency also receives funding from other agencies—such as DHS, the Department of Transportation, and the General Services Administration—to work on projects that are consistent with its cybersecurity mission. In addition, many private sector companies pursue government grants or contracts to conduct cybersecurity R&D on behalf of the government, or they independently self-fund cybersecurity research. The private sector generally conducts cybersecurity R&D in areas with commercial viability, focusing on developing products to help their customers better secure their systems and networks. For example, representatives from one private sector company stated that they have set up unused computers that attempt to attract hackers for the purpose of analyzing the attackers. Another company is conducting R&D related to the Internet’s architecture.
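Tallying the DOD component budgets reported above gives a rough sense of the department's fiscal year 2010 cybersecurity R&D investment. The figures are those reported in the text, in millions of dollars; the dictionary structure is just for illustration.

```python
# DOD cybersecurity R&D budgets reported for fiscal year 2010, in millions of dollars
dod_cyber_rd_fy2010 = {
    "DARPA": 144,
    "NSA": 29,
    "OSD and other DOD research organizations": 70,
}
total = sum(dod_cyber_rd_fy2010.values())
print(total)  # 243 -> roughly $243 million department-wide
```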
According to private sector officials, cybersecurity R&D does not necessarily have to be conducted by large companies; some small companies have made large contributions. Various public and private sector entities have issued reports that provide guidance and make recommendations for improvements in the nation’s activities related to specific aspects of cybersecurity, including R&D. The following key reports offer guidance and direction related to cybersecurity R&D: In February 2003, the White House’s The National Strategy to Secure Cyberspace identified five national priorities, one of which includes reducing cyberspace threats and vulnerabilities. As part of this priority, the strategy tasked the Director of OSTP with coordinating the development of a federal government R&D agenda for cybersecurity and updating it on an annual basis. In February 2005, the President’s Information Technology Advisory Committee (PITAC) recommended several changes in the federal government’s cybersecurity R&D portfolio. One of the report’s recommendations was to strengthen coordination and oversight of federal cybersecurity efforts. The President’s Council of Advisors on Science and Technology (PCAST) found in its 2007 report, entitled Leadership Under Challenge: Information Technology R&D in a Competitive World, that the existing federal networking and information technology R&D portfolio was unbalanced in favor of low-risk, small-scale, and short-term efforts. The council recommended that federal agencies increase support for larger-scale, longer-term R&D. In December 2008, the Center for Strategic and International Studies (CSIS) Commission on Cybersecurity for the 44th Presidency issued a series of recommendations for a comprehensive national approach to securing cyberspace. As part of the review, CSIS recommended the creation of a new National Office of Cyberspace, which would work with OSTP to provide overall coordination of cybersecurity R&D.
The Institute for Information Infrastructure Protection’s report, entitled National Cyber Security: Research and Development Challenges Related to Economics, Physical Infrastructure, and Human Behavior, stated that a national cybersecurity research agenda was urgently needed that prioritizes problems; encourages and tracks innovative approaches; and provides a pipeline of short-, medium-, and long-term projects. The National Security and Homeland Security Councils’ report, entitled Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure, recommended that a framework for R&D be developed. The report also recommended that the administration appoint a cybersecurity policy official to coordinate the nation’s cybersecurity policies and activities. Accordingly, as we have previously mentioned, in December 2009, President Obama appointed a national Cybersecurity Coordinator. Among many things, this official is tasked with updating the national cybersecurity strategy. We have a review under way that is assessing the implementation status of the recommendations that were made in the Cyberspace Policy Review. In November 2009, DHS issued a report entitled A Roadmap for Cybersecurity Research, which identifies critical needs and gaps in 11 cybersecurity research areas. In addition to the recent cybersecurity reports, we have reported on the importance of furthering cybersecurity R&D. Specifically, in September 2006, we reported on actions taken by federal entities to improve the oversight and coordination of federal cybersecurity R&D activities. We found that federal entities had taken several important steps to improve the oversight and coordination of federal cybersecurity R&D; however, a federal cybersecurity research agenda had not yet been developed. Furthermore, the federal government’s R&D repositories did not contain information about all of the federally funded cybersecurity research projects. 
As a result, we recommended, among other things, that the Director of OSTP establish firm timelines for the completion of the federal cybersecurity R&D agenda, which includes near-term, mid-term, and long-term research. We also recommended that the Director of OMB issue guidance to agencies on reporting information about federally funded cybersecurity R&D projects to the governmentwide repositories. Although OMB and OSTP have taken initial steps, the agencies have not fully implemented these recommendations. Additionally, in March 2009, we testified on key improvements needed to strengthen the national cybersecurity strategy. Based on input we received from expert panels, we identified 12 key improvements that are essential to enhancing the strategy and our national cybersecurity posture. One of these improvements was placing greater emphasis on cybersecurity R&D, including consideration of how to better coordinate government and private sector efforts. While efforts are under way by OSTP, NITRD, and individual agencies to improve cybersecurity R&D, significant challenges remain. We identified, through input from experts from relevant federal, private, and academic organizations, six major challenges that are impeding efforts to improve cybersecurity R&D. According to key expert bodies, a national cybersecurity R&D agenda should embody several characteristics. Specifically, according to the National Strategy to Secure Cyberspace, a national R&D agenda should include near-term (1 to 3 years), mid-term (3 to 5 years), and long-term (5 years and longer) goals. Additionally, an agenda should include national-level R&D priorities that go beyond goals specific to agencies’ and companies’ missions. It is also essential that cyberspace security research efforts be ranked across all sectors and funding sources to ensure that national goals are addressed.
Additionally, according to the Institute for Information Infrastructure Protection, it is important that an agenda include perspectives from both the public and private sectors. An agenda should also specify timelines and milestones for conducting cybersecurity R&D activities. Moreover, in 2006, we recommended that OSTP develop a federal cybersecurity R&D agenda that includes near-term, mid-term, and long-term research. Additionally, pursuant to the High-Performance Computing Act of 1991, as amended by the Next Generation Internet Research Act of 1998 and the America COMPETES Act of 2007, NITRD is responsible for setting goals and priorities for cybersecurity R&D. However, despite its legal responsibility and our past recommendations, NITRD has not created a prioritized national or federal R&D agenda. Officials from DOD, DOE, and DHS indicated that there is a lack of a prioritized cybersecurity R&D agenda. Furthermore, the aggregated ranked responses from 24 cybersecurity R&D private and academic experts we contacted indicated that the lack of a prioritized national R&D agenda is the top challenge that they believe should be addressed. While officials from NITRD and OMB stated that they consider the following key documents to comprise a national R&D agenda, these documents do not constitute, whether taken collectively or separately, a prioritized national agenda: NITRD’s 2006 Cyber Security and Information Assurance Working Group’s Federal Plan for Cyber Security and Information Assurance R&D: As we have previously reported, this plan was intended to be the first step toward developing a federal agenda for cybersecurity research, which provides baseline information about ongoing federal R&D activities; however, mid-term and long-term cybersecurity research goals were not defined. Furthermore, the plan does not specify timelines and milestones for conducting R&D activities, nor does it assign responsibility for implementation. 
Additionally, this plan was published in 2006, and many experts indicated that it is outdated. For example, NSF officials, who were co-developers of the plan, stated that the document does not take into account new types of threats that have appeared in the past 4 years, and some of the issues identified in the 2006 report are less critical today. According to NITRD officials, this plan is intended to be a 5-year plan, and they do not plan to update it until 2012. The National Security and Homeland Security Councils’ 2009 Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure: This report presents relevant high-level challenges and recommendations for improvements that cover the spectrum of cybersecurity issues. However, according to NSF officials, the report does not contain sufficient detail related to R&D to be a research agenda. Furthermore, DHS officials stated that the Cyberspace Policy Review does not attempt to articulate a national-level R&D agenda. August 2009 OMB and OSTP memorandum, “Science and Technology Priorities for the FY 2011 Budget (M-09-27)”: This memorandum also does not provide guidance on cybersecurity R&D priorities. As pointed out by DHS officials, this memorandum provides high-level points for consideration but does not provide a clear national cybersecurity R&D agenda. Moreover, DOD stated that the memorandum only provides general guidance for departments and agencies as they develop their overall science and technology programs. National Science and Technology Council’s 2008 Federal Plan for Advanced Networking and Research and Development: This plan specifically focuses on establishing goals and time frames for enhancing networking capabilities, which includes enhancing networking security and reliability. However, networking is just one of several areas that need to be addressed in the cybersecurity R&D arena. 
The private sector organizations and cybersecurity R&D experts that we contacted also did not consider the documents to constitute a national R&D agenda. Several private sector representatives stated that they exclusively use their own strategies to determine their cybersecurity R&D priorities. According to NITRD’s Cyber Security and Information Assurance Interagency Working Group (CSIA IWG) members, they have recently begun working on developing a framework that focuses on three main cybersecurity R&D themes. The DOD co-chair of CSIA IWG stated that he believes the framework will constitute a national cybersecurity R&D agenda. The three themes that comprise the framework are (1) supporting security policies and security services for different types of cyberspace interactions; (2) deploying systems that are both diverse and changing, to increase complexity and costs for attackers and to improve system resiliency; and (3) developing cybersecurity incentives to create foundations for cybersecurity markets, establish meaningful metrics, and promote economically sound and secure practices. NITRD officials stated that they expect the framework to be finalized in time for the 2012 budget submission. However, these three themes do not cover all of the priorities that should be included in a national cybersecurity R&D agenda. For example, among other things, issues such as global-scale identity management, which was identified by DHS as a top problem that needs to be addressed, and computer forensics, which was identified by the private sector and several key government reports as a major area needing government focus, are not included in this framework. Beyond developing a federal plan as we have previously recommended, there is a need for a broader national cybersecurity R&D agenda.
Until such an agenda is developed that (1) contains short-term, mid-term, and long-term priorities, (2) includes input from both public and private sectors, and (3) is consistent with the updated national cybersecurity strategy (when it is available), increased risk exists that agencies and private sector organizations will focus on their individual priorities for cybersecurity R&D, which may not be the most important national research priorities. According to key expert bodies, effective leadership for cybersecurity R&D has several attributes. Specifically, PITAC indicated that federal cybersecurity R&D efforts should be focused, coordinated, and overseen by a central body. In particular, the committee recommended that NITRD become the focal point for coordinating federal cybersecurity R&D efforts. Furthermore, according to CSIS, NITRD should lead the nation toward an aggressive research agenda. Additionally, our previous work has highlighted the need to define and agree on roles and responsibilities, including how an effort will be led. In doing so, the entities involved can clarify who will do what, organize their joint and individual efforts, and facilitate decision making. Although NITRD is primarily responsible for providing leadership in coordinating cybersecurity R&D, it has played a facilitator role rather than leading agencies in a strategic direction toward a cybersecurity R&D agenda. Experts from 24 private sector and academic R&D entities ranked this challenge as the second most important cybersecurity R&D challenge, and officials from 2 federal agencies agreed that there is a lack of government leadership. For example, 2 private sector experts stated that there is confusion about who in the government is leading the cybersecurity R&D area.
Another private sector expert stated that while NITRD is playing a facilitator role, there is no central entity that is strategically leading cybersecurity R&D in the federal government. NITRD has intentionally decided to play a facilitator role. Specifically, NITRD carries out several activities, such as hosting monthly meetings in which agencies discuss their initiatives and compiling all of its participating agencies’ cybersecurity R&D efforts and budgets; however, it generally does not make any specific decisions about how these efforts could be better coordinated. Recently, NITRD pointed to the National Cyber Leap Year initiative and the output from that initiative—CSIA IWG’s cybersecurity R&D framework that is under development—as evidence of NITRD’s leadership approach; however, this framework has not been completed. Until NITRD exercises its leadership responsibilities, federal agencies will likely lack overall direction for cybersecurity R&D. We have previously emphasized the importance of establishing a process to ensure widespread and ongoing sharing of key cybersecurity-related information between federal agencies and private sector entities. Additionally, according to the 2009 Cyberspace Policy Review, it is important that the federal government share cybersecurity R&D information with the private sector. To improve R&D-related information sharing, in 2008 the Information Technology Sector Coordinating Council (IT-SCC) R&D working group proposed a framework to the Information Technology Government Coordinating Council and NITRD to establish a process for federal agencies and the private sector to share key information on R&D initiatives. Approximately 2 years have passed since the IT-SCC made its proposal, and still no decision has been made on whether the government will pursue the working group’s proposal, nor has the government developed an alternative approach to sharing key R&D information. 
According to federal and private sector experts, key factors reduce the private sector's and the government's willingness to share information and to trust each other with regard to researching and developing new cybersecurity technologies. Specifically, private sector officials stated that they are often unwilling to share details of their R&D with the government because they want to protect their intellectual property. On the government side, officials are concerned that the private sector is too focused on making a profit and may not necessarily conduct R&D in the areas that require the most attention. Additionally, government and private sector officials indicated that the government does not have a process in place to communicate the results of completed federal R&D. The private and public sectors share some cybersecurity R&D information, but such information sharing generally occurs only on a project-by-project basis. For example, NSF's Industry University Cooperative Research Center initiative establishes centers to conduct research that is of interest to both industry and academia, and DOD's Small Business Innovation Research program funds R&D at small technology companies. However, according to federal and private sector experts, widespread and ongoing information sharing generally does not occur. Without such sharing, gaps in public and private sector R&D are difficult to identify. More recently, NITRD has taken steps to work more formally with the private sector and academia, such as hosting the National Cyber Leap Year Summit in August 2009, which aimed to bring together researchers and developers from the private and public sectors. Nevertheless, without an ongoing process for industry and government to share cybersecurity R&D information, the nation is at risk of funding duplicative efforts or having gaps in needed R&D. Several entities have emphasized that cybersecurity R&D should include long-term, complex projects.
Specifically, the President's 2003 National Strategy to Secure Cyberspace indicated that it is important that the Director of OSTP develop a cybersecurity research agenda that includes long-term (5 years and longer) research. In 2006, we reported that researchers had indicated the need for long-term efforts, such as researching cybersecurity vulnerabilities, developing technological solutions, and transitioning research results into commercially available products. Furthermore, in August 2007, PCAST recommended that federal agencies increase support for larger-scale, longer-term R&D. While federal officials point to specific long-term cybersecurity R&D investments, such as DOD's development of a National Cyber Range and NSF's Trustworthy Computing Program, OSTP has not established long-term research goals in a national agenda, an absence that continues to impede the advancement of cybersecurity R&D. According to experts, one contributing factor to the limited focus on long-term R&D is that industry concentrates on short-term, profit-generating R&D. Furthermore, experts stated that unless there is commercial viability, industry generally does not invest time or money. Another major contributing factor is that the federal government has been focused on obtaining and implementing new solutions immediately. For example, federal cybersecurity grants generally require grantees to deliver their research within a 3-year period, and, according to a cybersecurity expert at Purdue University, in many cases grantees are required to show the progress of their research within 6 months. Although highly beneficial, short-term R&D, by definition, has a limited focus and is not intended to independently tackle the more complex and fundamental problems related to cybersecurity, such as security problems related to the Internet's infrastructure.
If the focus of cybersecurity R&D continues to be short-term and confined to the current technological environment, the result may be stunted research and growth, short-term fixes, and systems and networks that are not developed with the most appropriate security. Legislation and several key reports have stressed the importance of having sufficient cybersecurity education programs and an ample supply of qualified cybersecurity professionals. Specifically, the Cyber Security Research and Development Act stated that the United States needs to expand and improve the pool of information security professionals, including researchers, in the workforce. In addition, the INFOSEC Research Council reported that it is important that the United States enhance cybersecurity academic education and training. In December 2008, the Center for Strategic and International Studies Commission on Cybersecurity for the 44th Presidency reported that the federal government needs to increase the supply of skilled workers and to create a career path (including training and advancement) for cyberspace specialists in the federal government. Furthermore, one of the national Cybersecurity Coordinator's responsibilities is updating the national cybersecurity strategy, which addresses cybersecurity human capital needs, among other things. While several federal programs intended to promote cybersecurity-related professions exist today—such as NSF's Pathways to Revitalize Undergraduate Computing Education program and DOD's Science, Mathematics and Research for Transformation Scholarship for Service program, which seek to develop a U.S. workforce with computing competencies—government officials and private sector experts agree that more can be done. For example, DHS officials indicated there is a shortage of cybersecurity R&D management officials.
DOD officials indicated that more can be done to encourage personnel to pursue security degrees, and officials from DOE stated that it is very difficult to find highly qualified researchers with the requisite experience. Private sector experts voiced similar concerns, such as the need to cultivate talented people and the need for employees with more cybersecurity R&D experience. Government officials and cybersecurity experts suggested that several factors have contributed to the lack of human capital expertise in the area of cybersecurity R&D. For example, federal officials and cybersecurity experts suggested that unclear career paths in cybersecurity have contributed to the lack of a sufficient skill base. Another expert stated that colleges or universities do not have the appropriate tools and products to adequately teach cybersecurity to students. While it has been 7 years since The National Strategy to Secure Cyberspace articulated plans for improving training and creating certifications, human capital weaknesses still exist. Without obtaining information on the shortages in researchers in the cybersecurity field, it will be difficult for the national Cybersecurity Coordinator to update the national cybersecurity strategy with the appropriate cybersecurity human capital plans for addressing such weaknesses. Congress has recognized the importance of making available information on federal R&D funding for coordinating federal research activities and improving collaboration among those conducting federal R&D. To improve the methods by which government information is organized, preserved, and made accessible to the public, the E-Government Act of 2002 mandated that OMB ensure the development and maintenance of a governmentwide repository and Web site that integrates information about federally funded R&D, including R&D related to cybersecurity. The Director of OMB delegated this responsibility to NSF. 
As we have previously reported, NSF maintained a repository for federally funded R&D, known as the Research and Development in the U.S. (RaDiUS) database; however, the database was incomplete and not fully populated. Therefore, in 2006, we recommended that OMB issue guidance to agencies on reporting information about federally funded cybersecurity R&D projects to RaDiUS. OMB did not implement our recommendation. In 2008, the database was decommissioned because, according to a senior official at NSF, the data were incomplete, users had difficulty using it, and the database was built with antiquated technology. In March 2010, OMB officials stated that they are currently evaluating several repositories to replace RaDiUS as a centralized database to house all government-funded R&D programs, including cybersecurity R&D. While officials stated that they anticipate making a decision on a database by the end of fiscal year 2010, they were unable to specify when a database would be in place that tracks all cybersecurity R&D information. Additionally, it is not clear how this effort fits into the overall coordination efforts for which NITRD is responsible. Tracking funding that is allocated to classified R&D adds to the complexity of this challenge. For example, according to a DOD official, the majority of DOD's cybersecurity R&D consists of either classified R&D or unclassified components of a program mixed with classified components, thereby rendering the entire program classified. As such, it is difficult to identify the exact funding that is allocated to classified versus unclassified R&D. There is currently no mechanism in place that identifies all cybersecurity R&D initiatives governmentwide and their associated funding. DHS officials stated that it would be helpful to have a clearinghouse that they could use to view what activities are already being conducted by the government.
In addition, a private sector expert stated that having a centralized database in place would improve coordination between the public and private sectors. However, challenges to maintaining such a mechanism exist. For example, an OSTP official indicated that it is difficult to develop and enforce policies for identifying specific funding as R&D. Additionally, the level of detail to be disclosed is also a factor because national security must also be protected. However, without a mechanism to track all active and completed cybersecurity R&D initiatives, federal researchers and developers as well as private companies lack essential information about ongoing and completed R&D, thus increasing the likelihood of duplicative efforts, inefficient use of government funding, and lost collaboration opportunities. Additionally, without a complete understanding of how much each federal agency is spending on cybersecurity R&D, it may be difficult to make the appropriate resource allocation decisions. OSTP and NITRD have recently taken steps to try to improve the coordination and oversight of cybersecurity R&D. However, key challenges still exist, and, until these challenges are addressed, the United States may continue to struggle in protecting and securing its critical systems and networks. Specifically, the absence of a national cybersecurity R&D agenda and leadership increases the risk that efforts will not reflect national priorities, key decisions will be postponed, and federal agencies will lack overall direction for their efforts. Furthermore, without sufficient attention to complex, long-term research projects and input on the current weaknesses and shortages in researchers in cybersecurity, the nation risks falling behind in cybersecurity and not being able to adequately protect its digital infrastructure. 
Finally, the lack of a mechanism to track all active and completed cybersecurity R&D initiatives and the lack of a process for sharing information among the public and private sectors may result in duplicative efforts or gaps in needed R&D. To help address the key cybersecurity R&D challenges, we are recommending that the Director of the Office of Science and Technology Policy, in conjunction with the national Cybersecurity Coordinator, direct the Subcommittee on Networking and Information Technology Research and Development to exercise its leadership responsibilities and take the following four actions:

- Establish a comprehensive national R&D agenda by expanding on the CSIA IWG framework and ensure that it contains priorities for short-term, mid-term, and long-term complex cybersecurity R&D; includes input from the private sector and academia; and is consistent with the updated national cybersecurity strategy (when available).

- Identify and report shortages in researchers in the cybersecurity field to the national Cybersecurity Coordinator, which should be used to update the national cybersecurity strategy with the appropriate plans for addressing human capital weaknesses.

- Establish a mechanism, in working with the Office of Management and Budget and consistent with existing law, to keep track of all ongoing and completed federal cybersecurity R&D projects and associated funding, to the maximum extent possible without jeopardizing national security.

- Utilize the newly established tracking mechanism to develop an ongoing process to make federal R&D information available to federal agencies and the private sector.

We received written comments on a draft of this report, which were transmitted via e-mail by OSTP's Assistant Director for Information Technology R&D. We also received written comments from the Director of NIST. Letters from these agencies are reprinted in appendixes II and III.
In addition, we received comments from a Senior Science Advisor from NSF and technical comments from the Director of the Departmental Audit Liaison from DHS, via e-mail. Additionally, representatives from DOE indicated via e-mail that they reviewed the draft report and did not have any comments. Officials from DOD and OMB did not respond to our request for comments. The Assistant Director for Information Technology R&D from OSTP agreed with our recommendation and provided details on the office's plans and actions to address it. For example, to address the part of the recommendation to establish a comprehensive national R&D agenda, OSTP has begun updating its current 5-year plan for cybersecurity R&D. Additionally, to address the portion of the recommendation to identify and report shortages in researchers in the cybersecurity field, NITRD officials plan to provide an assessment of these shortages as part of their annual planning and review processes. The Assistant Director for Information Technology R&D also indicated that OSTP did not concur with certain findings within our report; however, he did not provide any additional information. The Director of NIST indicated that he agreed with our recommendation. However, he stated that NIST officials recommended that we make two changes to the draft report. First, the officials believe that OSTP and NITRD are coordinating research activities and working with the federal government research community to identify a research strategy that meets critical future needs in cybersecurity. We acknowledge in the report that NITRD facilitates several activities, such as hosting monthly meetings in which agencies discuss their initiatives and compiling all of its participating agencies' cybersecurity R&D efforts and budgets. We also acknowledge that NITRD hosted the National Cyber Leap Year Summit in August 2009, which aimed to bring together researchers and developers from the private and public sectors.
Nevertheless, as we state in the report, NITRD is not leading agencies in a strategic direction toward a cybersecurity agenda. Second, officials requested that we add a sentence stating that NIST officials believe a prioritized research strategy is evolving and that agencies will base their research agendas on this strategy and their mission needs. We acknowledge in the report that NITRD is currently working on developing a framework that focuses on three main cybersecurity R&D themes. NITRD officials expect the framework to be finalized in time for the 2012 budget submission. However, these themes do not cover all of the priorities that should be included in a national cybersecurity R&D agenda. Regarding comments from NSF's Senior Science Advisor, she indicated that she generally agreed with our recommendation. The Senior Science Advisor and the Departmental Audit Liaison from DHS provided technical comments, which have been incorporated in the report where appropriate. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Secretaries of Homeland Security, Defense, Energy, and Commerce; the Directors of the Office of Science and Technology Policy, Office of Management and Budget, and National Science Foundation; and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact David A. Powner at (202) 512-9286 or pownerd@gao.gov or Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
The objective of our review was to determine the key challenges to enhancing national-level cybersecurity research and development (R&D) efforts among the federal government and private companies. To identify the key agencies involved in federal cybersecurity R&D, we researched several cybersecurity R&D-related documents, including the President's Information Technology Advisory Committee's 2005 report, the Subcommittee on Networking and Information Technology Research and Development's (NITRD) Cyber Security and Information Assurance Interagency Working Group's 2006 Federal Plan for Cyber Security and Information Assurance R&D, the Institute for Information Infrastructure Protection's 2009 report, and the National Security and Homeland Security Councils' Cyberspace Policy Review. We also reviewed NITRD's 2010 Supplement to the President's Budget, which lists key agencies that fund and conduct cybersecurity R&D, and a previous GAO report to identify the agencies that provide high-level oversight. These agencies include the Departments of Defense, Energy, and Homeland Security; the National Institute of Standards and Technology; the National Science Foundation; the Office of Management and Budget; and the Office of Science and Technology Policy. To identify private sector organizations with a major role in cybersecurity R&D, we consulted and interviewed cybersecurity experts in the information technology (IT) and communication sectors. We developed a list of companies through the membership lists of IT and communication private sector councils, which are composed of a wide range of companies that specialize in these areas. We narrowed down the list by asking each company whether it conducts cybersecurity R&D and whether it would be willing to speak to us about its cybersecurity R&D priorities, as well as its views on what role the government should be playing in the cybersecurity R&D arena.
Those that responded positively to our questions consisted of 18 companies that we included in our review. We also identified 9 additional private sector and academic organizations. We selected these experts on the basis of those we have consulted in previous reviews or who were recommended to us by other experts. Additionally, we identified other academic experts from our Executive Council for Information Management and Technology, which is composed of public- and private-sector IT management experts who assist us in obtaining different perspectives on current IT management and policy issues. We included the following industry and academic entities in our review:

Alcatel-Lucent
AT&T
Carnegie Mellon University
Digital Intelligence
Google
IBM Corporation
Information Security Forum
Information Technology Sector Coordinating Council
In-Q-Tel
Intel Corporation
Lumeta Corporation
McAfee, Inc.
Microsoft
Net Witness
Neustar
Purdue University
Oracle Corporation
Raytheon BBN Technologies
Renesys
StrongAuth, Inc.
Symantec
University at Albany, Center for Technology in Government
Verizon Business

Three of the 27 academic and private organizations asked us not to include their names in our report, and one expert was a private sector consultant who was a former director of the National Coordination Office. To identify key challenges to enhancing national-level cybersecurity R&D efforts, we analyzed documentation, such as agencies' research plans and cybersecurity reports, and interviewed federal officials and industry experts. We then aggregated the identified challenges and validated the top challenges by asking the experts to rank the challenges in order of importance.
In addition, we analyzed relevant federal law and policy, including the National Strategy to Secure Cyberspace, the High-Performance Computing Act of 1991, the E-Government Act of 2002, the Cyber Security Research and Development Act, the Next Generation Internet Research Act of 1998, and Homeland Security Presidential Directive 7. We also reviewed prior GAO reports. We conducted this performance audit from June 2009 to June 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In addition to the contacts named above, the following staff also made key contributions to this report: Shannin O'Neill, Assistant Director; Rebecca Alvarez; Jamey Collins; Eric Costello; Min Hyun; Sairah Ijaz; Kendrick Johnson; Anjalique Lawrence; Lee McCracken; and Kevin Walsh.

Computer networks and infrastructures, on which the United States and much of the world rely to communicate and conduct business, contain vulnerabilities that can leave them susceptible to unauthorized access, disruption, or attack. Investing in research and development (R&D) is essential to protect critical systems and to enhance the cybersecurity of both the government and the private sector. Federal law has called for improvements in cybersecurity R&D, and, recently, President Obama has stated that advancing R&D is one of his administration's top priorities for improving cybersecurity. GAO was asked to determine the key challenges in enhancing national-level cybersecurity R&D efforts among the federal government and private companies.
To do this, GAO consulted with officials from relevant federal agencies and experts from private sector companies and academic institutions as well as analyzed key documents, such as agencies' research plans. Several major challenges impede efforts to improve cybersecurity R&D. Among the most critical challenges are the following:

1) Establishing a prioritized national R&D agenda. While R&D that is in support of specific agencies' missions is important, it is also essential that national research efforts be strategically guided by an ordered set of national-level R&D goals. Additionally, it is critical that cyberspace security research efforts are prioritized across all sectors to ensure that national goals are addressed. Accordingly, the National Strategy to Secure Cyberspace recommended that the Office of Science and Technology Policy (OSTP) coordinate the development of an annual cybersecurity research agenda that includes near-term (1-3 years), mid-term (3-5 years), and long-term (5 years or longer) goals. Although OSTP has taken initial steps toward developing such an agenda, one does not currently exist. OSTP and Office of Management and Budget officials stated that they believe an agenda is contained in existing documents; however, these documents are either outdated or lack appropriate detail. Without a current national cybersecurity R&D agenda, the nation is at risk that agencies and private sector companies may focus on their individual priorities, which may not be the most important national research priorities.

2) Strengthening leadership. While officials within OSTP's Subcommittee on Networking and Information Technology Research and Development (NITRD)--a multiagency coordination body that is primarily responsible for providing leadership in coordinating cybersecurity R&D--have played a facilitator role in coordinating cybersecurity R&D efforts within the federal government, they have not led agencies in a strategic direction.
NITRD's lack of leadership has been noted by many experts as well as by a presidential advisory committee that reported that federal cybersecurity R&D efforts should be focused, coordinated, and overseen by a central body. Until NITRD exercises its leadership responsibilities, federal agencies will lack overall direction for cybersecurity R&D.

3) Tracking R&D funding and establishing processes for the public and private sectors to share key R&D information. Despite a congressional mandate to develop a governmentwide repository that tracks federally funded R&D, including R&D related to cybersecurity, such a repository is not currently in place. Additionally, the government does not have a process to foster the kinds of relationships necessary for coordination between the public and private sectors. While NITRD hosted a major conference last year that brought together public, private, and academic experts, this was a one-time event, and, according to experts, next steps remain unclear. Without a mechanism to track all active and completed cybersecurity R&D initiatives, federal researchers and developers as well as private companies lack essential information about ongoing and completed R&D. Moreover, without a process for industry and government to share cybersecurity R&D information, the nation is at risk of having unforeseen gaps. GAO is recommending that the Director of OSTP direct NITRD to exercise its leadership responsibilities by taking several actions, including developing a national agenda and establishing and utilizing a mechanism to keep track of federal cybersecurity R&D funding. OSTP agreed with GAO's recommendation and provided details on planned actions.
USSTRATCOM’s global missions provide a wide range of capabilities that are intended to respond to a dramatically changing security environment brought about by emerging global, transregional, and asymmetric threats to U.S. national security. Unlike the command’s nuclear deterrence and space operations missions, the command’s global strike; integrated ballistic missile defense; intelligence, surveillance, and reconnaissance; information operations; global command and control; and combating weapons of mass destruction missions had not been previously assigned to a unified command. These newer missions have been performed, mostly ad hoc, by multiple DOD organizations and the military services but did not have a primary joint sponsor and central focus within DOD. The command’s most recent reorganization, begun in late 2004, shifted the day-to-day planning and execution responsibility for most of its missions from its headquarters to several new subordinate organizations. USSTRATCOM intends that its latest organizational construct will provide greater focus, continuity, and performance for its missions and better accommodate the execution of the command’s global responsibilities by reducing organizational layers and enabling communication and information to flow more easily from the most senior levels of leadership to those producing the information. The command envisions that this new organizational construct will reduce the cycle time for reaching and implementing decisions for its missions, increase the effectiveness of the products and services it provides in support of the regional combatant commands, and provide improved access to all of the command’s capabilities. USSTRATCOM, for example, has recently established a joint space operations center, under its Joint Functional Component Command for Space and Global Strike, to more effectively respond to requests from regional combatant commands for space capabilities. 
As shown in figure 1, the current USSTRATCOM organization is composed of a command headquarters, service component or supporting commands, joint functional component commands, centers, and task forces. Under the new organization, USSTRATCOM’s headquarters would focus primarily on overseeing tasks for command and control; strategic-level integration; and advocacy, including developing strategy and plans, managing command and control operations and support, and advocating for mission capabilities. It also has responsibility for designating objectives; assigning missions, tasks, forces, and resources; defining policy and concepts; and providing direction to the command’s subordinate organizations. Additionally, USSTRATCOM headquarters has responsibility for planning and deploying forces for the command’s nuclear mission. The reorganization created four new joint functional component commands for (1) space and global strike; (2) integrated missile defense; (3) intelligence, surveillance, and reconnaissance; and (4) network warfare. These commands have day-to-day responsibilities for operational and tactical-level planning and execution and management of forces. The new organization also includes the USSTRATCOM Center for Combating Weapons of Mass Destruction, Joint Information Operations Center, and Joint Task Force for Global Network Operations that work with the command, the unified commands, and mission partners to provide operational solutions to mission-related problems. The command has also geographically aligned many of its subordinate organizations with supporting military services and Defense agencies to leverage the expertise and resources in their respective mission areas.
For example, the command has partnered and co-located its Joint Functional Component Command for Intelligence, Surveillance, and Reconnaissance with the Defense Intelligence Agency in the Washington, D.C., area to take advantage of the agency’s capabilities and improve access and coordination with DOD and national intelligence agencies. To further strengthen the partnership between the organizations, the commander of the component command is also the Director of the Defense Intelligence Agency. In response to intelligence information requests from the combatant commanders, the agency would globally prioritize intelligence collection requirements and the joint functional component command would then prioritize and task the appropriate intelligence, surveillance, and reconnaissance assets to best meet those requirements. Appendix IV provides additional information about the command’s key mission organizations. Additionally, the reorganization established new command relationships with the military services to better focus service support. USSTRATCOM accesses capabilities from each of the services through its three service component commands—the Army Forces Strategic Command, Air Force Forces Strategic Command, and Marine Corps Forces Strategic Command—and the Commander, U.S. Navy Fleet Forces Command. Unlike the other services, the Navy Fleet Forces Command is a supporting command rather than a designated service component command to USSTRATCOM. However, Fleet Forces Command’s overarching responsibilities in supporting USSTRATCOM are consistent with those of the other service components. Each service command acts as the primary focal point for its respective service capabilities and has responsibilities for ensuring that forces provided to USSTRATCOM are organized, trained, and equipped to support the command in carrying out its missions and providing the administrative chain of command and control for its respective service forces. 
Because of its expanded set of missions, USSTRATCOM’s budget has grown significantly from $276.8 million of total obligation authority in then-year dollars in fiscal year 2003 to $500.4 million in fiscal year 2006, excluding appropriations for military personnel and USSTRATCOM service component commands and other supporting agencies. The command’s annual budget is expected to increase to $551.4 million by fiscal year 2011. Table 1 details the command’s historic and projected budget by major appropriations account from fiscal years 2003 through 2011. The command’s budget consists mostly of operation and maintenance funding, with lesser amounts of research and development and procurement funding associated with programs for intelligence, information operations, network warfare, command and control, and planning systems. Appendix I provides more details about USSTRATCOM’s budget. Since its establishment, USSTRATCOM’s authorized number of military and civilian positions has increased by about 300. As of October 2005, the command’s overall authorized personnel level was composed of 2,947 military and civilian positions, of which 91 percent were filled. Of the 2,947 positions, military positions comprise about 72 percent (2,122), with the Air Force providing the largest number of positions (1,256). Civilian positions make up the remaining 28 percent (835). The command has begun to fill positions in its new mission organizations from within its existing authorized personnel levels by transferring positions from its headquarters to the new organizations over a 3-year period beginning with fiscal year 2005. The command’s authorized personnel levels are made up of a relatively small number of skills, although the mix of military skills has changed since 2002. Additional information about USSTRATCOM’s authorized personnel levels is in appendix II.
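The budget figures above can be restated as percentage growth over the fiscal year 2003 baseline. The following is a simple illustrative calculation, not from the report itself; the variable names are ours.

```python
# Budget figures in millions of then-year dollars, as reported.
fy2003 = 276.8   # total obligation authority, FY2003
fy2006 = 500.4   # total obligation authority, FY2006
fy2011 = 551.4   # projected, FY2011

# Percentage growth relative to the FY2003 baseline.
growth_fy2006 = (fy2006 - fy2003) / fy2003 * 100
growth_fy2011 = (fy2011 - fy2003) / fy2003 * 100

print(f"FY2003-FY2006 growth: {growth_fy2006:.1f}%")   # about 80.8%
print(f"FY2003-FY2011 growth: {growth_fy2011:.1f}%")   # about 99.2%
```

In other words, the command's budget nearly doubled over this period, a magnitude of growth consistent with the significant expansion of its mission set.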
USSTRATCOM has made progress in implementing its new missions and has taken a number of positive actions in each of its mission areas to prepare or update concepts of operations, plans, guidance, and policy; identify resources needed for mission planning and execution; and establish an organization to more effectively manage its responsibilities and provide the range of capabilities across its mission areas. Many of the command’s actions are consistent with the useful practices and lessons learned with high-performing organizations undergoing successful transformations that we have identified in our past work. However, further steps are needed to build on this progress in order to achieve the broad goals envisioned by the President and Secretary of Defense in creating the command. While the command has taken initial steps to include its new missions in its exercise program, USSTRATCOM has not yet fully developed a robust exercise program that integrates exercise support available from the U.S. Joint Forces Command, which can provide planning, training, and exercise tools. In addition, while USSTRATCOM’s leadership has provided general guidance to its mission organizations, it has not provided specific information or identified consistent requirements for achieving full operating capability and most of the command’s new mission organizations have not established clear criteria for determining when they have reached this milestone. Also, while the command has adopted some key management principles, the command has not yet developed strategic goals and outcome-oriented performance measures and criteria for assessing results across the command and in each of its mission areas. 
Since its establishment, USSTRATCOM has made progress in implementing its new missions and has taken a wide range of positive actions to integrate these missions into its organization, such as developing various plans, concepts, and guidance; establishing procedures and processes; identifying personnel and funding resources; developing new relationships; building communication networks; and providing education, training, and exercises. For example, the command has prepared concepts of operations for its missions and organization, such as operations for network warfare and global integrated missile defense, and has recently approved a concept of operations describing the processes it will use in integrating its diverse capabilities and providing warfighting options to regional combatant commands. Additionally, USSTRATCOM has taken other actions, including (1) establishing collaboration tools and processes to improve communication for planning, execution, and evaluation among its organizations and customers; (2) creating various processes and groups within the command to advocate for the capabilities necessary to accomplish its missions, such as advocating for modification of the Trident II missile to provide an improved near-term conventional global strike capability; and (3) upgrading and expanding its facilities, such as improvements to the command’s headquarters command center. The command has also taken actions to demonstrate the value added of its missions for other combatant commands and DOD organizations. For example, to implement its mission responsibilities for preventing and defending against intrusions into DOD’s critical information network systems, the command’s Joint Task Force for Global Network Operations has recently instituted stringent use controls and trained system users to improve security and reduce vulnerabilities for these systems. 
As its missions have matured, USSTRATCOM has also undertaken several reorganizations to more effectively manage its responsibilities and provide the range of capabilities across its mission areas. Many of the actions the command has taken to implement its latest reorganization are consistent with the useful practices and lessons learned with high-performing organizations undergoing successful transformations that we have identified in our prior work, including establishing a matrixed, horizontal organizational structure that provides a greater external focus for its customers, forms partnerships with key organizations, and openly shares information. As discussed earlier, its latest reorganization intends to leverage essential competencies of associated components and key supporting agencies and decentralize the responsibility of its headquarters for the day-to-day planning and execution of its primary mission areas to several interdependent mission organizations. While the command’s mission organizations differ in the extent of their maturity, USSTRATCOM has focused considerable attention over the past year on establishing their responsibilities, command and agency relationships, and operational competencies, and assigning personnel to these new organizations. Its senior leadership has also taken an active and visible role in supporting the organizational changes underway. USSTRATCOM has restructured its exercise program to better incorporate its missions and has conducted a few training exercises involving all of its missions and new organizations. While the command is taking steps to address the challenges in more fully including its missions in its exercises, it has not yet fully developed a robust exercise program that integrates exercise support available from the U.S. Joint Forces Command’s Joint Warfighting Center, which can provide planning, training, and exercise tools. USSTRATCOM restructured its exercise program in 2003. 
It began incorporating its newer missions into its exercises beginning in November 2004 and brought together all of its missions in the same exercise in its two most recent exercises, Global Lightning in November 2005, and Global Thunder in April 2006. Global Lightning is an annual USSTRATCOM-sponsored command-post exercise, which involves the commander and his staff in testing and validating the communications within and between headquarters and simulated forces in deterring a military attack and employing forces as directed. The annual Global Thunder exercise is the command’s main nuclear deterrence field training exercise, which uses actual forces in training commanders, staff, and individual units at all levels of their warfare skills. Another command-post exercise, Global Storm, is designed to cover the command’s missions that are most relevant in the early stages of conflict, such as information operations and intelligence, surveillance, and reconnaissance. The command plans to conduct this exercise annually depending on scheduling and resource considerations. The command faces challenges in effectively executing its exercise program across its missions and new organizations. However, the command is taking some actions to overcome these challenges, and some of these challenges should lessen over time as the command’s missions and organizations mature. First, many of the command’s operational concepts, directives, and instructions used in designing and executing exercises have not yet been approved, developed, or revised to reflect its new organization. For example, at the time of the November 2005 Global Lightning exercise, some USSTRATCOM mission organizations were executing their processes and procedures without the benefit of complete and approved doctrine because several key concepts of operations for its missions, such as the concept of operations for horizontally integrating its missions, were still in draft form. 
According to USSTRATCOM officials, the command has to prepare plans for an exercise many months in advance even if its doctrine continues to evolve. The officials said that USSTRATCOM incorporates any changes to doctrine and guidance as it develops its exercise plan, but these changes are more difficult to make as the plan becomes more complete and the exercise nears. A USSTRATCOM official told us that doctrine and guidance should become more stable and change less frequently as the command’s missions, organization, and processes mature. Second, several of the command’s new mission organizations are still being established, which has affected their ability to fully participate in the command’s recent exercises and identify exercise objectives. For example, at the time of the November 2005 Global Lightning exercise, the new joint functional component commands had existed for less than 1 year, and the Center for Combating Weapons of Mass Destruction had been established for only 3 months. According to the Chief of Staff for the intelligence, surveillance, and reconnaissance component, the component was not able to establish full connectivity during the exercise because it was still operating out of temporary facilities. Further, the new mission organizations were too immature, did not have staff in place, and lacked the established processes and procedures needed to plan their own objectives for the November 2005 exercise, according to USSTRATCOM officials. Instead, the new organizations’ exercise objectives for the November 2005 Global Lightning exercise were established by the command’s headquarters and linked to a broader set of critical tasks and responsibilities. Moreover, while the command’s Center for Combating Weapons of Mass Destruction personnel participated extensively in the November 2005 Global Lightning exercise, no specific exercise objectives had been developed for the center’s mission area. 
To begin addressing the challenge of increasing involvement of its new organizations in exercise development, the command has advocated the establishment of an exercise or training group within each of its mission organizations and some groups have been created, such as in the space and global strike and integrated missile defense components. Additionally, in preparation for the next Global Lightning exercise in fall 2006, the mission organizations plan to be more involved in preparing exercise objectives for their mission areas and intend to send their personnel to training workshops conducted by the U.S. Joint Forces Command’s Joint Warfighting Center to learn how to develop these objectives. Third, the command has found it difficult to design an exercise that fully covers all of its responsibilities because its missions are so diverse and their relevancy to the exercise is dependent on the type and stage of a particular crisis. USSTRATCOM’s intent is to design its exercises so as to integrate the unique and interdependent capabilities of its global missions to provide a range of options throughout the various stages of a crisis and possible conflict. For example, the command has found that some of its missions, such as information operations, quickly become overlooked during its exercises as events move from crisis into actual conflict. Moreover, the command believes that its exercise program needs to place greater emphasis on the early stages of a crisis because much of USSTRATCOM’s daily operations are conducted before and just after a crisis has begun. To foster greater inclusion of its missions into its exercises, the command used a series of brief, scripted training events that preceded its first Global Lightning exercise in November 2004 to provide opportunities to incorporate some of its missions, particularly intelligence, surveillance, and reconnaissance. 
During the November 2005 Global Lightning exercise, the command incorporated a timeline that extended from the early to the later stages of conflict to allow designers to prepare a scenario suitable for a more complete range of the command’s missions. The Commander, U.S. Strategic Command, also has directed that the annual Global Thunder exercise and other training events incorporate multiple missions to provide additional evaluation opportunities. Additionally, the command has designed its Global Storm exercises to specifically focus on those missions that are most pertinent before conflict begins. USSTRATCOM has not fully made use of the exercise support available from the U.S. Joint Forces Command. While USSTRATCOM has taken steps to obtain greater assistance from the Joint Forces Command’s Joint Warfighting Center to help the command address its challenges in executing a robust exercise program, the command and the center have not reached agreement on the extent of support the center will provide. Our past work has shown that robust exercise programs are important for assessing and improving mission capabilities, particularly when multiple organizations are involved in mission execution. Moreover, DOD’s recently issued Strategic Plan for Transforming DOD Training supports an increased training focus for many missions assigned to USSTRATCOM, including combating weapons of mass destruction, global strike, information operations, and ballistic missile defense. U.S. Joint Forces Command has lead responsibility for joint force training, and is responsible for helping combatant commanders to identify training requirements and methods, and for assisting them with executing exercises and other training events. As part of U.S. Joint Forces Command, the Joint Warfighting Center provides support to combatant commands in identifying requirements, objectives, methods, and tools for planning, implementing, and evaluating exercises. 
The center trains combatant command staff to better design exercise objectives that are clearly linked to the command’s essential tasks. It can also send independent observer teams to an exercise to assess the command’s performance and prepare after-action reports and related assessments. The Under Secretary of Defense for Personnel and Readiness has overall responsibility for ensuring that DOD’s joint training programs and resources are sufficient to produce ready forces and overseeing the implementation of DOD’s training transformation strategy. USSTRATCOM has taken steps to obtain greater assistance from the Joint Warfighting Center in recent exercises. The command, for example, obtained limited support from the center during its April 2006 Global Thunder exercise, including teams to observe the participation and activities of its space and global strike component. However, USSTRATCOM’s requirements have not been typically identified far enough in advance for the center to assign staff and commit resources in providing the full range of requested support. For example, command officials told us that USSTRATCOM sought extensive Joint Warfighting Center support for the November 2005 Global Lightning exercise, but the center had already committed to supporting a U.S. Northern Command exercise that was scheduled over the same time period. The center was able to provide USSTRATCOM indirect support, such as providing simulated video news clippings to add context to the events in the exercise scenario, when the command linked its Global Lightning exercise to the U.S. Northern Command exercise. USSTRATCOM’s relationship with the Joint Warfighting Center is still developing. In the past, the center had a limited working relationship with USSTRATCOM and involvement in its exercises because the command’s exercises had been largely focused on its nuclear deterrence mission, which limited the involvement of other DOD organizations. 
As a result, the center had not included the level of support for USSTRATCOM’s program that it provided to other combatant commands in its past plans. However, to provide Joint Warfighting Center observers with access to more areas and aspects of its exercises, including activities involving the command’s nuclear deterrence mission, USSTRATCOM is changing its security procedures to grant center observers temporary clearances during the exercises. The Joint Warfighting Center’s recent support for USSTRATCOM’s exercise program has helped the command to better define its requirements for future support, but these requirements continue to evolve. USSTRATCOM officials told us that since requirements for future support from the center have traditionally been determined from prior support experience, the command’s limited relationship with the center in the past and the recent restructuring of the command’s exercise program have not yet provided a basis for determining the support needed from the center. The officials said that the specific requirements for the center’s assistance would be easier to determine as more exercises with the center’s involvement are completed. According to a USSTRATCOM official, a key exercise objective in its April 2006 Global Thunder exercise was to expose center personnel on a limited scale to the command’s exercise program. At the same time, the command would gain exposure to the services provided by the center. A center official told us that this type of interaction with the center would help USSTRATCOM to better define and identify its future requirements for center support. Over the long term, USSTRATCOM plans to seek much greater support from the center but has not yet fully defined its requirements. 
While the Joint Warfighting Center currently supports only one of USSTRATCOM’s exercises each fiscal year, USSTRATCOM officials told us that the center has committed to supporting both of its annual Global Lightning and Global Thunder exercises for fiscal year 2007, including the use of observation teams to help the command evaluate its performance. However, as of March 2006, center officials told us it was unclear how the center would adjust its current resources to support the November 2006 Global Lightning exercise because of the timing of that exercise and its linkage to a U.S. Pacific Command exercise, for which the center is already planning to provide support. In the long term, a center official told us that while the center plans to provide greater support to USSTRATCOM, the center can better plan and make resources available if it is provided with well-defined requirements 3 to 5 years in advance as other commands do. As a result, unless USSTRATCOM provides the U.S. Joint Forces Command with well-defined requirements for planning the necessary resources to support its program, USSTRATCOM may not receive the support needed to execute a robust exercise program and effectively implement its missions. USSTRATCOM has provided overall guidance to each of its subordinate organizations for assessing two key milestones—initial operating capability and full operating capability—used to implement these organizations. However, this guidance does not fully establish clear and well-documented objectives, goals, or criteria to use in determining when these milestones have been achieved. Our prior work shows that it is important that organizations undergoing major transformations provide clear and complete guidance to subordinate organizations on the requirements and expectations for successful implementation of organizational changes.
Each of the new subordinate mission organizations has already declared initial operating capability—the first milestone in implementing these organizations. However, without applying specific criteria, such as the extent to which mission organizations are staffed and trained and their mission tasks implemented, in determining when full operating capability—the second milestone—is achieved, the command may not have an accurate understanding of the extent to which its mission organizations are prepared to effectively carry out their missions. After its most recent reorganization, USSTRATCOM issued implementation directives that provide general guidance for establishing each of the five new subordinate organizations. The directives broadly describe the organizations’ responsibilities, authorities, tasks, personnel and resource requirements, and schedules for implementation. Additionally, the command prepared an implementation plan that summarizes the implementation directives and provides additional direction for establishing the new subordinate organizations, including timelines and implementation tasks. USSTRATCOM also created a reorganization management team working group comprised of representatives from headquarters and the new organizations to assist with and coordinate the reorganization activities. USSTRATCOM’s implementation guidance calls for each new organization to declare initial operating capability and full operating capability, which are key milestones used to indicate the organization’s progress in implementing plans, procedures, and structures and achieving the readiness required to perform its missions. In addition, the guidance provides some general criteria to follow before declaring initial operating capability or full operating capability.
For example, the guidance requires that prior to the initial operating capability milestone, each new organization would develop a mission statement; a detailed concept of operations for the organization to manage and execute its assigned forces and missions, including personnel requirements; and a task hand-over plan for the transfer of functions from headquarters. The guidance also requires formal updates on the new organizations’ progress toward achieving the milestones during quarterly command conferences. Table 2 shows that each of the new organizations stated that it had achieved initial operating capability in 2005. The Joint Functional Component Command for Integrated Missile Defense achieved full operating capability in February 2006 and the other four organizations plan to reach this milestone between September 2006 and January 2007. While the implementation guidance provides general criteria for achieving initial and full operating capability, it lacks clarity and specificity for reaching these milestones. The Commander, U.S. Strategic Command, has delegated authority for establishing the new mission organizations and decisions for declaring initial and full operating capability to the senior leaders of these organizations. Headquarters representatives of the reorganization management team told us that a good deal of subjectivity is involved in deciding when each milestone has been achieved. In addition, we found that the commander or director of each new organization has interpreted the milestones differently when developing the organization’s approach and assessment criteria for achieving the milestones. 
For example, the criteria used by each organization to determine initial operating capability last year varied greatly among the organizations:

- The commander of the intelligence, surveillance, and reconnaissance component declared reaching the milestone based on such factors as the component having its deputy commander in place, establishing the component’s online Web portal that facilitates external communication across various classified links, and beginning its intelligence campaign planning support for three regional combatant commands.

- The commander of the integrated missile defense component declared reaching the milestone based on completing preparation of several documents, for example, ballistic missile defense emergency activation plans and a supporting plan for one of the command’s contingency plans; undertaking the process of making operational several required functions, such as ballistic missile defense situational awareness and operational oversight of the ballistic missile defense command and control system; and assuming responsibility for performing most of its directed tasks.

- The acting deputy commander of the network warfare component told us the component declared initial operating capability on the basis that its mission responsibilities were already being performed by a predecessor organization that became the new component.

- Space and global strike component officials told us that the component based its initial operating capability decision largely on the results of its performance in events before and during USSTRATCOM’s November 2005 Global Lightning exercise. However, the component did not publish and make available the criteria that would be used to evaluate the component’s performance during the exercise, according to the component’s chief of staff.

Similarly, the objectives, goals, and criteria that would be used for determining full operating capability vary among the organizations.
According to network warfare component officials, the component plans to base its full operating capability decision on 8 to 10 items that were explained during a briefing to USSTRATCOM officials in early 2005, which include the component having adequate staffing and funding; its tactics, techniques, and procedures guidance approved; and its functions, tasks, and authorities clearly defined. The chief of staff for the space and global strike component told us that the component has considerable criteria for evaluating full operating capability. For example, several concepts of operations related to the component’s mission areas contain tasks that the component needs to perform. Other criteria include such goals as setting up a training program for new staff and developing a visual information panel in its command center. However, the official said that the component has not clearly assembled all of its criteria to make them readily accessible to those outside the component. The integrated missile defense component, which declared full operating capability in February 2006, used criteria that included the component’s assuming responsibilities and tasks delineated in the USSTRATCOM implementation directive, completing facility construction, getting staff trained and certified, developing approved joint mission essential tasks, and initiating reporting of operational readiness. The component considered its full participation in USSTRATCOM’s November 2005 Global Lightning exercise and the incorporation of the lessons learned from the exercise into its participation in a subsequent U.S. Pacific Command exercise as critical factors for declaring full operating capability. Additionally, although the target dates for declaring full operating capability are soon approaching, some of the new organizations have not fully developed the criteria that will be used to assess their milestone decisions. 
Although the USSTRATCOM Center for Combating Weapons of Mass Destruction plans to achieve the milestone in December 2006, center officials told us in February 2006 that the center is still deciding how to define full operating capability. Similarly, the deputy commander for the intelligence, surveillance, and reconnaissance component told us in April 2006 that the component, which plans to reach the milestone in September 2006, has not fully decided on the criteria it would use because the selection of criteria has not been a high priority among the component’s implementation activities. However, the official told us that the component needs to have its criteria approved about 3 months before it decides to declare its milestone achieved. USSTRATCOM has adopted some key management practices, but the command has not yet fully developed a results-oriented management approach for continuously assessing and benchmarking its performance in achieving desired outcomes and for identifying actions to improve performance. Our prior work and the work of others show that organizations undertaking complex transformations can increase their likelihood of success by adopting a results-oriented management framework, which includes key management practices and results-oriented management tools to guide implementation efforts and progress toward achieving desired outcomes. These tools and practices include establishing long-term goals and objectives and performance measures and criteria for assessing results and value added; strong and inspirational leadership to set the direction, pace, and tone and provide a clear, consistent rationale for implementing the framework; and timelines to achieve results.
While USSTRATCOM uses different techniques to review its progress in implementing its missions and responsibilities, these techniques do not provide the range of quantifiable metrics and criteria needed to fully assess the command’s progress toward achieving its goals and objectives and value added. The command’s senior leadership has taken an active role in articulating and supporting the command’s transformation, a factor that we have identified in prior work as critical to success. The Commander, U.S. Strategic Command, has addressed a variety of audiences to discuss the need for changing the way the command is organized in order to be more effective, and has described the needs and reasons for change in command concepts of operations and guidance. USSTRATCOM has also prepared guidance that assigns responsibility and describes the processes for implementing and integrating its missions. For example, to support its most recent reorganization, the command has prepared a draft integrating guidance document intended to provide a consolidated, objective framework describing how the command is organized, as well as its responsibilities, relationships, and processes. It also has issued a more detailed horizontal command-and-control integration concept of operations to identify how it brings together all of its missions and capabilities to support national objectives. Our prior work has shown that successfully transforming organizations have leaders who define and articulate a compelling reason for change; set the direction, pace, and tone for transformation; and assign accountability for results. The command has also created a collection of first principles to better align the command with national defense priorities, focus its efforts for integrating and synchronizing its missions, and provide advocacy for its missions as they mature. 
Table 3 provides USSTRATCOM’s nine principles, which include establishing a globally focused organization built to collaborate with all elements of national power; establishing operationally interdependent components; and embracing effects-based operations. The command also identified areas of emphasis that contain several key objectives for mission support, such as (1) for combating weapons of mass destruction, integrate and enable capabilities across the DOD enterprise; (2) in organizing for the global fight, embrace horizontal integration; and (3) for global force management, optimize the employment of low-density and high-demand intelligence, surveillance, and reconnaissance systems. However, USSTRATCOM has not yet developed clear, well-defined, outcome-based goals and measures to indicate how the command will measure success, track the progress it is making toward its goals, and give its leaders critical information on which to base decisions for improving the command’s implementation efforts. While the command’s first principles and areas of emphasis provide direction for better focusing its implementation efforts, these principles are process-oriented, tactical goals, rather than long-term, results-oriented strategic goals and objectives that can provide the basis for determining the command’s performance and progress. Our prior work has shown that long-term strategic goals and objectives are important for an organization to explain the results it expects, what it intends to accomplish, and how these goals would be assessed. Outcome-based performance measures should be objective and results oriented with specific target levels to meet performance goals. Measuring performance allows organizations to track progress toward goals and provides crucial information on which to base organizational and management decisions. 
The command has adopted some processes and metrics to monitor its performance and provide information on its progress in implementing its missions; however, these processes and metrics are largely subjective and do not provide the command with the full range of both quantitative and qualitative outcome-based performance measures it needs to fully assess progress in achieving its goals. Organizations use evaluation and corrective action plans to examine the success of a program and to improve performance by identifying appropriate strategies to meet those goals that were not met. In contrast, USSTRATCOM’s current processes result in largely subjective assessments and are intended to support more limited purposes. For example, according to an official responsible for coordinating the command’s readiness reporting, the command has adapted its readiness reporting process to include inputs from each of the command’s mission organizations and service components. The official said that this process gives the USSTRATCOM commander access to a broad perspective on the command’s overall readiness. However, the readiness reports resulting from the process discuss the commander’s subjective assessment of USSTRATCOM’s ability to execute its missions, based on short-term internal and external factors affecting the command’s operations. Similarly, the command’s annual training assessments are subjective evaluations, based on observations of prior training, exercises, real-world operations, and other factors, which are used to set future training priorities. USSTRATCOM senior officials told us that the command has not yet established strategic goals and outcome-based performance metrics to fully assess the command’s progress because the command is still sorting out the implementation of its new organizational construct. 
Although command officials stated they believe such metrics are needed and the command should begin to develop them, they have not yet developed a process or assigned responsibility for developing metrics. While the development of such metrics will present a significant challenge due to the complex nature of the command’s missions, such an effort is needed so that the command can assess its progress and identify areas that need further improvement. For example, USSTRATCOM officials believe they can and should develop metrics to assess the extent to which they are efficiently allocating intelligence, surveillance, and reconnaissance systems to optimize the use of high-demand aircraft. Without developing strategic goals and the full range of outcome-based performance measures, the command will lack a process to evaluate its performance, identify areas that may need improvement, and take corrective actions. USSTRATCOM has not clarified the roles and responsibilities of its service component organizations and lacks a commandwide outreach strategy for enhancing its relations with other DOD organizations. Since its most recent reorganization, USSTRATCOM has provided some guidance to its service component commands. However, the command’s guidance is not always specific, and service officials believe that additional guidance from USSTRATCOM would help to more clearly define their responsibilities, expectations, and relationships with the command, particularly with its new mission organizations. In addition, USSTRATCOM lacks a commandwide strategy to effectively manage and coordinate its external outreach activities with the large number of commands and organizations it interacts with in executing its diverse missions. 
Without clear service component guidance and a comprehensive communications strategy, USSTRATCOM’s service components will not have complete information on the command’s expectations for their support, and the command may not have the most effective approach for building relationships, promoting its capabilities, and providing the most effective level of support to other combatant commands and organizations. While USSTRATCOM has provided broad guidance to its service components, Army, Navy, and Air Force component officials told us they lack specific guidance that clarifies and provides more detailed information on their responsibilities, requirements, expectations, and relationships with the command and, particularly, its newer mission organizations. Our prior work has shown that it is important for organizations undergoing significant change to provide clear and complete guidance to their subordinate organizations. Without clearly defined, specific guidance, it can be difficult for the service components to effectively organize, plan, and identify resources to provide the expected support. Moreover, the lack of this guidance can limit the understanding that USSTRATCOM’s headquarters and its organizations have of the components’ organizations, their relationships, and the range of support they provide. USSTRATCOM has provided guidance to its service components in its concepts of operations, orders, plans, and other documents and through meetings and other activities between command and service component staffs, such as conferences, videoconferences, and command exercises. Guidance and expectations have also been provided during routine and crisis-oriented collaborative planning activities among the command’s organizations and service components. 
However, USSTRATCOM Army component officials told us that much of the command’s overall guidance, such as USSTRATCOM’s standing operational order for its global strike mission and its overarching concept of operations, is too general and often does not provide enough specific information for the service components to fully understand the command’s requirements and expectations. Our review of USSTRATCOM guidance found that key guidance lists the overarching responsibilities for the command’s service components, such as providing support for the command’s operations and planning and advocacy activities. Some mission-specific guidance, such as the concept of operations for the space and global strike missions, provides additional responsibilities for each of the components that relate to a specific mission area or organization. In particular, this concept of operations assigns the Air Force service component responsibility for establishing an operations center for global strike planning and execution, and for performing day-to-day command and control of space forces assigned to the command. In contrast, much of the remaining guidance we reviewed provided few specific details on what is expected or required to carry out the components’ responsibilities, such as the type of military personnel skills, planning systems, or secure communications lines that are needed to effectively support the command. Additionally, several guidance documents we reviewed that contain references to the services are still in draft, such as the command’s integrating guidance, or need revision as a result of the command’s recent reorganization. For example, in 2004 the command drafted a concept for integrating its missions that included detailed annexes describing how the command’s service components were to monitor global events affecting U.S. interests; analyze, evaluate, and communicate information; predict likely consequences of military operations on U.S. 
and adversary forces; and plan and execute operations in support of each of the command’s mission areas. However, according to a USSTRATCOM official, the command leadership decided not to include specific expectations for its service components following the decision to reorganize the command and establish the joint functional component commands in late 2004. As a result, the command’s most recently drafted guidance does not yet completely reflect service responsibilities, expectations, and the unique support that may be required to support USSTRATCOM’s new organization. According to USSTRATCOM officials, the command does not plan to provide additional formal guidance to its service component organizations at this time. The relationships between the command’s service components and new subordinate mission organizations are still evolving. Army component officials told us that USSTRATCOM’s new mission organizations have not yet developed a full understanding of the Army service component’s responsibilities, and as a result, USSTRATCOM’s expectations may not be consistent with the support that can be provided by the Army. For example, the acting chief of staff for USSTRATCOM’s Army service component told us that according to the Joint Staff’s Unified Action Armed Forces policy publication, which clarifies all command relationships and other authorities, the Army’s service component has responsibility for providing Army personnel with training in service-related tasks. The official told us the USSTRATCOM command assumed that training in the use of joint systems, such as secure communications lines operated by the USSTRATCOM command for integrated missile defense, would be done by the service component. However, the respective USSTRATCOM command is responsible for providing any joint training to service personnel. The official said the Army could provide this training if USSTRATCOM defined this requirement in its guidance. 
Army component officials also told us that the Army can better respond to USSTRATCOM requirements when expectations are more clearly described in guidance and related documents. For example, USSTRATCOM cited a requirement in its draft concept of operations for a small Army detachment to be assigned to USSTRATCOM’s intelligence, surveillance, and reconnaissance command. The Army provided this type of detachment based on that requirement. Similarly, the head of the Eighth Air Force’s air operations center, which is part of the USSTRATCOM Air Force service component, told us that the component has clear guidance about its responsibilities to provide direct support to USSTRATCOM’s space and global strike command, and therefore, has a clear understanding of what is required to support the component. The space and global strike command has provided information on the direct support expected from the Air Force in its concept of operations. However, the official said the requirements and expectations for supporting other USSTRATCOM mission organizations, such as the Joint Functional Component Commands for Intelligence, Surveillance, and Reconnaissance and Network Warfare, are not as clearly known because USSTRATCOM has not yet provided guidance on the required Air Force support for those organizations. According to Navy Fleet Forces Command officials, USSTRATCOM has not provided clear and specific guidance on the command’s responsibilities and expectations, despite its unique relationship to USSTRATCOM. Officials of the Navy Fleet Forces Command told us that the Fleet Forces Command has a unique relationship to USSTRATCOM because it is a supporting command and not a traditional service component. The officials said their command is not formally assigned to and under USSTRATCOM’s operational chain of command, but rather their command provides advice to USSTRATCOM on the best use of Navy forces and capabilities in support of its missions. 
The officials said that clear and specific guidance is necessary to provide an understanding of their command’s unique relationship to USSTRATCOM headquarters and organizations. In March 2006, USSTRATCOM, in consultation with the Fleet Forces Command, did issue a command instruction that clarifies the Fleet Forces Command’s relationship with USSTRATCOM and its responsibilities, which include taking part in the command’s collaborative planning processes, participating in its exercise program, and helping USSTRATCOM prepare its readiness review reports. However, while this document helps to clarify the Navy component’s support responsibilities, it neither sets priorities for the Fleet Forces Command nor includes mission-specific requirements. According to service officials, USSTRATCOM’s unique organization, complex planning processes, and global focus are very different from those of more traditionally organized combatant commands that have clearly defined geographic areas of responsibility. In contrast to more traditional regional combatant commands, USSTRATCOM has constructed a collaborative planning process that is globally focused and involves a much broader range of military capabilities. As this planning process continues to evolve, the role and involvement of the service components will change. For example, the director of the Army component’s planning and exercise group told us that USSTRATCOM’s new mission organizations have not always provided well-documented requirements for certain Army capabilities, which has delayed the Army component’s ability to provide the needed capabilities to these organizations. The official told us that in the summer of 2005 the Army component had difficulty in both staffing its office and initially providing information operations capabilities to support command missions because USSTRATCOM had not documented the Army requirements for these capabilities. The Army official said that although the Commander, U.S. 
Strategic Command, has been satisfied with the Army’s support for this mission area, greater clarity about USSTRATCOM’s expectations would have helped the Army component to better identify its authorized personnel requirements and ensure that the required Army capabilities were more quickly available. Unlike the other service components, however, the Marine Corps Forces component is satisfied with the guidance that has been provided, according to a Marine Corps component official. The official said the component does not need additional guidance at this time because the component has a more limited role and fewer responsibilities than the other services in supporting USSTRATCOM and its organizations. The official said that the Marine Corps’ component of about 20 people largely serves as a conduit to USSTRATCOM to ensure Marine Corps representation and provide inputs, when needed, on command issues. While USSTRATCOM routinely conducts outreach with other combatant commands and organizations, it lacks a common approach across the command because it has not developed a comprehensive, commandwide outreach strategy to effectively manage these activities. Without an outreach strategy, the command and its organizations do not have a consistent, coordinated approach to use in developing and expanding relationships, educating other organizations on the command’s capabilities, and providing the most effective level of support to other commands and organizations. In our prior work in identifying key practices adopted by organizations undergoing successful transformations, we found that it is essential for organizations to adopt a comprehensive communication strategy that reaches out to customers and stakeholders and seeks to genuinely engage them in the organization’s transformation. 
In particular, successfully transformed organizations have found that by communicating information early and often, organizations are able to build trust and increase understanding among their stakeholders about the purpose of planned changes. Organizations use these communication strategies to provide a common framework for conducting consistent and coordinated outreach throughout their organizations by clearly presenting the organization’s rationale, specific objectives, and desired outcomes of outreach efforts. These strategies also cover the range of integrated information activities to be implemented and clearly articulate how all the various components of the strategy will be coordinated and managed in order to achieve the objectives most efficiently and effectively. Additionally, outreach strategies provide measurable criteria against which to evaluate the outcomes of organizations’ outreach efforts and determine whether any adjustments are necessary. Because USSTRATCOM supports or is supported by a large number of commands and organizations in executing its diverse set of global missions, the command considers its external outreach efforts essential to (1) develop effective relationships and communications, (2) promote and educate others about the value of its missions and capabilities, and (3) obtain information on how the command can best support other organizations. USSTRATCOM and its organizations regularly use a wide range of methods and activities to promote its missions and capabilities to combatant commands, military services, and DOD and other government organizations. These methods and activities include conferences and symposia, exercises and training events, senior leadership visits, exchange of liaison staff, routine meetings, and voice and electronic communication. 
The command has also established a strategic knowledge integration Web site, which is called SKIWeb, on DOD’s classified computer network to provide information about the command and the status of its activities and allow open exchange among its staff and other individuals with access to the network. While USSTRATCOM officials told us that USSTRATCOM has developed good working relationships with other combatant commands and organizations across DOD since its establishment in 2002, they believe that the command’s missions, capabilities, and authorities are not yet fully understood by others. The USSTRATCOM commander’s summary report for its November 2005 Global Lightning exercise states that while the command has expended a great amount of effort in developing processes and strategies to integrate the command’s missions, the organizations it supports, particularly other combatant commands, have a vague understanding of the “value added” by USSTRATCOM capabilities. The report states that USSTRATCOM’s ability to provide capabilities and influence global events is not clearly understood, nor do some other commands’ headquarters completely understand how to access that capability. For example, in observing the Global Lightning exercise, U.S. Central Command and other participants told us that they were unsure of the value added by USSTRATCOM in planning for global strike operations in their theater. However, USSTRATCOM officials said USSTRATCOM brings the full range of capability options into global strike planning, particularly nonkinetic capability options such as computer network operations; other commands are just beginning to see the potential value of these options. Additionally, USSTRATCOM has had to change the perceptions held by other organizations that the command is responsible only for nuclear deterrence, which was the case with the previous U.S. 
Strategic Command, but has other essential missions that are global in scope and span all levels of military operations. While some missions, such as nuclear deterrence and military space, are well practiced and have established histories and interactions with outside organizations, others, such as its combating weapons of mass destruction and intelligence, surveillance, and reconnaissance missions, are less mature and still evolving. Further, many of USSTRATCOM’s authorities, responsibilities, and capabilities are still being refined, clarified, and demonstrated to other organizations in exercises and training events and in real-time military activities. For example, the deputy commander of USSTRATCOM’s intelligence, surveillance, and reconnaissance command told us that USSTRATCOM’s evolving role in providing support for decisions on allocating intelligence, surveillance, and reconnaissance assets is not yet clear to all of the regional combatant commands. The official said that some combatant commands have concerns about how USSTRATCOM responsibilities could affect their ability to exercise operational and tactical control over any assets assigned to their commands. According to the official, these commands do not yet understand that USSTRATCOM’s role is to provide overall management for these assets rather than control their operational use. Moreover, DOD commands and organizations are still getting acquainted with USSTRATCOM’s new organizational construct, particularly the new subordinate organizations that are responsible for the day-to-day management of several command missions. The command’s new organization does not follow the headquarters-centric model, in which information flows vertically, that is used by other combatant commands. According to the Commander, U.S. Strategic Command, horizontal flows of information and command and control run counter to traditional military thinking, which prefers a vertical chain of command. 
While the new organizational structure has the potential to greatly expand the command’s opportunities to conduct external outreach, relationships and communication links are still being developed or reestablished with other organizations. Each of the command’s organizations conducts numerous outreach activities daily, but these efforts are often not well coordinated or consistently conducted to achieve the optimal benefit for the command. We also found that USSTRATCOM does not have an approach for comprehensively collecting information on the needs and priorities of the combatant commands and other stakeholders who use its capabilities, information which USSTRATCOM could then use to determine how it can provide the most effective level of support. USSTRATCOM has recognized the need to develop a comprehensive outreach strategy to increase understanding among other combatant commands about the specific capabilities and contributions that the command can provide to their operations. Both of the command’s summary reports for its October 2004 and November 2005 Global Lightning exercises recommended development of an outreach strategy for identifying USSTRATCOM capabilities for the benefit of combatant commands and stakeholders. The November 2005 report recommended that the strategy provide an integrated methodology for conducting effective outreach and education of the command’s capabilities. The report also recommended (1) improving the command’s SKIWeb Web site to allow outside users to more easily identify capabilities, (2) providing briefings and seminar support to the Defense and interagency community, and (3) developing outreach products to provide key information about the command. 
The report states that much of the understanding and credibility of the command can be achieved through an effective outreach plan that is focused at other commands, at the interagency level, and with the services to demonstrate and provide understanding about its global support capabilities. USSTRATCOM headquarters officials told us that the command does not have any current plans to develop an outreach strategy as recommended in each of the two exercise reports. To provide the most effective level of support to other combatant commands, U.S. Joint Forces Command recently developed an approach that could serve as a best practice in identifying the priorities of the commands it supports for inclusion in an external outreach strategy. Under U.S. Joint Forces Command’s approach, the command asks each of the other combatant commands to provide a list of its top priorities for the type and level of support needed from the command in the coming year. These lists are incorporated into the command’s annual plans and are used to make adjustments in its activities and resources to best meet the needs of its customers. During the year, the command schedules periodic updates with staffs of the other commands to determine to what extent the command is addressing these priorities or whether the priorities have changed. A USSTRATCOM headquarters official responsible for coordinating the command’s priorities with the U.S. Joint Forces Command told us that this approach has been helpful for USSTRATCOM in communicating the command’s priorities for support. The official said that USSTRATCOM added to the effectiveness of the approach by preparing a detailed matrix that identified and ranked the command’s priorities and provided contact information for command staff. 
USSTRATCOM has been assigned a new role in providing the President and the Secretary of Defense with an expanded set of military options to more effectively respond to emerging global, transregional, and asymmetric threats to U.S. national security, including those involving weapons of mass destruction. While the command has made progress in implementing its global missions, its ability to strengthen implementation efforts and ensure that its leadership has critical information on the effectiveness of its missions and organizations will continue to be limited until it identifies long-term support requirements for its exercise program; establishes clear, consistent criteria for assessing the establishment of its newest mission organizations; and fully implements a results-oriented approach for evaluating its progress. The U.S. Joint Forces Command offers a range of capabilities and resources for supporting command exercises. Until it clearly identifies the long-term support it requires from the U.S. Joint Forces Command, and the Joint Forces Command incorporates these requirements into its plans, USSTRATCOM will continue to lack a robust exercise program, which is essential for evaluating its capabilities and identifying areas in need of improvement. Additionally, absent clear, consistent guidance from the command, four new mission organizations that have not yet achieved full operating capability are establishing their own criteria for this milestone, which results in different understandings of what it means to reach this milestone and how it would be evaluated. Without establishing clear, consistent criteria at major points in implementation, the command cannot create a foundation on which to assess and measure the success of these organizations even after full operating capability has been declared. 
Further, while the command has adopted some elements of a results-oriented management approach, without a process that includes criteria and benchmarks for measuring the progress toward mission goals at all levels of its organization, the command will be limited in its ability to adjust to the many uncertainties surrounding its mission areas, measure the success of its efforts, and target shortfalls and gaps and suggest corrective actions, including any needed adjustments to future goals and milestones. Similarly, without complete and clearly articulated expectations and requirements, the service components will not have the information needed to fully determine the personnel, resources, and capabilities required to support the command and respond to its requests and tasks in a timely way. In addition, in the absence of a commandwide communications strategy to conduct consistent, coordinated outreach to other commands and organizations, USSTRATCOM cannot effectively develop and expand relationships, foster education about its capabilities, and provide the most effective level of support to other commands and organizations. Lastly, without incorporating into its external outreach strategy a systematic tool to help identify the priorities of the combatant commands and organizations it supports—similar to one used by the U.S. Joint Forces Command—USSTRATCOM is limited in its ability to fully address the priorities for support of the other commands and organizations, improve feedback, and identify resources needed to respond to these priorities. To better determine and obtain the assistance that can be provided by the U.S. Joint Forces Command’s Joint Warfighting Center in supporting USSTRATCOM’s exercise program, we recommend the Secretary of Defense direct the Commander, U.S. Strategic Command, to fully identify and request in a timely manner the long-term services and resources required from the U.S. 
Joint Forces Command’s Joint Warfighting Center to support the command’s program and to reach agreement with the U.S. Joint Forces Command on the support to be provided. We further recommend that the Secretary direct the Under Secretary of Defense for Personnel and Readiness and the Commander, U.S. Joint Forces Command, (1) in the near term, to make any possible adjustments among the Joint Warfighting Center’s current resources to more fully support USSTRATCOM’s exercise program; and (2) in the long term, to incorporate USSTRATCOM requirements for support in the center’s plans to provide the full range of assistance necessary to help USSTRATCOM execute a robust exercise program.

To strengthen USSTRATCOM’s efforts to implement its missions and provide greater visibility of its progress, we recommend that the Secretary of Defense direct the Commander, U.S. Strategic Command, to take the following four actions:

Provide clear and complete guidance to the Joint Functional Component Commands for Space and Global Strike; Intelligence, Surveillance, and Reconnaissance; and Network Warfare, and to the USSTRATCOM Center for Combating Weapons of Mass Destruction that clearly defines full operating capability and provides specific, common criteria for determining what is required and how it will be assessed. This guidance should be developed, in consultation with these organizations, before each organization declares full operating capability.

Develop a comprehensive, results-oriented management process for continuously assessing and benchmarking the command’s overall progress in achieving desired outcomes and for identifying corrective actions to enhance the command’s efforts to implement and integrate its missions. Develop or refine performance measures that clearly demonstrate performance results and ensure that those measures cascade down through the command; assign clear leadership with accountability and authority to implement and sustain the process; and develop and ensure that goals and objectives are clear and achievable and timelines are established. Set a specific time frame for completing development of this process.

Provide additional guidance to the command’s service components that clearly defines and provides more specific information about their responsibilities, requirements, relationships, and expectations for supporting the command’s headquarters and subordinate mission organizations. Set a specific time frame for approval of this guidance.

Develop and implement a commandwide communications strategy to guide and coordinate USSTRATCOM’s efforts to conduct outreach with other combatant commands and Defense and other organizations to develop effective relationships and communications, promote and educate others about the value of its mission and capabilities, and obtain information on how the command can best support other commands and organizations. This strategy should include the command’s rationale, specific objectives, desired outcomes, and strategies for conducting outreach with other commands and organizations, and criteria against which the command can evaluate the success of its efforts.

Given the importance of the new role assigned to USSTRATCOM by the President and the Secretary of Defense to provide an expanded set of military options to more effectively respond to emerging threats to U.S. national security, Congress should consider requiring the Commander, U.S.
Strategic Command, to develop a longer-term, comprehensive and transparent, results-oriented management process for continuously assessing and benchmarking the command’s overall progress in achieving desired outcomes and for identifying corrective actions to enhance the command’s efforts to effectively carry out its missions, as outlined in our recommendation to DOD. In developing this process, the Commander, U.S. Strategic Command, should develop and ensure that long-term goals and objectives are clear and achievable and milestones and timelines for achieving desired outcomes are established; develop or refine performance measures that clearly demonstrate performance results and ensure that those measures cascade down through the command; and assign clear leadership with accountability and authority to implement and sustain the process. The Commander, U.S. Strategic Command, should set a specific time frame for developing and implementing this process. Additionally, the Commander should periodically report to Congress on the command’s progress in achieving desired outcomes. DOD’s Assistant Secretary of Defense for International Security Policy provided written comments on a draft of this report. DOD generally agreed with our three recommendations regarding U.S. Joint Forces Command’s support of USSTRATCOM’s exercise program. DOD did not agree with our other four recommendations that USSTRATCOM provide clear and complete guidance to its joint functional component commands on achieving full operating capability; develop a comprehensive results-oriented management process to assess and benchmark the command’s overall progress; provide additional guidance to its service components; and develop and implement a commandwide communications strategy. In regard to these four recommendations, DOD commented that measures are already in place that address the issues raised by the report.
We disagree that the actions taken by USSTRATCOM to date fulfill the intent of our recommendations and are complete. While USSTRATCOM has taken some positive actions on these issues, we do not believe that the command’s actions go far enough, are specific enough, or are sufficiently transparent in improving evaluation of the command’s progress in implementing its mission areas, providing more complete guidance to its mission and service component organizations, and strengthening its external communications with other organizations and commands. Therefore, we believe our recommendations are still warranted, and we have added a matter for congressional consideration for Congress to direct the Commander, U.S. Strategic Command, to develop and implement a longer-term results-oriented management process for assessing the command’s overall progress and periodically reporting to Congress its progress in achieving desired outcomes. DOD’s comments are reprinted in their entirety in appendix V; more specific information on DOD’s comments on our recommendations and our assessment of those comments follows below. DOD generally agreed with our recommendations regarding USSTRATCOM’s exercise program. Specifically, DOD agreed with our recommendation that USSTRATCOM should identify and request, in a timely manner, the long-term services and resources required from the U.S. Joint Forces Command’s Joint Warfighting Center to support USSTRATCOM’s exercise program. In its comments, DOD said that while the center had provided limited exercise planning, execution, and assessment support to USSTRATCOM, the command and the center have steadily built a relationship over the past year to support USSTRATCOM’s seven mission areas and are jointly solving problems that hindered the center’s support in previous USSTRATCOM exercises. The department partially agreed with our recommendation that the Under Secretary of Defense for Personnel and Readiness and the Commander, U.S.
Joint Forces Command, in the near term, make any possible adjustments among the Joint Warfighting Center’s current resources to more fully support USSTRATCOM’s program. DOD commented that the Office of the Deputy Under Secretary of Defense for Personnel and Readiness is currently conducting an in-depth review of the joint training programs to determine how it can provide better flexibility and synergism through joint training investments. DOD agreed with our recommendation that the Under Secretary of Defense for Personnel and Readiness and the Commander, U.S. Joint Forces Command, in the long term, incorporate USSTRATCOM’s requirements for support into the Joint Warfighting Center’s plans. DOD commented that its current review of joint training programs intends to match, to the greatest extent possible, joint training requirements and resources, including the training support provided by the U.S. Joint Forces Command. DOD also said that while USSTRATCOM’s requirements must compete with other training priorities for joint training funding, the center can better plan and make resources available if USSTRATCOM provides the center with well-defined requirements 3 to 5 years in advance. DOD did not agree with our recommendation that the Commander, U.S. Strategic Command, provide additional guidance to its joint functional component commands that clearly defines full operating capability and provides specific, common criteria for determining what is required and how it will be assessed. DOD commented that the Commander, U.S. Strategic Command, has provided specific guidance in the form of a tailored implementation directive that assigns specific duties, responsibilities, tasks, and authorities to the components.
DOD also said that the Commander is continuing to work closely with the component commanders to develop, implement, and assess the measures of progress by which full operating capability will be declared and will report to the Secretary of Defense when the milestone is achieved for each mission area. We believe that the command’s tailored implementation directives do not go far enough in providing clear and specific criteria for assessing whether specific duties, responsibilities, tasks, and authorities assigned to each organization have been met. For example, during our review we found that the components had different interpretations as to what criteria might apply for declaring full operating capability. We believe that it is important for USSTRATCOM and its organizations to have a clear definition of full operating capability and to have in place, as early as possible, the criteria, or measures of progress, by which achievement of the milestone will be assessed for each of the new mission organizations. These criteria should be complete and readily accessible so the command and its mission organizations will have confidence in the extent to which planned capabilities will be achieved at full operating capability. After declaring full operating capability, each of the new organizations will require further actions to more completely implement and enhance their mission capabilities and responsibilities. Establishing clear, documented criteria for assessing and measuring success for declaring full operating capability can provide a baseline and a sound foundation for assessing the future progress of the organization in carrying out its mission responsibilities.
DOD also disagreed with our recommendation that USSTRATCOM develop a comprehensive results-oriented management process for continually assessing and benchmarking the command’s overall progress in achieving desired outcomes and for identifying corrective actions to enhance the command’s efforts to implement and integrate its missions. In its comments, DOD stated that a variety of directives, including concepts of operations, articulate the command’s goals and objectives. The department also stated that the command conducts periodic exercises, external inspections, and in-progress reviews to help assess the command’s effectiveness in making operational the assigned mission areas and achieving stated objectives. While these actions by USSTRATCOM may be helpful to the command’s leadership, they do not represent a comprehensive and transparent plan for assessing progress in achieving desired outcomes. Moreover, DOD interpreted our recommendation as being directed at the metrics to be used by the command’s organizations in declaring full operating capability for its missions, which are scheduled to occur by early 2007. However, our recommendation calls for creation of a longer-term, comprehensive, results-oriented management process that would provide the command with a framework for continuously assessing its future progress in achieving desired outcomes in each of its mission areas and the command’s overall goals and objectives. Because of the importance of the command’s new role in providing expanded military options for addressing emerging threats, we continue to believe that creation of a results-oriented management process that establishes long-term goals and objectives, milestones and timelines for achieving desired outcomes, performance measures that clearly demonstrate performance results, and clear leadership to implement and sustain the process is needed. Therefore, we have included a matter for congressional consideration to require the Commander, U.S. 
Strategic Command, to develop such a process that would improve transparency and accountability of the extent to which the command is achieving desired outcomes in each of its mission areas. DOD also did not agree with our recommendation that the Commander, U.S. Strategic Command, provide additional guidance to the command’s service components that clearly defines and provides more specific information about their responsibilities, requirements, relationships, and expectations for supporting the command’s headquarters and subordinate mission organizations. In its comments, DOD said that the duties and responsibilities of USSTRATCOM and its service components are documented in Joint Publication 0-2, Unified Action Armed Forces. The department also stated that day-to-day liaison activities between the command and the services are provided by on-site service component representatives. While broad guidance is provided in the Joint Staff’s Unified Action Armed Forces publication on the relationships and authorities of the military services in supporting combatant commanders and by USSTRATCOM in various documents, we continue to believe that additional guidance from the Commander, U.S. Strategic Command, to the command’s service components is needed to provide clear and specific information about their responsibilities, requirements, relationships, and expectations for supporting the command’s headquarters and subordinate mission organizations, particularly since the components have expressed a desire for further guidance from the command. 
As USSTRATCOM continues to implement its new organization and develop capabilities in each of its mission areas, this additional guidance can strengthen relationships with the services by (1) providing better information for the components in effectively organizing, planning, and identifying resources to support the command; and (2) increasing understanding among the command’s headquarters and its organizations about the components’ organizations, organizational relationships, and the range of support they provide. Lastly, DOD disagreed with our recommendation that USSTRATCOM develop and implement a commandwide communications strategy to guide and coordinate the command’s efforts to conduct outreach with other combatant commands and Defense and other organizations. DOD commented that USSTRATCOM provides and promotes insight to all its activities through its classified Web site; maintains a senior officer representative at each of the combatant commands and with the Joint Staff; and, as a supporting command, conducts continuous liaison activities with other combatant commands. DOD also stated that Web-based mission area training for USSTRATCOM missions is available on the U.S. Joint Forces Command’s Web site. However, as discussed in our report, we found that while USSTRATCOM organizations routinely conduct outreach activities to promote the command’s missions and capabilities, these activities are often not well coordinated or consistently conducted to achieve the optimal benefit for the command. Both of the USSTRATCOM commander’s summary reports prepared after its two most recent Global Lightning exercises in 2004 and 2005 recommended that the command develop a comprehensive outreach strategy to increase understanding among other combatant commands about the specific capabilities and contributions that the command can provide to their operations.
The November 2005 Global Lightning report also recommended that the strategy provide an integrated methodology for conducting effective outreach and education about the command’s capabilities. Therefore, we continue to believe that USSTRATCOM needs a commandwide communications strategy to provide a framework to effectively manage these activities and a common approach for conducting consistent and coordinated outreach across the command. We are sending copies of this report to interested congressional committees; the Secretary of Defense; Chairman, Joint Chiefs of Staff; the Commander, U.S. Strategic Command; and the Commander, U.S. Joint Forces Command. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-4402 or stlaurentj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix VI.

This appendix provides information on trends and changes we identified in the United States Strategic Command’s (USSTRATCOM) historic and projected budget, from fiscal years 2003 through 2011. To perform our analysis, we identified trends and changes in USSTRATCOM’s budget since its establishment in October 2002 by obtaining and analyzing the command’s historic, current, and projected funding for fiscal years 2003 through 2011. We used data prepared to support the President’s fiscal year 2006 budget request, which were the most current official data available when we conducted and completed our work. We also discussed with USSTRATCOM officials anticipated changes to the budget resulting from the fiscal year 2007 President’s budget request, and efforts taken by the command to identify how its funding is allocated by mission responsibility and subordinate organization.
We took steps to assess the reliability of the data used in this analysis, including (1) performing electronic testing of required data elements, (2) comparing the data to another independently prepared data source, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for our purposes.

This appendix provides information on trends and changes we identified in the United States Strategic Command’s (USSTRATCOM) military and civilian authorized personnel levels since its establishment in October 2002. Our analysis shows that USSTRATCOM’s overall authorized personnel level has remained relatively stable since 2002, and that the percentage of filled military and civilian positions has increased. The command is transferring positions to its new mission organizations from its headquarters organization, rather than increasing its overall commandwide authorized personnel level. Although the command has expanded the number of professional military skills of its authorized personnel, the majority of its military positions encompass relatively few types of skilled positions. We also determined that while Air Force and Navy military positions continue to make up most of USSTRATCOM’s authorized personnel, the proportion of civilian positions is increasing. To determine how USSTRATCOM’s authorized personnel level has changed since its establishment in 2002, we obtained and reviewed USSTRATCOM projections and historic data that identify (1) the number of authorized civilian and military positions assigned to USSTRATCOM, (2) the number of authorized positions filled by individuals assigned to the command, and (3) the professional military skills associated with the command’s military positions. The data we obtained include USSTRATCOM positions assigned to the command’s headquarters near Omaha, Nebraska; to its mission organizations; and to various other locations and assignments.
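The electronic testing and cross-source comparison steps used in these reliability assessments can be illustrated with a brief sketch. This is purely a hypothetical example; the field names, record values, and tolerance below are assumptions for illustration and do not come from the report or from actual USSTRATCOM data.

```python
# Sketch of two automated data-reliability checks, as described above:
# (1) electronic testing of required data elements, and
# (2) comparison of a total against an independently prepared source.
# All field names, values, and the tolerance are illustrative assumptions.

def records_missing_elements(records, required_fields):
    """Return the records that are missing any required data element."""
    return [r for r in records
            if any(r.get(f) in (None, "") for f in required_fields)]

def totals_agree(primary_total, independent_total, tolerance=0.01):
    """Check whether two independently prepared totals agree within a
    relative tolerance."""
    if independent_total == 0:
        return primary_total == 0
    return abs(primary_total - independent_total) / abs(independent_total) <= tolerance

# Hypothetical authorized-personnel records for two fiscal years.
records = [
    {"fiscal_year": 2003, "authorized": 2500, "filled": 2300},
    {"fiscal_year": 2004, "authorized": None, "filled": 2350},  # missing element
]
bad = records_missing_elements(records, ["fiscal_year", "authorized", "filled"])
consistent = totals_agree(primary_total=4640, independent_total=4600)
```

Records flagged by the first check would then be followed up through the third step the report describes, interviews with officials knowledgeable about the data.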
We also obtained the command’s projections for authorized personnel levels for the new mission organizations, and discussed these projections with officials responsible for managing the command’s authorized personnel. In our analysis, we did not consider staff positions from organizations that are supporting several of USSTRATCOM’s mission organizations, such as the Air Force Space Command, Eighth Air Force, Army Space and Missile Defense Command, Defense Intelligence Agency, National Security Agency, Defense Threat Reduction Agency, and Defense Information Systems Agency. The data also do not include part-time reservists or contractors. We took steps to assess the reliability of the data used in this analysis, including (1) performing electronic testing of required data elements, (2) comparing the data to another independently prepared data source, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for our purposes. To address the extent to which the United States Strategic Command (USSTRATCOM) has made progress in implementing its new missions and assessing mission results, we reviewed a wide range of Department of Defense (DOD) and command documentation including USSTRATCOM guidance, plans, directives, speeches and testimony statements, and reports; implementation plans and directives for creating its new mission organizations; and documentation related to DOD’s implementation of its New Triad concept to transform U.S. strategic capabilities. We also spoke with various officials involved in the command’s implementation efforts about their roles, related plans, and actions. When possible, we met with the command and other organizations’ senior leadership to discuss and obtain their views on various command issues, including: Commander, U.S. 
Strategic Command; Commander, Joint Functional Component Command for Intelligence, Surveillance, and Reconnaissance/Director, Defense Intelligence Agency; Commander, Joint Functional Component Command for Network Warfare/Director, National Security Agency; Commander, Joint Functional Component Command for Integrated Missile Defense/Commander, Army Space and Missile Defense Command; Commander, Joint Task Force for Global Network Operations/Director, Defense Information Systems Agency; Director, USSTRATCOM Center for Combating Weapons of Mass Destruction/Director, Defense Threat Reduction Agency; Commander, Air Force Space Command; and Chief of Staff, U.S. Joint Forces Command. To determine the extent to which USSTRATCOM has a robust exercise program for demonstrating its capabilities, we reviewed the command’s annual training plan, which describes the command’s individual exercises, establishes an exercise schedule, and sets expectations for the participation of the command’s mission organizations. For the November 2005 Global Lightning exercise, we reviewed the exercise plan, collection management plan, after-action report, and final exercise report. We also observed that exercise and discussed the exercise results with the participants. We also reviewed the collection management plan and the after-action report prepared for the April 2006 Global Thunder exercise, and after-action reports prepared for the April 2005 Global Thunder, October 2004 Global Lightning, and October 2003 Global Guardian exercises. We obtained guidance from the Joint Staff that describes the roles and responsibilities of U.S. Joint Forces Command for supporting combatant command exercises. In addition, we held discussions with command officials from the exercise and training branch and with other exercise observers to obtain their views on USSTRATCOM’s efforts to plan and schedule its exercises.
We also met with officials from the new joint functional component commands as well as the Joint Task Force for Global Network Operations and the USSTRATCOM Center for Combating Weapons of Mass Destruction to identify challenges to more fully including their missions in the command’s exercises and to better understand the extent to which the command’s mission organizations were able to participate in those exercises. Command officials also briefed us on the evolution of the command’s exercise program since its establishment and its plans for the future. Finally, we met with officials from the U.S. Joint Forces Command’s Joint Warfighting Center to determine the extent to which they have been involved in identifying requirements, objectives, methods, and tools for planning, implementing, and evaluating USSTRATCOM exercises to strengthen the design and execution of the command’s exercises, such as participant training and independent observer team support and evaluation. To determine the extent to which USSTRATCOM and its mission organizations had developed criteria for assessing their progress toward achieving full operating capability, we reviewed documents from the command and each of the new mission organizations. These documents included the command’s implementation directives for each new mission organization and the overarching command reorganization implementation plan for the current reorganization. We also reviewed briefings from each of the mission organizations that gave status information on the organizations’ efforts toward achieving full operating capability.
We held discussions with USSTRATCOM officials who were part of the command’s reorganization management team and with the senior leadership, when possible, to determine their roles and management approach in assisting the mission organizations’ efforts to reach full operating capability and to obtain an understanding of what reaching full operating capability means as a milestone in developing the new USSTRATCOM organization. We also held discussions with the senior staff of each mission organization on their criteria for measuring the organization’s progress toward full operating capability. To determine the extent to which USSTRATCOM has developed a results-oriented management approach to establish goals, continually track its progress, achieve better synergy among its missions, and gauge the results of its efforts, we reviewed key documentation and interviewed officials to determine what steps, if any, the command has taken to develop and follow this approach. We reviewed relevant GAO reports that identified and reviewed management approaches of other government and private sector organizations. We used the practices and implementation steps identified in these approaches as criteria for reviewing USSTRATCOM documents and for discussions with command officials about their approach to transforming the USSTRATCOM organization. We then compared USSTRATCOM’s approach against these examples of success that we had identified in other organizations to determine the extent to which USSTRATCOM had these elements in place. We reviewed key USSTRATCOM documents related to reporting on the command’s performance, including its first principles (i.e., its long-term goals), its biannual readiness reporting, and its annual training assessments. We reviewed the command’s implementation plan and related directives for establishing USSTRATCOM’s joint functional component commands.
We compared these documents to implementation plans used by other organizations, including the U.S. Atlantic Command and U.S. Northern Command, and reorganization plans, such as the Report to Congress on the Plan for Organizing the National Nuclear Security Agency and the Department of Homeland Security Reorganization Plan, to determine any differences in the elements and details for implementation that were considered in these plans and the extent to which they had developed, used, or planned to use outcome-based performance goals and measures. To assess the extent to which USSTRATCOM has made progress defining organizational responsibilities and establishing relationships with other DOD commands and organizations, we obtained and reviewed relevant documents and spoke with various officials involved in implementing and advocating for the command’s new missions about the command’s roles and related plans and actions. To determine the extent to which the command has clarified the roles and expectations of its service component organizations, we reviewed command documentation, including draft integrating guidance, concepts of operations, orders, plans, and other documents. We met with officials from each of the command’s service component/supporting commands and discussed the extent to which they believed the command’s guidance and expectations were sufficiently clear about their supporting roles. We also discussed with command officials the extent to which guidance was provided to the service components through meetings and other activities. To determine the extent to which USSTRATCOM has developed a common approach and comprehensive strategy to enhance its outreach to numerous DOD organizations on which its success depends, we met with the Commander, U.S. Strategic Command, and with officials in the command’s directorate responsible for advocacy.
We also met with senior leadership in all of the subordinate mission organizations to understand the extent to which a clear, coordinated, and unified outreach strategy is in place and to identify the range of methods and activities the command and its subordinate mission organizations use to engage and promote its missions and capabilities with combatant commands, military services, and DOD and other government organizations. We met with officials at the U.S. Joint Forces Command and U.S. Northern Command to discuss command relationships and the ways that USSTRATCOM officials performed outreach with these organizations, and to obtain their viewpoints on lessons that should be learned in communicating the command’s missions and responsibilities and their perspectives on USSTRATCOM’s progress. During USSTRATCOM’s Global Lightning exercise in November 2005, we also obtained insights from participants on the command’s effectiveness at performing its outreach activities. We also reviewed several GAO reports that addressed key practices organizations should implement during a significant reorganization or transformation. We used the reports to identify successful communication and outreach practices employed by other U.S. and foreign government organizations. We reviewed the USSTRATCOM commander’s summary report for its November 2005 Global Lightning exercise to identify any lessons learned, from participating in the exercise with two other combatant commands, on the success of the command’s outreach efforts. During our review, we obtained and analyzed USSTRATCOM budget and authorized personnel data to identify trends in acquiring the resources, personnel levels, and skills needed to implement the command’s missions.
We took steps to assess the reliability of the data used in these analyses, including (1) performing electronic testing of required data elements, (2) comparing the data to other independently prepared data sources, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for our purposes. For additional methodological details about how we performed our analyses, see appendixes I and II. We performed our work from May 2005 through June 2006 in accordance with generally accepted government auditing standards. In conducting our work, we contacted officials at the command’s headquarters, service, and functional components; think-tank organizations; and other relevant stakeholders. Table 6 provides information on the organizations and offices contacted during our review.

The United States Strategic Command (USSTRATCOM) organization comprises a command headquarters, joint functional component commands, task forces, and centers, which are located in or near one of four metropolitan areas: Omaha, Nebraska; Colorado Springs, Colorado; San Antonio, Texas; and Washington, D.C. Each of the command’s organizations is supported by a primary Defense agency or service partner organization. Table 7 shows the primary responsibilities and related information for key USSTRATCOM organizations.

In addition to the individual named above, Gwendolyn R. Jaffe, Assistant Director; Alissa H. Czyz; David G. Hubbell; Amanda M. Leissoo; Kevin L. O’Neill; Roderick W. Rodgers; and Mark J. Wielgoszynski, Analyst-in-Charge, made key contributions to this report.

In 2002, the President and Secretary of Defense called for the creation of the United States Strategic Command (USSTRATCOM) to anticipate and counter global threats. Currently, USSTRATCOM has responsibility for seven mission areas, including nuclear deterrence and integrated missile defense.
GAO was asked to determine the extent to which USSTRATCOM has made progress in (1) implementing its new missions and assessing mission results and (2) defining organizational responsibilities and establishing relationships with other Department of Defense (DOD) commands and organizations. To assess progress, GAO compared USSTRATCOM's efforts with lessons learned in implementing successful organizational transformations. Since its establishment in 2002, USSTRATCOM has made progress in implementing its new missions by taking a wide range of actions such as developing concepts of operations for its new missions, establishing processes and procedures, and identifying and obtaining personnel and resources needed to begin operations. However, further steps are needed to build on this progress in order to achieve the broad goals envisioned by the President and Secretary of Defense in creating the command. While the command's leadership recognizes the need to build on progress to date and has some additional actions underway to expand and enhance capabilities in its seven mission areas, GAO identified several areas in which more specific actions are needed to help the command achieve its vision. Specifically, the command has taken initial steps to include its new missions in its exercise program but has not yet fully developed a robust exercise program that integrates exercise support available from the U.S. Joint Forces Command, which can provide USSTRATCOM with several planning, training, and evaluation tools. In addition, most of USSTRATCOM's new mission organizations have not established clear criteria for determining when they will reach full operating capability. Furthermore, USSTRATCOM has not developed performance measures and criteria for assessing results across the command and in each of its mission areas. 
GAO's prior work examining organizational change and defense transformation shows that each of these tools is important for transforming organizations to increase their likelihood of success, particularly when multiple organizations are involved in mission execution. Developing plans in each of these areas should help the command demonstrate it can provide added value to the combatant commanders and give the President an expanded set of military options for responding to future threats--two key DOD goals. USSTRATCOM has also made progress in establishing an overall organizational framework and identifying subordinate mission organizations that have responsibility for the daily management of operations. However, it has not fully clarified roles and expectations of its service component organizations and has not developed a commandwide approach for enhancing outreach to other DOD organizations. While USSTRATCOM has provided some guidance to its service component organizations, this guidance has not been specific or well documented, and as a result the Army, Navy, and Air Force do not fully understand what is expected of them in providing support to the command. In addition, while USSTRATCOM conducts some outreach with other combatant commands and organizations, it lacks a commandwide approach to effectively manage outreach activities. GAO has previously found that it is essential for organizations to develop a comprehensive communication strategy that seeks to engage customers and stakeholders. Providing additional guidance and developing a communications strategy should help USSTRATCOM's service component organizations to better understand their roles and enable the command to build effective relationships with other commands.
Medicare’s payments in MSP situations can vary depending on the circumstances of the situation. CMS oversees all MSP activities and administers the MSP program, with contractors performing most of CMS’s administrative activities within the process for MSP situations involving NGHPs. The process for MSP situations that involve NGHPs generally includes five basic components—notification, negotiation, resolution, mandatory reporting, and recovery. Medicare payments can vary in different MSP situations. In most MSP situations involving NGHPs, Medicare will pay initially for medical treatment related to the incident and later seek to recover those payments. When CMS is notified that an MSP situation exists in which an NGHP has accepted primary responsibility for ongoing medical services, Medicare will start denying the related claims. However, more commonly, CMS is notified about a potential MSP situation that is not yet resolved, and Medicare continues to make payments until the situation is resolved and there is a settlement, judgment, award, or other payment. Medicare does this to ensure that the beneficiary has access to needed medical services in a timely manner. CMS refers to any payments made by Medicare for services where another payer has primary responsibility for payment as conditional payments. Once a resolution is reached between the beneficiary and the NGHP, Medicare will seek to recover any conditional payments made. To help prevent Medicare from making future payments related to MSP situations involving NGHPs, when an individual is expected to have future medical expenses (including Medicare-covered drug expenses) related to his/her accident, injury, or illness, CMS states that all parties involved in negotiating a resolution of those situations are responsible for protecting Medicare’s interests. 
One way to accomplish this is through a Medicare set-aside arrangement (MSA)—a voluntary arrangement in which a portion of the proceeds from a settlement is set aside to pay for all related future medical expenses that would otherwise be reimbursable by Medicare if Medicare were the primary payer. Medicare then makes no payments for medical expenses related to the MSP situation until the MSA funds are exhausted. While MSAs can be used in liability or no-fault situations, they are most common for workers’ compensation situations, where they are known as Workers’ Compensation Medicare Set-Aside Arrangements (WCMSA). CMS oversees all MSP activities and administers the MSP program through activities such as developing program policy and guidance. In addition, CMS communicates to stakeholders—including NGHPs, beneficiaries, providers, and attorneys—about the MSP process, policies, and guidance. For example, CMS maintains websites related to parts of the MSP process, from which NGHPs and beneficiaries can obtain information about their respective responsibilities in MSP situations involving NGHPs. GAO has established guidelines on internal control that are relevant for federal agencies such as CMS. Internal control includes the components of an organization’s management that provide reasonable assurance that certain objectives are being achieved, including effective communication with external stakeholders. Since 2006, CMS has had three contractors to perform most of its administrative activities within the MSP process: the Coordination of Benefits Contractor (COBC), the Medicare Secondary Payer Recovery Contractor (MSPRC), and the Workers’ Compensation Review Contractor (WCRC). Current contractor responsibilities are as follows: COBC: The COBC collects, manages, and maintains information in the CMS data systems about other health insurance coverage for Medicare beneficiaries and initiates MSP claims investigations. The information the COBC collects is available to other CMS contractors.
MSPRC: The MSPRC uses information updated by the COBC as well as information from CMS’s data systems to identify and recover Medicare payments that should have been paid by another entity as primary payer. Once a resolution has been reached between the beneficiary, or other individuals authorized by the beneficiary, and the NGHP, the MSPRC calculates the final amount owed to Medicare and issues a demand letter to the beneficiary or other individual authorized by the beneficiary. WCRC: The WCRC evaluates proposed WCMSA amounts and projects future medical expenses related to workers’ compensation accident, injury, or illness situations that would otherwise be payable by Medicare. The WCRC generally only reviews proposed WCMSA amounts for current Medicare beneficiaries within certain thresholds, referred to as CMS workload review thresholds. WCRC-recommended WCMSA amounts are forwarded to one of six CMS regional offices for final approval. The process for MSP situations that involve NGHPs generally includes five basic components—notification, negotiation, resolution, mandatory reporting, and recovery. However, the details of the process, and the administrative tasks that must be conducted, can vary depending on when in the process notification occurs, the type of insurance involved (liability, no-fault, or workers’ compensation), and the type of resolution reached. While the details and the timing of notification may vary by situation, in general, the process contains the following components: Notification: The COBC is notified that a beneficiary’s accident, injury, or illness is an MSP situation and creates a record. Notification can come from various sources—including the beneficiary, an attorney, a physician, or the NGHP—and can occur at various times during the process for MSP situations involving NGHPs.
While mandatory reporting requires NGHPs to report MSP resolutions to the COBC, NGHPs or other involved parties may also provide voluntary notification earlier in the process. For example, a beneficiary’s attorney could provide notification of an MSP situation involving an NGHP shortly after an accident occurs. After notification of the MSP situation, Medicare usually continues to make conditional payments although it may begin denying claims. Once the record for an MSP situation is created by the COBC, the MSPRC issues an MSP rights and responsibilities letter to the beneficiary or the beneficiary’s representative, such as an attorney, which explains the applicable MSP law and how MSP recovery works. Negotiation: Negotiation occurs between the NGHP and the injured beneficiary or the beneficiary’s representative. The point in the process at which notification of a potential MSP situation is made can affect the number and amount of conditional payments made by Medicare as well as whether, and the extent to which, information on conditional payments is available during the negotiation. For example, if CMS is notified about a potential MSP situation early in the process, the MSPRC can provide information about any related claims that it has identified as paid by Medicare. This information is provided in writing through a conditional payment letter and may be used during the negotiations. For workers’ compensation situations that involve future medical expenses, the WCRC may be involved in reviewing proposed WCMSA amounts. Resolution: Resolution is reached between the beneficiary or the beneficiary’s attorney and the NGHP. The type of resolution varies and can include the NGHP assuming ongoing responsibility for payment of medical claims related to the injury or illness, a lump sum payment, a Medicare set-aside arrangement, or a combination of any of these.
The beneficiary or the beneficiary’s representative submits the resolution information to the MSPRC. For resolutions that include a WCMSA, no payments are made by Medicare for medical expenses related to the workers’ compensation injury or illness until the set- aside is exhausted. The administrator of the WCMSA, typically the beneficiary or the beneficiary’s representative, must submit an annual accounting of the set-aside funds to the MSPRC. Mandatory reporting: The NGHP reports the resolution to the COBC. Regardless of whether notification of the MSP situation occurred earlier in the process, after a resolution is reached in which the Medicare beneficiary or someone on the beneficiary’s behalf receives a settlement, judgment, award, or other payment from the NGHP, the NGHP is required to report information about the MSP situation and its resolution to the COBC under mandatory reporting. The data NGHPs are required to submit include information to identify the beneficiary; diagnosis codes for the injury, accident, or illness; information concerning the policy or insurer; information about the injured party’s representative or attorney; and settlement or payment information. Recovery: The MSPRC seeks to recover Medicare’s conditional payments that have been made. The MSPRC calculates the total amount owed to Medicare and issues a demand for payment— referred to as a demand letter. This letter is typically issued to the beneficiary or the beneficiary’s representative. The MSPRC compares the resolution data reported by the NGHP under mandatory reporting to any resolution data submitted by the beneficiary, or the beneficiary’s representative, to ensure that the resolution data match. Either payment is received and the case closed or a response is received challenging all or part of the demand. If no response is received, debt delinquent more than 180 days is referred to the Department of the Treasury for collection action. 
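The recovery outcomes described above can be summarized as simple decision logic. The sketch below is illustrative only: the class, function, and field names are our own, not part of any CMS or MSPRC system, and it omits the appeal and waiver details.

```python
from dataclasses import dataclass

@dataclass
class RecoveryCase:
    paid_in_full: bool     # payment received, case can be closed
    challenged: bool       # response received challenging all or part of the demand
    days_delinquent: int   # days past due with no response

def next_recovery_step(case: RecoveryCase) -> str:
    # Illustrative mapping of the outcomes described in the report.
    if case.paid_in_full:
        return "close case"
    if case.challenged:
        return "review the challenge to all or part of the demand"
    if case.days_delinquent > 180:
        return "refer debt to the Department of the Treasury for collection"
    return "await response to the demand letter"

print(next_recovery_step(RecoveryCase(paid_in_full=False, challenged=False,
                                      days_delinquent=200)))
```

Under this sketch, a case with no payment and no challenge that is more than 180 days delinquent is referred to Treasury, matching the rule stated in the report.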
The beneficiary has the right to question, appeal, or request a waiver of recovery of the amount demanded. Figures 2, 3, and 4 illustrate how the process could work for MSP situations that involve an auto liability insurer, a no-fault insurer, and a workers’ compensation plan, respectively. In each case, the timing of notification and the parties involved in each step can vary. During the initial implementation of mandatory reporting for NGHPs, the workloads of the MSP contractors, CMS payments to those contractors, and Medicare savings all increased. For example, since fiscal year 2008, CMS payments to the MSP contractors have increased by about $21 million while Medicare savings from NGHP MSP situations—including savings from claims denials and conditional payment recoveries—have increased by about $124 million. However, because mandatory reporting is still being phased in, particularly for most liability settlements, it is too soon to determine the full impact of its implementation. CMS MSP contractors’ NGHP workloads increased during the initial implementation of mandatory reporting, and workloads are expected to continue to increase as mandatory reporting is phased in. The NGHP workloads of all three MSP contractors increased to varying degrees during the initial implementation of mandatory reporting. For example, from fiscal year 2008 through fiscal year 2011, the number of MSP situations involving NGHPs that were voluntarily reported to the COBC increased by 176 percent and the number of WCMSAs submitted to the WCRC increased by 42 percent (see table 1). Although mandatory reporting for NGHPs did not begin to be phased in until January 1, 2011, CMS officials told us that the effects of the mandate began earlier as the voluntary reporting of MSP situations (by NGHPs, attorneys, or beneficiaries) increased after the law’s passage in December 2007.
CMS officials told us they expect that the COBC’s and MSPRC’s workloads will continue to increase once mandatory reporting is phased in for most liability MSP situations. CMS officials and an NGHP stakeholder group both told us that many liability MSP situations were not reported to CMS prior to mandatory reporting. CMS officials could not estimate the extent of future increases because CMS has no reliable estimates on the actual number of liability cases that include MSP situations. The increased number of WCMSA proposals submitted to the WCRC during the past 4 years may be due, in part, to the NGHP industry’s increased submission of ineligible and $0 WCMSA proposals in reaction to mandatory reporting. While the number of WCMSA submissions increased by 42 percent from fiscal year 2008 through fiscal year 2011, some of these submissions were not eligible for WCRC review—for example, they did not meet the minimum reporting thresholds—and the number of ineligible WCMSA submissions has grown rapidly. Ineligible submissions increased by about 148 percent from 2008 through 2011, growing from about 4,500 ineligible submissions in 2008 to about 11,200 ineligible submissions in 2011. Although mandatory reporting did not add any new WCMSA requirements, a CMS official told us the NGHP industry may be submitting more WCMSA proposals that are not eligible for WCRC review because it wants documentation from CMS stating that a WCMSA did not meet CMS’s review thresholds. Similarly, although not directly related to any reporting requirements, WCRC officials said that they have also seen an increase in $0 WCMSA proposals. A workers’ compensation plan may submit these proposals when a settlement amount meets the minimum thresholds and is eligible for WCRC review, but the plan is asserting that it does not have responsibility for paying the beneficiary’s future medical expenses. 
WCRC officials told us that when an NGHP submits a $0 WCMSA proposal, it may be seeking CMS confirmation that it does not have responsibility for paying the beneficiary’s future medical expenses. The total amount of CMS payments to the MSP contractors increased during the initial implementation of mandatory reporting. Total CMS payments to the MSP contractors in fiscal year 2011 were about $21 million higher than payments in fiscal year 2008 (see table 2). Payments for the MSPRC’s services increased by the greatest amount over this period—increasing about $16 million from 2008 through 2011. While CMS’s overall contractor payments increased during this time period, the percentage increases in payments to the COBC and MSPRC were substantially lower than the increases in their workloads (see table 3). In order to control costs and contractor workloads, CMS is taking steps to improve the overall efficiency of the MSP program. CMS officials told us that they intend to move the MSP program to more of a “self-service” model. In this model, NGHPs, attorneys, and beneficiaries could obtain or submit required information through contractor websites or contractor automated phone lines, rather than submitting information via mail or fax, or waiting to speak to a customer service representative, as has traditionally been the process. This may result in increased efficiencies in the MSP process, for example, by allowing both NGHP stakeholders and MSP contractors to receive necessary information more quickly. Officials estimated that these steps will reduce the per-case workload performed by the MSP contractors.
Medicare savings increased during the initial implementation of mandatory reporting for NGHPs, but an accurate estimate of savings could take years to determine because of the lag time between initial notification of MSP situations and recovery, the fact that not all reported situations result in recoveries, and the fact that mandatory reporting is still being phased in. MSP savings from known NGHP situations that CMS is able to track—including savings from claims denials and conditional payment recoveries—increased by about $124 million from fiscal year 2008 through fiscal year 2011. Savings attributable to liability insurance increased by the greatest amount during this time period, growing from about $342 million in fiscal year 2008 to about $448 million in fiscal year 2011. In addition to these savings, Medicare also avoids costs as a result of the use of MSAs. CMS only tracks cost-avoided savings attributable to approved WCMSA proposals, not other types of MSAs, and accounts for the savings by reporting the total WCMSA amounts approved each fiscal year. These numbers therefore represent the maximum cost-avoided savings that could potentially be realized through these WCMSAs in the future. See table 4 for the total amount of MSP savings from NGHP situations and WCMSAs approved from fiscal year 2008 through fiscal year 2011. Because of a change in CMS policy implemented in 2009, it is unclear to what extent the increases in approved WCMSA amounts can be attributed to mandatory reporting. While Medicare savings attributable to NGHP MSP situations have been increasing overall, it is too soon to determine the total impact that mandatory reporting will have on NGHP Medicare savings. Savings amounts have not increased as quickly as the overall increase in NGHP MSP situations reported to CMS. There are two reasons why this may be occurring. 
CMS officials told us that because it can take several years for a case involving an NGHP MSP situation to be resolved, there is a delay between when increases are seen in the number of new situations reported and when increases are seen in the amounts of demands and recoveries. Additionally, since there is not necessarily a recovery demand issued for every NGHP situation reported, an increase in the number of reported cases will not necessarily result in a corresponding increase in recoveries. Some MSP situations represent cost-avoided savings, but CMS officials told us that, to the extent these situations are working appropriately and CMS is not receiving claims, they have no way of knowing the associated savings. Within the process for MSP situations involving NGHPs, we identified key challenges related to contractor performance, demand amounts, aspects of mandatory reporting, and CMS guidance and communication. CMS has addressed, or is taking steps to address, some, but not all, of these challenges. Challenges related to the timeliness of the MSPRC and WCRC were identified, including recent significant increases in the time required to complete certain processes or tasks, and CMS reported taking steps to address each contractor’s performance challenges. Problems related to the timeliness of the MSPRC have been identified, and several actions have been taken or are under way by CMS to address these problems. NGHPs and beneficiary advocates have cited performance problems with the MSPRC that include the length of time taken to answer phone calls and to issue demand letters after resolutions for an MSP situation were provided to the MSPRC. MSPRC data show that from fiscal year 2008 through fiscal year 2011 the average wait time for NGHP callers increased from an average of less than 3 minutes to an average of more than 38 minutes.
During that same period, the number of NGHP-related calls handled by the MSPRC’s customer service representatives increased from about 550,000 in fiscal year 2008 to about 630,000 in fiscal year 2011, and the number of calls abandoned after 31 seconds or more increased from about 30,000 in fiscal year 2008 to about 220,000 in fiscal year 2011. CMS officials told us that while the MSPRC did not have a specific performance standard for average call wait times in its contract, they found the current average wait time of over 38 minutes for NGHP-related phone calls unacceptable. In fiscal year 2011, the MSPRC averaged about 76 days to issue a demand letter when notice of settlement was the initial notification of the MSP situation to the MSPRC. If the MSPRC was aware of the MSP situation prior to receiving the notice of settlement, it averaged about 48 days to issue a demand letter. Delays in issuing demand letters could result in delays in distributing funds from MSP situation resolutions to beneficiaries. CMS officials stated that the agency has a performance standard under which issuance of a demand letter within 20 days is considered timely if the case was established prior to settlement and the initial conditional payment letter was issued. CMS and MSPRC officials attributed some of the MSPRC’s performance challenges to higher-than-expected workloads. MSPRC officials attributed their inability to keep up with increased call volumes to a lack of resources, stating that since the contract’s inception they have not been adequately funded by CMS for their workloads. They stated that CMS has consistently underestimated the annual volume of calls the MSPRC would receive. CMS officials acknowledged that when the contract started in 2006, at which time the MSP recovery tasks were transitioned from CMS claims contractors to the MSPRC, CMS underestimated the MSPRC workload.
Officials said that just when the MSPRC was close to catching up from that transition, mandatory reporting was announced, which created a new, additional workload. CMS reported that the agency was taking several steps intended to address MSPRC performance challenges. For example, CMS did not renew the contract with the entity that had served as the MSPRC since October 1, 2006, and is planning to make a significant change to its current MSP contracting structure by combining the functions of the current COBC and MSPRC. CMS intends to streamline the MSP data and debt collection processes for Medicare stakeholders by establishing a centralized coordination of benefits and MSP recovery organization. CMS reports that this approach will allow the agency to minimize duplicative activities that were previously performed by both the COBC and MSPRC, provide a single point of contact for internal and external stakeholders, and consolidate MSP responsibility under one umbrella. CMS is also working to develop a web-based MSPRC portal, which will enable beneficiaries and beneficiaries’ representatives to, among other things, obtain information about their Medicare claim payments. Table 5 presents the steps CMS is taking to address MSPRC performance challenges and the anticipated results of taking these steps. Most of these steps were either implemented only recently or have not yet been implemented; therefore, it is too soon to tell to what extent the functions currently performed by the MSPRC will improve as a result of these actions. The average processing time for the WCRC to review WCMSA proposals has increased significantly over the past year and a half, resulting in delays in the resolution of MSP cases, and several actions have been taken or are under way by CMS that are intended to reduce processing time. According to WCRC data, the average processing time for all cases increased from 22 days in April 2010 to 95 days in September 2011 (see fig. 5).
While the current WCRC contract does not include a performance standard related to the length of time for the WCRC to review submitted WCMSA proposals, WCRC officials told us they would like WCMSA reviews to be completed within 45 days. CMS and WCRC officials report that a number of factors contributed to the WCRC’s review process taking longer, including increased workload. For example, while in fiscal year 2011 the WCRC contract estimated that the WCRC would review 1,700 WCMSA proposals each month, the WCRC received an average of about 2,400 WCMSA proposals per month and was able to review an average of about 2,100 per month. As a result, a backlog grew. According to WCRC data, over the past several years, an increasing number of submitted WCMSA proposals were determined by the WCRC to be ineligible for review, meaning that more of the WCRC’s time has been spent responding to ineligible proposals. Also, CMS reported that a change made to the data system used by the WCRC to process WCMSAs resulted in a decrease in system performance, which significantly increased review time from September 2010 through January 2011, adding to the backlog of WCMSA proposals to be reviewed. Several actions have been taken or are under way by CMS to reduce the average processing time for WCMSA proposal review. For example, in fiscal year 2011 CMS provided the WCRC with additional funding that enabled the WCRC to authorize overtime for its employees to attempt to decrease the existing backlog of submitted WCMSA proposals. CMS is also currently in the process of awarding a new WCRC contract. According to CMS officials, the new contract provides for an increased estimated number of monthly WCMSA proposal reviews—increasing the number from 1,700 a month to between 2,000 and 2,500 a month. Additionally, CMS implemented a web-based portal—the WCMSA Portal (WCMSAP)—which is intended to improve the efficiency of the WCMSA submission process.
The WCMSAP allows registered users, such as beneficiaries, attorneys, and insurance companies, to directly enter WCMSA case information electronically, upload documentation, and receive up-to-date case status information electronically. CMS conducted a pilot test with 10 WCMSA submitters that, according to COBC officials, collectively represented 80 percent of all WCMSA submissions. We contacted the 10 WCMSA submitters that participated in the WCMSAP pilot, and they told us that the WCMSAP could improve the overall WCMSA submission and review process. The WCMSAP became available for use by all WCMSA submitters on November 29, 2011. Finally, CMS hired a contractor to conduct an assessment of its WCMSA process, which could result in recommendations to address related policies and procedures, such as the average processing time. CMS officials told us that they expected to receive a draft of the contractor’s report in March 2012, with a final report in June 2012. We identified three key challenges related to demand and recovery of MSP amounts. They include challenges related to the timing of the final demand amounts, the cost-effectiveness in recovery efforts, and the amounts demanded in liability settlements. CMS officials reported that the agency was taking steps to address some, but not all, of these challenges. Stakeholders, such as attorneys and NGHPs, reported that because CMS does not provide a final demand amount prior to a settlement, they have difficulty determining an appropriate settlement amount, which delays settlements. CMS is taking several steps to address this challenge. NGHP stakeholders reported that it would be helpful if CMS could calculate a final demand amount that can be provided to concerned parties prior to settlement, rather than after settlement.
CMS officials stated, however, that they do not know what the final demand amount will be because Medicare continues to make conditional payments up to the settlement date. CMS officials also noted that during settlement negotiations, beneficiaries can view all their claims paid to date by Medicare on the MyMedicare.gov website. CMS is taking steps that may improve NGHP stakeholders’ ability to obtain or estimate Medicare’s demand amount prior to settlement. For example, as of September 30, 2011, beneficiaries can obtain the latest issued Medicare conditional payment amounts through an automated, self-service feature of the MSPRC phone line. In November 2011, CMS implemented an option for beneficiaries to pay Medicare a fixed 25 percent of their settlement amount for certain liability situations involving a physical-trauma-based injury with settlement amounts of $5,000 or less. In December 2011, CMS announced an option for beneficiaries, beginning in February 2012, to self-calculate conditional payment amounts for liability insurance MSP situations with settlement amounts of $25,000 or less that involve physical-trauma-based injuries. The MSPRC will review the proposed self-calculated conditional payment amount and, if it finds the amount accurate, will respond with Medicare’s final conditional payment amount within 60 days. CMS has sometimes spent more in administrative costs attempting to recover certain conditional payment amounts than the demands are actually worth, but has recently implemented two initial recovery thresholds and may consider additional thresholds once it has had an opportunity to review 2012 data. NGHP stakeholders provided an example of a demand letter issued by the MSPRC for an amount of $1.59; one NGHP stakeholder noted seeing numerous examples of demand letters for amounts less than $5.00.
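The two beneficiary options described above (the fixed 25 percent payment for settlements of $5,000 or less, and self-calculation for settlements of $25,000 or less) can be sketched as a simple eligibility check. This is an assumption-laden simplification for illustration only, not CMS policy logic: both options apply only to liability situations involving physical-trauma-based injuries, and the many further conditions and exclusions are omitted.

```python
def presettlement_options(settlement: float, physical_trauma: bool) -> list[str]:
    """Illustrative sketch of the 2011-2012 beneficiary options described
    in the report; thresholds and percentages are taken from the report,
    everything else is simplified."""
    options = []
    if not physical_trauma:
        return options
    if settlement <= 5_000:
        # Fixed-percentage option: pay Medicare 25 percent of the settlement.
        options.append(f"fixed payment of ${settlement * 0.25:,.2f} (25% of settlement)")
    if settlement <= 25_000:
        options.append("self-calculate conditional payment amount for MSPRC review")
    return options

for opt in presettlement_options(4_000, physical_trauma=True):
    print(opt)
```

For a $4,000 physical-trauma settlement, both options would apply under this sketch; for a $20,000 settlement, only the self-calculation option would.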
MSPRC officials confirmed that they have traditionally pursued any recoveries the MSPRC was made aware of, regardless of the administrative costs to recover them. In 2004 we noted the importance of improving the cost-effectiveness of the MSP recovery process, and CMS concurred with our recommendations. The cost-effectiveness of recovery has improved greatly. In 2004, we reported that CMS recovered only 38 cents for every dollar spent on recovery activities in fiscal year 2003, but in June 2011 CMS reported that MSP activities have provided an average rate of return on recoveries of $9.32 for each dollar spent since fiscal year 2008. NGHP stakeholders suggested that CMS should take an additional step to improve cost-effectiveness by setting a recovery threshold based on the settlement amount that would likely yield a recovery amount at or above CMS’s cost to recover that money. Among other things, the Federal Claims Collection Act permits the Secretary of Health and Human Services to end collection actions on certain claims when the cost of collecting such claims is likely to be more than the amount recovered. 31 U.S.C. § 3711(a)(3). See also 31 C.F.R. §§ 903.1-903.5. Additionally, the MSP statutory provision provides for the waiver of conditional payment requirements, including repayment, when the Secretary determines that such a waiver is in the best interests of Medicare. 42 U.S.C. § 1395y(b)(2)(B)(v). CMS may consider additional recovery thresholds for certain NGHP situations once officials have had a chance to review 2012 data, which will include information on some liability MSP situations. NGHP stakeholders suggested that because CMS does not recognize the concept of proportionality in liability settlement situations, a disproportionate share of liability settlement amounts may be paid to Medicare; however, CMS has a process that may sometimes address this challenge.
The concept of proportionality in liability settlement amounts is relevant in situations when individuals and liability insurers agree to settle for less than the full amount of incurred expenses associated with the alleged incident, and therefore the amount of medical expenses to be reimbursed to an individual’s health plan is proportionally reduced. NGHP stakeholders said that CMS does not recognize this concept for MSP situations and instead wants 100 percent reimbursement of claims it paid. They assert that CMS should recognize proportionality in these situations and likewise proportionally reduce Medicare’s demand amount in these cases. NGHP stakeholders stated that if CMS does not proportionally reduce Medicare’s demand amount in these situations, it could leave beneficiaries without any compensation for issues such as pain and suffering or lost wages. However, CMS officials said that the concept of proportionality is in conflict with MSP provisions granting CMS a priority right of recovery, which entitles Medicare to full recovery for the expenses it paid up to the settlement amount. Nonetheless, CMS officials said that Medicare beneficiaries may contact the appropriate CMS regional office prior to settling a case to request a pre-demand compromise in the event that the demand amount would consume the entire settlement. CMS officials told us that they do not, however, advertise the availability of this option and do not keep data on how often compromises are requested or granted. Limited MSPRC data on those compromise requests of which the MSPRC is made aware suggest that about two out of every three compromise requests are approved by the reviewing CMS regional office. We identified three key challenges related to aspects of mandatory reporting for NGHPs: determining whether individuals are Medicare beneficiaries, supplying diagnostic codes related to individuals’ injuries, and reporting all settlement amounts. 
CMS reported that it is taking steps to address some, but not all, of these challenges. NGHP stakeholders reported difficulty in determining whether individuals are Medicare beneficiaries for the purposes of mandatory reporting, and CMS has taken a step to address the challenge and is considering another. NGHP stakeholders have reported difficulty obtaining the information needed from individuals involved in NGHP situations in order to determine whether an individual is a Medicare beneficiary, and whether the NGHP is therefore required to report the situation. In order to verify an individual’s Medicare eligibility, an NGHP either needs the person’s Medicare Health Insurance Claim Number (HICN) or the person’s Social Security number, first initial of the first name and last six characters of the last name, date of birth, and gender. NGHP stakeholders report that individuals are reluctant to surrender sensitive information, such as Social Security numbers, to NGHPs—particularly as there may be an adversarial relationship between the individual and the NGHP (i.e., the individual is suing the insurer or self-insured company). Without this information, the NGHP cannot verify whether the individual is a Medicare beneficiary and cannot submit the mandatory reporting record. NGHP stakeholders therefore also expressed concern that they could be subject to mandatory reporting noncompliance fines if their inability to obtain this information prevented them from submitting a mandatory reporting record. To assist NGHPs with this challenge, CMS has provided them with model language that they can use to document their unsuccessful attempts to obtain individuals’ HICNs or Social Security numbers. This model language is a sample statement to be signed by the individual indicating whether the individual is a Medicare beneficiary for use in cases when the NGHPs cannot otherwise determine the individual’s Medicare status.
CMS has stated that if an individual refuses to furnish a HICN or Social Security number, and the NGHP reporting entity chooses to use this model language, CMS will generally consider the reporting entity compliant for purposes of mandatory reporting. Additionally, CMS officials stated that CMS and a number of other federal agencies are currently conducting internal studies to evaluate possible alternatives that could be used in lieu of Social Security numbers. ICD-9 codes are used by the Medicare claims contractors to determine whether specific Medicare claims should be denied or paid. If NGHPs submit incorrect ICD-9 codes, beneficiaries’ claims that are actually unrelated to their MSP situations could be incorrectly denied. Some NGHP stakeholders also questioned the need to supply these codes, noting that when Medicare paid the claims at issue, records containing the associated ICD-9 codes were created within the CMS data systems. Therefore, CMS would already have had access to the related codes. Some NGHP stakeholders assert that they should not have to report all liability settlements, as CMS may be able to recover very little from certain settlements, and CMS is evaluating data to determine if appropriate reporting thresholds could be established. An official of an organization representing NGHPs has stated that liability settlements of less than $25,000 include a small portion of annual settlement payments but constitute a large number of individual claims. Therefore, the official suggested that liability NGHPs should not have to report these settlements to CMS as it would just increase the reporting burden on NGHPs while yielding small recovery amounts. However, recovery data show that for fiscal year 2011, the MSPRC issued almost 57,000 demands for liability settlements under $25,000. The demands related to these settlements totaled almost $71 million, with an average demand amount of about $1,250. Nonetheless, CMS is evaluating its data and the agency is considering implementing reporting thresholds, if appropriate. However, CMS officials expressed concern that setting reporting thresholds could have unintended consequences.
If thresholds were set at, for example, $25,000, then the NGHP industry might begin settling many cases at amounts just under $25,000 in order to avoid mandatory reporting. CMS officials reported that any determination of reporting thresholds should wait until liability reporting data are available so the data can be analyzed and an appropriate threshold set. CMS officials also note that the establishment of any mandatory reporting thresholds would not eliminate CMS’s recovery rights for settlements below the threshold. We identified key challenges related to CMS guidance and communication of information on the MSP process, guidance on MSAs, and beneficiary rights and responsibilities related to MSP recoveries, resulting in communication of information that does not meet GAO standards for internal control. CMS has taken few steps to address these challenges. The overall presentation and organization of MSP process guidance for situations involving NGHPs on the CMS website does not ensure that pertinent information can be identified by external stakeholders, including NGHPs. For example, there is no main web page for the MSP program. Instead, information relevant to the MSP process for situations involving NGHPs is categorized on the main Medicare home page in two separate sections—some MSP process information falls under “Coordination of Benefits” and other process information falls under “Medicare Secondary Payer Recovery.” This makes it difficult to find any recent developments or changes to the MSP process as a whole, as an individual has to check multiple web pages to locate recent news. Additionally, while CMS has created an MSP manual, there is no direct link to the manual under the Coordination of Benefits or Medicare Secondary Payer Recovery headings on the Medicare home page. 
Also, because CMS regularly updates its MSP policies and process by issuing memos or “alerts,” it is difficult to determine what the current policy is or what may have changed in the process. CMS has issued guidance regarding WCMSAs, but finding current, official WCMSA guidance can be challenging, and CMS has issued little other MSA guidance. While CMS has a policy manual describing the MSP process in general, no similar manual, or chapter within that manual, describes WCMSA policy. Further, while guidance in the form of memorandums related to WCMSAs exists, no manual or similar document currently exists to organize this guidance. The WCMSA-related memorandums are accessible on the CMS website, but are poorly organized, making it difficult to find memorandums on particular topics. As a result, NGHP stakeholders have reported that it is difficult to find updated WCMSA policies. However, CMS officials told us in January 2012 that the agency was developing a WCMSA user manual that would be available through the CMS website. Stakeholders also said that the WCMSA review and approval criteria are not clear, and expressed a desire for CMS to make this information more transparent. Furthermore, CMS has established an e-mail address to accept questions regarding WCMSA submission policy, but the actual e-mail address is not well publicized and is difficult to find. Additionally, while guidance exists for WCMSAs, CMS has issued very little guidance related to liability MSAs, and NGHP stakeholders reported inconsistent handling of liability MSAs. CMS issued its first formal memorandum related to MSAs for liability situations on September 29, 2011, detailing when it would consider Medicare’s interests satisfied with respect to future medical expenses in liability settlements. But this is the only formal memorandum related to liability MSAs that CMS has provided.
And unlike for WCMSAs, CMS does not have a formal review and approval process for liability or no-fault MSA arrangements. Upon request, some CMS regional offices will review liability or no-fault MSAs, but this is at the regional office’s discretion. NGHP stakeholders report variation in regional office response, including which regional offices will review liability MSAs, policies (such as setting thresholds for review), and regional office responsiveness. Regarding developing policies and procedures for liability MSAs, CMS officials report that the agency is working to operationalize policy regarding the reporting of future medical expenses in liability insurance situations, including an option to allow for an immediate payment to Medicare for future medical costs. This would provide an additional option for taking Medicare’s interests into account rather than the option of establishing an MSA. CMS officials did not report that they were taking any steps to address regional office variation in liability MSA review. CMS communications with beneficiaries regarding their rights and responsibilities in the MSP recovery process are not always sufficient or clear and CMS has taken few steps to address this challenge. Specifically, two letters sent to beneficiaries are not sufficient or clear with regard to the beneficiary’s rights to dispute unrelated claims. The rights and responsibilities letter, which is sent to beneficiaries by the MSPRC after it is notified of an MSP situation, does not make beneficiaries’ rights and responsibilities clear regarding their ability to dispute the conditional payments that the MSPRC identifies. While the letter notes that the beneficiary should expect to receive a letter detailing the conditional payments Medicare has made to date, it does not explain that this letter may contain some unrelated claims and the beneficiary should review the document carefully. 
Furthermore, it does not explain that the beneficiary has the right to dispute any claims unrelated to the MSP situation. While CMS revised the rights and responsibilities letter in 2011, the revisions did not address these issues. Beneficiaries also receive a conditional payment letter, which CMS regards as a first step in determining conditional payments, but that is not made clear to the beneficiaries. Beneficiary advocates report that these letters often include charges for unrelated medical services. As a consequence, according to an attorney who represents Medicare beneficiaries, beneficiaries are often asked to return too great a portion of their settlements to Medicare. CMS officials stated that they consider the conditional payment letter the first attempt at determining the conditional payments based on the information the MSPRC has, and that they want the beneficiary and beneficiary’s attorney to help clarify which claims are related. They told us that the beneficiary is in the best position to clarify which claims are related, and that the MSPRC will work with the beneficiary and the beneficiary’s attorney prior to issuing the demand letter. However, while the conditional payment letter states that the beneficiary should inform the MSPRC if any of the identified conditional payments are inaccurate or incomplete, the language used in the conditional payment letter does not convey that the MSPRC will work with the beneficiary and the beneficiary’s attorney prior to issuing the demand letter. Additionally, the letter does not mention that CMS may be willing to compromise its demand amount if it appears the conditional payments will consume the beneficiary’s entire settlement. Nor does the letter explain the beneficiary’s rights, once the demand letter is issued, to appeal the amount of the MSP claim or to seek a waiver of recovery. CMS did not report any plans to revise the language used in the conditional payment letter.
CMS has a responsibility to protect the Medicare Trust Funds by ensuring that funds owed the program are recovered. Mandatory reporting should increase CMS’s awareness of MSP situations and therefore increase recoveries and MSP savings. Thus far, the initial implementation of mandatory reporting for NGHPs has greatly increased the number of MSP NGHP situations reported to CMS. MSP savings have also shown increases and should continue to increase as mandatory reporting is fully implemented. However, the volume of liability settlements that have yet to be reported to CMS is unknown; therefore, the extent to which workloads, recoveries, and savings will increase is also unknown. As a result of mandatory reporting, some NGHPs, particularly liability insurers, are interacting with CMS for the first time. Some of these NGHPs and NGHP stakeholder groups have raised concerns about long- standing MSP process and policies. Additionally, mandatory reporting increased the MSP contractors’ workloads, leading to performance delays. CMS has been responsive to some of the concerns expressed by NGHPs, in particular by continuing to delay the start of mandatory reporting for various types of liability settlements. CMS has also evaluated and modified some of its long-standing MSP policies and procedures, and is in the process of considering additional changes and program improvements. However, because these changes are new or still being implemented, it is too soon to tell the effect that they will have on improving the MSP process. Additionally, there are several areas related to the MSP program and process that still need improvement. In order to maximize its ability to protect the Medicare Trust Funds, CMS’s efforts to recover conditional payments when Medicare should not have been the primary payer need to be cost-effective. 
CMS recently implemented two recovery thresholds—a low, across-the-board threshold based in part on provisions in the Federal Claims Collection Act and a higher threshold that applies to certain liability MSP situations. CMS officials said the agency will consider setting additional recovery thresholds for certain NGHP situations once the agency has had a chance to review 2012 data. If the 2012 data indicate that recovery thresholds need adjustment, CMS could adjust them as appropriate. CMS could also improve program effectiveness by aligning mandatory reporting thresholds with recovery thresholds, once they are set. Additionally, CMS has opportunities to improve the MSP program by reducing specific reporting requirements for NGHPs and improving communication with stakeholders. While CMS’s main goal with mandatory reporting should be to obtain necessary information to pursue MSP recoveries, CMS could take steps to lessen the burden on NGHPs, without substantially increasing the burden on CMS or its contractors. Communication between CMS and various NGHP stakeholders, including beneficiaries, also needs improvement. Ensuring that these stakeholders have current, complete information so that they can understand the MSP process and policies, and their roles and responsibilities in the process, is essential for ensuring the overall effectiveness of the program. We are making five recommendations to CMS to improve the effectiveness of the MSP program and process for NGHPs. To ensure cost-effectiveness in the agency’s NGHP recovery process, we recommend that the Acting Administrator of CMS review recovery thresholds periodically for appropriateness to ensure that the agency’s recovery efforts are being conducted in the most cost-effective manner possible, and not require NGHPs to report on cases for which the agency will not seek any recovery.
To potentially decrease the administrative burden of mandatory reporting for NGHPs, we recommend that the Acting Administrator of CMS consider making the submission of ICD-9 codes an optional component of reporting for liability NGHPs. To improve the agency’s communication regarding the MSP process for situations involving NGHPs, we recommend that the Acting Administrator of CMS take the following three actions: develop a centralized MSP program website, to include links to information about the various parts of the MSP process; develop guidance regarding liability and no-fault set-aside review; and revise the correspondence with beneficiaries, such as letters sent during the recovery process, to ensure that beneficiary rights and responsibilities are more clearly communicated. We received written comments on a draft of this report from the Department of Health and Human Services on behalf of CMS. These comments are reprinted in appendix I. CMS agreed with our recommendation to review recovery thresholds periodically for appropriateness and our three recommendations to improve the agency’s communication regarding the MSP process for situations involving NGHPs. CMS also agreed to consider our recommendation on potentially making the submission of ICD-9 codes an optional component of reporting for liability NGHPs. However, the agency also noted that about 95 percent of NGHPs reporting data to CMS have provided the required ICD-9 codes, and provided reasons why allowing text descriptions rather than ICD-9 codes could increase the burden on parties such as beneficiaries. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Acting Administrator of CMS, appropriate congressional committees, and other interested parties.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, key contributors to this report were Gerardine Brennan, Assistant Director; Christina Ritchie; and Lisa Rogers. Laurie Pachter, Jessica C. Smith, and Jennifer Whitworth also provided valuable assistance.

The Centers for Medicare & Medicaid Services (CMS) is responsible for protecting Medicare’s fiscal integrity. Medicare Secondary Payer (MSP) situations exist when Medicare is a secondary payer to other insurers, including non-group health plans (NGHP), which include auto or other liability insurance, no-fault insurance, and workers’ compensation plans. CMS attempts to recover Medicare payments made that were the responsibility of NGHPs, but CMS has not always been aware of these MSP situations. In 2007, legislation added mandatory reporting requirements for NGHPs that should enable CMS to be aware of these situations. NGHPs reported concerns about the MSP process, and CMS delayed the start of mandatory reporting by NGHPs, in part because of these concerns. This report examines (1) how the initial implementation of mandatory reporting for NGHPs has affected the workload of and payments to MSP contractors, and Medicare savings, and (2) key challenges within the process for MSP situations involving NGHPs and the steps CMS is taking to address those challenges. GAO reviewed relevant MSP-related documents and data on MSP costs, workload, Medicare savings, and contractor performance. GAO also interviewed CMS officials, MSP contractor officials, and NGHP stakeholders.
During the initial implementation of mandatory reporting for non-group health plans (NGHP), the workloads of, and Centers for Medicare & Medicaid Services (CMS) payments to, Medicare Secondary Payer (MSP) contractors, and Medicare savings, all increased. From 2008 through 2011, the NGHP workloads of all three contractors CMS uses to implement the process for MSP situations (the Coordination of Benefits Contractor (COBC), the Medicare Secondary Payer Recovery Contractor (MSPRC), and the Workers’ Compensation Review Contractor (WCRC)) increased to varying degrees. For example, from 2008 through 2011, the number of NGHP MSP situations voluntarily reported to the COBC increased from about 142,000 to about 392,000, the number of NGHP cases established by the MSPRC increased from about 238,000 to about 480,000, and the number of Medicare set-aside proposals submitted to the WCRC increased from about 20,000 to almost 29,000. From 2008 through 2011, the total CMS payments to the MSP contractors increased by about $21 million, and Medicare savings from known NGHP situations that CMS is able to track, including savings from claims denials and conditional payment recoveries, increased by about $124 million. The total impact of mandatory reporting on Medicare savings could take years to determine for various reasons, including that mandatory reporting is still being phased in. Within the process for MSP situations involving NGHPs, GAO identified key challenges related to contractor performance, demand amounts, aspects of mandatory reporting, and CMS guidance and communication. CMS has addressed or is taking steps to address some, but not all, of these challenges. Contractor performance. Challenges related to the timeliness of the MSPRC and WCRC were identified, including significant increases in the time required to complete important tasks. CMS reported taking steps to address the challenges with each of these contractors’ performance. Demand and recovery issues.
Challenges were identified related to the timing of demand amounts, the cost-effectiveness of recovery efforts, and the amounts of Medicare demands from liability settlements. CMS reported taking steps to address some, but not all, of these challenges. Mandatory reporting. Key challenges were identified with certain aspects of mandatory reporting: determining whether individuals are Medicare beneficiaries, supplying diagnostic codes related to individuals’ injuries, and reporting all liability settlement amounts. CMS reported taking steps to address some, but not all, of these challenges. CMS guidance and communication. Key challenges were identified related to CMS guidance and communication about the MSP process, guidance on Medicare set-aside arrangements, and beneficiary rights and responsibilities. CMS has taken few steps to address these challenges. While CMS has taken, or reported it is in the process of taking, additional steps to address these key challenges, there are several areas related to the MSP program and process that still need improvement. To improve the MSP program, GAO is making recommendations to improve the cost-effectiveness of recovery, decrease the reporting burden for NGHPs, and improve communications with NGHP stakeholders. CMS agreed with these recommendations.
Federal, state, and local government agencies have differing roles with regard to public health emergency preparedness and response. The federal government conducts a variety of activities, including developing interagency response plans, increasing state and local response capabilities, developing and deploying federal response teams, increasing the availability of medical treatments, participating in and sponsoring exercises, planning for victim aid, and providing support in times of disaster and during special events such as the Olympic games. One of its main functions is to provide support for the primary responders at the state and local level, including emergency medical service personnel, public health officials, doctors, and nurses. This support is critical because the burden of response falls initially on state and local emergency response agencies. The President’s proposal transfers control over many of the programs that provide preparedness and response support for the state and local governments to a new Department of Homeland Security. Among other changes, the proposed legislation transfers HHS’s Office of the Assistant Secretary for Public Health Emergency Preparedness to the new department. Included in this transfer is the Office of Emergency Preparedness (OEP), which currently leads the National Disaster Medical System (NDMS) in conjunction with several other agencies and the Metropolitan Medical Response System (MMRS). The Strategic National Stockpile, currently administered by the Centers for Disease Control and Prevention (CDC), would also be transferred, although the Secretary of HHS would still manage the stockpile and continue to determine its contents. The President’s proposal would also transfer the select agent registration enforcement program from HHS to the new department. Currently administered by CDC, the program’s mission is the security of those biologic agents that have the potential for use by terrorists. 
The proposal provides for the new department to consult with appropriate agencies, which would include HHS, in maintaining the select agent list. Under the President’s proposal, the new department would also be responsible for all current HHS public health emergency preparedness activities carried out to assist state and local governments or private organizations to plan, prepare for, prevent, identify, and respond to biological, chemical, radiological, and nuclear events and public health emergencies. Although not specifically named in the proposal, this would include CDC’s Bioterrorism Preparedness and Response program and the Health Resources and Services Administration’s (HRSA) Bioterrorism Hospital Preparedness Program. These programs provide grants to states and cities to develop plans and build capacity for communication, disease surveillance, epidemiology, hospital planning, laboratory analysis, and other basic public health functions. Except as otherwise directed by the President, the Secretary of Homeland Security would carry out these activities through HHS under agreements to be negotiated with the Secretary of HHS. Further, the Secretary of Homeland Security would be authorized to set the priorities for these preparedness and response activities. The new Department of Homeland Security would also be responsible for conducting a national scientific research and development program, including developing national policy and coordinating the federal government’s civilian efforts to counter chemical, biological, radiological, and nuclear weapons or other emerging threats. This would include establishing priorities and directing and supporting national research and development and procurement of technology and systems for detecting, preventing, protecting against, and responding to terrorist acts using chemical, biological, radiological, or nuclear weapons. 
Portions of the Departments of Agriculture, Defense, and Energy that conduct research would be transferred to the new Department of Homeland Security. For example, the Department of Energy’s (DOE) chemical and biological national security research and some of its nuclear smuggling and homeland security activities would be transferred to the new homeland security department. The Department of Homeland Security would carry out civilian health-related biological, biomedical, and infectious disease defense research and development through agreements with HHS, unless otherwise directed by the President. As part of this responsibility, the new department would establish priorities and direction for a program of basic and applied research on the detection, treatment, and prevention of infectious diseases to be conducted by the National Institutes of Health (NIH). The transfer of federal assets and resources in the President’s proposed legislation has the potential to improve coordination of public health preparedness and response activities at the federal, state, and local levels. Our past work has detailed a lack of coordination in the programs that house these activities, which are currently dispersed across numerous federal agencies. In addition, we have discussed the need for an institutionalized responsibility for homeland security in federal statute. We have also testified that one key consideration in evaluating whether individual agencies or programs should be included or excluded from the proposed department is the extent to which homeland security is a major part of the agency or program mission. The President’s proposal provides the potential to consolidate programs, thereby reducing the number of points of contact with which state and local officials have to contend. However, coordination would still be required with multiple agencies across departments. 
Many of the agencies involved in these programs have differing perspectives and priorities, and the proposal does not sufficiently clarify the lines of authority of different parties in the event of an emergency, such as between the Federal Bureau of Investigation (FBI) and public health officials investigating a suspected bioterrorist incident. Let me provide you with more details. We have reported that many state and local officials have expressed concerns about the coordination of federal public health preparedness and response efforts. Officials from state public health agencies and state emergency management agencies have told us that federal programs for improving state and local preparedness are not carefully coordinated or well organized. For example, federal programs managed by the Federal Emergency Management Agency (FEMA), Department of Justice (DOJ), OEP, and CDC all currently provide funds to assist state and local governments. Each program conditions the receipt of funds on the completion of a plan, but officials have told us that the preparation of multiple, generally overlapping plans can be an inefficient process. In addition, state and local officials told us that having so many federal entities involved in preparedness and response has led to confusion, making it difficult for them to identify available federal preparedness resources and effectively partner with the federal government. The proposed transfer of numerous federal response teams and assets to the new department would enhance efficiency and accountability for these activities. This would involve a number of separate federal programs for emergency preparedness and response, whose missions are closely aligned with homeland security, including FEMA; certain units of DOJ; and HHS’s Office of the Assistant Secretary for Public Health Emergency Preparedness, including OEP and its NDMS and MMRS programs, along with the Strategic National Stockpile and the select agent program. 
In our previous work, we found that in spite of numerous efforts to improve coordination of the separate federal programs, problems remained, and we recommended consolidating the FEMA and DOJ programs to improve the coordination. The proposal places these programs under the control of the Under Secretary for Emergency Preparedness and Response, who could potentially reduce overlap and improve coordination. This change would make one individual accountable for these programs and would provide a central source for federal assistance. The proposed transfer of MMRS, a collection of local response systems funded by HHS in metropolitan areas, has the potential to enhance its communication and coordination. Officials from one state told us that their state has MMRSs in multiple cities but there is no mechanism in place to allow communication and coordination among them. Although the proposed department has the potential to facilitate the coordination of this program, this example highlights the need for greater regional coordination, an issue on which the proposal is silent. Because the new department would not include all agencies with public health responsibilities related to homeland security, coordination across departments would still be required for some programs. For example, NDMS functions as a partnership among HHS, the Department of Defense (DOD), the Department of Veterans Affairs (VA), FEMA, state and local governments, and the private sector. However, as the DOD and VA programs are not included in the proposal, only some of these federal organizations would be brought under the umbrella of the Department of Homeland Security. Similarly, the Strategic National Stockpile currently involves multiple agencies. It is administered by CDC, which contracts with VA to purchase and store pharmaceutical and medical supplies that could be used in the event of a terrorist incident. 
Recently expanded and reorganized, the program will now include management of the nation’s inventory of smallpox vaccine. Under the President’s proposal, CDC’s responsibilities for the stockpile would be transferred to the new department, but VA and HHS involvement would be retained, including continuing review by experts of the contents of the stockpile to ensure that emerging threats, advanced technologies, and new countermeasures are adequately considered. Although the proposed department has the potential to improve emergency response functions, its success depends on several factors. In addition to facilitating coordination and maintaining key relationships with other departments, these factors include merging the perspectives of the various programs that would be integrated under the proposal and clarifying the lines of authority of different parties in the event of an emergency. As an example, in the recent anthrax events, local officials complained about differing priorities between the FBI and the public health officials in handling suspicious specimens. According to the public health officials, FBI officials insisted on first informing FBI managers of any test results, which delayed getting test results to treating physicians. The public health officials viewed contacting physicians as the first priority in order to ensure that effective treatment could begin as quickly as possible. The President’s proposal to shift the responsibility for all programs assisting state and local agencies in public health emergency preparedness and response from HHS to the new department raises concern because of the dual-purpose nature of these activities. These programs include essential public health functions that, while important for homeland security, are critical to basic public health core capacities. Therefore, we are concerned about the transfer of control over the programs, including priority setting, that the proposal would give to the new department. 
We recognize the need for coordination of these activities with other homeland security functions, but the President’s proposal is not clear on how the public health and homeland security objectives would be balanced. Under the President’s proposal, responsibility for programs with dual homeland security and public health purposes would be transferred to the new department. These include such current HHS assistance programs as CDC’s Bioterrorism Preparedness and Response program and HRSA’s Bioterrorism Hospital Preparedness Program. Functions funded through these programs are central to investigations of naturally occurring infectious disease outbreaks and to regular public health communications, as well as to identifying and responding to a bioterrorist event. For example, CDC has used funds from these programs to help state and local health agencies build an electronic infrastructure for public health communications to improve the collection and transmission of information related to both bioterrorist incidents and other public health events. Just as with the West Nile virus outbreak in New York City, which initially was feared to be the result of bioterrorism, when an unusual case of disease occurs public health officials must investigate to determine whether it is naturally occurring or intentionally caused. Although the origin of the disease may not be clear at the outset, the same public health resources are needed to investigate, regardless of the source. States are planning to use funds from these assistance programs to build the dual-purpose public health infrastructure and core capacities that the recently enacted Public Health Security and Bioterrorism Preparedness and Response Act of 2002 stated are needed. States plan to expand laboratory capacity, enhance their ability to conduct infectious disease surveillance and epidemiological investigations, improve communication among public health agencies, and develop plans for communicating with the public. 
States also plan to use these funds to hire and train additional staff in many of these areas, including epidemiology. Our concern regarding these dual-purpose programs relates to the structure provided for in the President’s proposal. The Secretary of Homeland Security would be given control over programs to be carried out by HHS. The proposal also authorizes the President to direct that these programs no longer be carried out through agreements with HHS, without addressing the circumstances under which such authority would be exercised. We are concerned that this approach may disrupt the synergy that exists in these dual-purpose programs. We are also concerned that the separation of control over the programs from their operations could lead to difficulty in balancing priorities. Although the HHS programs are important for homeland security, they are just as important to the day-to-day needs of public health agencies and hospitals, such as reporting on disease outbreaks and providing alerts to the medical community. The current proposal does not clearly provide a structure that ensures that the goals of both homeland security and public health will be met. The proposed Department of Homeland Security would be tasked with developing national policy for and coordinating the federal government’s civilian research and development efforts to counter chemical, biological, radiological, and nuclear threats. In addition to coordination, we believe the role of the new department should include forging collaborative relationships with programs at all levels of government and developing a strategic plan for research and development. However, we have many of the same concerns regarding the transfer of responsibility for the research and development programs that we have regarding the transfer of the public health preparedness programs. We are concerned about the implications of the proposed transfer of control and priority setting for dual-purpose research. 
For example, some research programs have broad missions that are not easily separated into homeland security research and research for other purposes. We are concerned that such dual-purpose research activities may lose the synergy of their current placement in programs. In addition, we see a potential for duplication of capacity that already exists in the federal laboratories. We have previously reported that while federal research and development programs are coordinated in a variety of ways, coordination is limited, raising the potential for duplication of efforts among federal agencies. Coordination is limited by the extent of compartmentalization of efforts because of the sensitivity of the research and development programs, security classification of research, and the absence of a single coordinating entity to ensure against duplication. For example, DOD’s Defense Advanced Research Projects Agency was unaware of U.S. Coast Guard plans to develop methods to detect biological agents on infected cruise ships and, therefore, was unable to share information on its research to develop biological detection devices for buildings that could have applicability in this area. The new department will need to develop mechanisms to coordinate and integrate information on research and development being performed across the government related to chemical, biological, radiological, and nuclear terrorism, as well as user needs. We reported in 1999 and again in 2001 that the current formal and informal research and development coordination mechanisms may not ensure that potential overlaps, gaps, and opportunities for collaboration are addressed. It should be noted, however, that the legislation tasks the new department with coordinating the federal government’s “civilian efforts” only. 
We believe the new department will also need to coordinate with DOD and the intelligence agencies that conduct research and development efforts designed to detect and respond to weapons of mass destruction. In addition, the first responders and local governments possess practical knowledge about their technological needs and relevant design limitations that should be taken into account in federal efforts to provide new equipment, such as protective gear and sensor systems, and help set standards for performance and interoperability. Therefore, the new department will have to develop collaborative relationships with these organizations to facilitate technological improvements and encourage cooperative behavior. The President’s proposal could help improve coordination of federal research and development by giving one person the responsibility for creating a single national research and development strategy that could address coordination, reduce potential duplication, and ensure that important issues are addressed. In 2001, we recommended the creation of a unified strategy to reduce duplication and leverage resources, and suggested that the plan be coordinated with federal agencies performing research as well as state and local authorities. The development of such a plan would help to ensure that research gaps are filled, unproductive duplication is minimized, and that individual agency plans are consistent with the overall goals. The proposal would transfer parts of DOE’s nonproliferation and verification research and development program to the new department, including research on systems to improve the nation’s capability to prepare for and respond to chemical and biological attacks. However, the legislation is not clear whether the programmatic management and dollars only would move or the scientists carrying out the research would also move to the new department. 
Because the research is carried out by multiprogram laboratories that employ scientists skilled in many disciplines who serve many different missions and whose research benefits from their interactions with colleagues within the laboratory, it may not be prudent to move the scientists who are doing the research. One option would be for the new department, rather than moving the scientists, to contract with DOE’s national laboratories to conduct the research. The President’s proposal would also transfer the responsibility for civilian health-related biological defense research and development programs to the new department, but the programs would continue to be carried out through HHS. These programs, now primarily sponsored by NIH, include a variety of efforts to understand basic biological mechanisms of infection and to develop and test rapid diagnostic tools, vaccines, and antibacterial and antiviral drugs. These efforts have dual-purpose applicability. The scientific research on biologic agents that could be used by terrorists cannot be readily separated from research on emerging infectious diseases. For example, an NIH-funded drug for treating cytomegalovirus complications in patients with HIV is now being investigated as a prototype for developing antiviral drugs against smallpox. Conversely, research being carried out on antiviral drugs in the NIH biodefense research program is expected to be useful in the development of treatments for hepatitis C. The proposal to transfer responsibility to the new department for research and development programs that would continue to be carried out by HHS raises many of the same concerns we have with the structure the proposal creates for public health preparedness programs. 
Although there is a clear need for the new department to have responsibility for setting policy, developing a strategy, providing leadership, and overall coordination of research and development efforts in these areas, we are concerned that control and priority-setting responsibility will not be vested in those programs best positioned to understand the potential of basic research efforts or the relevance of research being carried out in other, non-biodefense programs. In addition, the proposal would allow the new department to direct, fund, and conduct research related to chemical, biological, radiological, nuclear, and other emerging threats on its own. This raises the potential for duplication of efforts, lack of efficiency, and an increased need for coordination with other departments that would continue to carry out relevant research. We are concerned that the proposal could result in a duplication of capacity that already exists in the current federal laboratories. Many aspects of the proposed consolidation of response activities are in line with our previous recommendations to consolidate programs, coordinate functions, and provide a statutory basis for leadership of homeland security. The transfer of the HHS medical response programs has the potential to reduce overlap among programs and facilitate response in times of disaster. However, we are concerned that the proposal does not provide the clear delineation of roles and responsibilities that is needed. We are also concerned about the broad control the proposal grants to the new department for research and development and public health preparedness programs. Although there is a need to coordinate these activities with the other homeland security preparedness and response programs that would be brought into the new department, there is also a need to maintain the priorities for basic public health capacities that are currently funded through these dual-purpose programs. 
We do not believe that the President’s proposal adequately addresses how to accomplish both objectives. We are also concerned that the proposal would transfer control and priority setting over dual-purpose research and has the potential to create an unnecessary duplication of federal research capacity. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information about this testimony, please contact Janet Heinrich at (202) 512-7118. Gene Aloise, Robert Copeland, Marcia Crosse, Greg Ferrante, Gary Jones, Deborah Miller, Roseanne Price, and Keith Rhodes also made key contributions to this statement.

Related GAO Products

Homeland Security: Proposal for Cabinet Agency Has Merit, but Implementation Will Be Pivotal to Success. GAO-02-886T. Washington, D.C.: June 25, 2002.
Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002.
Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002.
Homeland Security: Responsibility and Accountability for Achieving National Goals. GAO-02-627T. Washington, D.C.: April 11, 2002.
Homeland Security: Progress Made; More Direction and Partnership Sought. GAO-02-490T. Washington, D.C.: March 12, 2002.
Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001.
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001.
Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001.
Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001.
Homeland Security: A Framework for Addressing the Nation’s Efforts. GAO-01-1158T. Washington, D.C.: September 21, 2001.
Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001.
Bioterrorism: Review of Public Health Preparedness Programs. GAO-02-149T. Washington, D.C.: October 10, 2001.
Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 9, 2001.
Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001.
Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001.
Chemical and Biological Defense: Improved Risk Assessment and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001.
West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000.
Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999.
Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999.
National Preparedness: Technologies to Secure Federal Buildings. GAO-02-687T. Washington, D.C.: April 25, 2002.
National Preparedness: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security. GAO-02-621T. Washington, D.C.: April 11, 2002.
Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness. GAO-02-550T. Washington, D.C.: April 2, 2002.
Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002.
Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. Washington, D.C.: March 25, 2002.
Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness. GAO-02-547T. Washington, D.C.: March 22, 2002.
Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002.
Chemical and Biological Defense: DOD Should Clarify Expectations for Medical Readiness. GAO-02-219T. Washington, D.C.: November 7, 2001.
Anthrax Vaccine: Changes to the Manufacturing Process. GAO-02-181T. Washington, D.C.: October 23, 2001.
Chemical and Biological Defense: DOD Needs to Clarify Expectations for Medical Readiness. GAO-02-38. Washington, D.C.: October 19, 2001.
Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-02-162T. Washington, D.C.: October 17, 2001.
Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001.
Combating Terrorism: Actions Needed to Improve DOD Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001.
Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Terrorism Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001.
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement. GAO-01-666T. Washington, D.C.: May 1, 2001.
Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001.
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement. GAO-01-463. Washington, D.C.: March 30, 2001.
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001.
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001.
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000.
Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000.
Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed. GAO/T-HEHS/AIMD-00-59. Washington, D.C.: March 8, 2000.
Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed. GAO/HEHS/AIMD-00-36. Washington, D.C.: October 29, 1999.
Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999.
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 14, 1999.
Chemical and Biological Defense: Coordination of Nonmedical Chemical and Biological R&D Programs. GAO/NSIAD-99-160. Washington, D.C.: August 16, 1999.
Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/T-NSIAD-99-184. Washington, D.C.: June 23, 1999.
Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999.
Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999.
Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999.
Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999.
Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998.
Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998.
Combating Terrorism: Observations on Crosscutting Issues. GAO/T-NSIAD-98-164. Washington, D.C.: April 23, 1998.
Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998.
Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997.
Disaster Assistance: Improvement Needed in Disaster Declaration Criteria and Eligibility Assurance Procedures. GAO-01-837. Washington, D.C.: August 31, 2001.
Chemical Weapons: FEMA and Army Must Be Proactive in Preparing States for Emergencies. GAO-01-850. Washington, D.C.: August 13, 2001.
Federal Emergency Management Agency: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-832. Washington, D.C.: July 9, 2001.
Budget Issues: Long-Term Fiscal Challenges. GAO-02-467T. Washington, D.C.: February 27, 2002.
Results-Oriented Budget Practices in Federal Agencies. GAO-01-1084SP. Washington, D.C.: August 2001.
Managing for Results: Federal Managers’ Views on Key Management Issues Vary Widely Across Agencies. GAO-01-592. Washington, D.C.: May 25, 2001.

Summary

Since the terrorist attacks on September 11, 2001, and the subsequent anthrax incidents, there has been concern about the ability of the federal government to prepare for and coordinate an effective public health response given the broad distribution of responsibility for that task at the federal level. More than 20 federal departments and agencies carry some responsibility for bioterrorism preparedness and response. The President's proposed Homeland Security Act of 2002 would bring many of these federal entities with homeland security responsibilities--including public health preparedness and response--into one department to mobilize and focus assets and resources at all levels of government.

The proposed reorganization has the potential to assist in the coordination of public health preparedness and response programs at the federal, state, and local levels. There are concerns, however, about the proposed transfer of control of public health assistance programs that have both basic public health and homeland security functions from Health and Human Services to the new department. Transferring control over these programs, including priority setting, to the new department has the potential to disrupt some programs critical to basic public health responsibilities. The President's proposal is unclear on how both the homeland security and the public health objectives would be accomplished.
Between 1968 and 1988, the federal government required states to operate various programs designed to help AFDC recipients get jobs. However, these programs were criticized because they served too few AFDC recipients, focused on the most employable, and did little to reduce welfare dependence. Dissatisfied with the welfare system, the Congress enacted FSA in 1988 to correct previous programs’ weaknesses and transform AFDC into a transitional program. FSA established the JOBS program to help welfare recipients get the services they need to get jobs and avoid long-term welfare dependence. Through JOBS, states are to (1) provide a broad range of education, training, and employment-related activities; (2) increase the number of AFDC recipients participating in these activities; and (3) target resources to long-term and potentially long-term recipients. JOBS also emphasizes helping teen parents complete their high school education. In addition, states are required to provide AFDC recipients with necessary support services, such as child care and transportation. Under JOBS, welfare agencies are to assess the needs and skills of welfare recipients, provide the services and activities needed to prepare them for work, and link them with employers when they are considered ready to work. To provide these services and activities, local JOBS programs rely heavily on a wide variety of community programs, such as Job Training Partnership Act (JTPA) programs, state or local adult basic education programs, the state employment service, Head Start, and community colleges. States received much flexibility in designing and implementing their programs. Most states moved quickly to implement JOBS; by October 1990, 31 had statewide programs. All had statewide programs by October 1992. About $1 billion in federal funds is available per year for JOBS, and, to obtain these resources, states must commit matching funds. 
In fiscal year 1993, federal and state expenditures totaled $1.1 billion for JOBS. No national studies have been completed on the impact of JOBS, but the limited data available suggest that programs can have a positive, but generally modest, impact. Recent experimental design evaluations found that certain JOBS and JOBS-like programs increased the number of AFDC recipients entering employment, raised earnings, and reduced welfare rolls. Some programs succeeded more than others, but none was able to move most program participants both into jobs and off AFDC after 3 years. Under some welfare reform proposals, many AFDC recipients would be expected to leave AFDC after 2 years. HHS is currently sponsoring a seven-site national evaluation designed to determine the effectiveness of different approaches to operating JOBS. Early results find variation and diversity among the seven sites studied and suggest that the potential for an improved program exists. However, it is not yet known whether the approaches identified in the more noteworthy programs can be implemented nationwide. Despite the progress of some programs in implementing JOBS, additional factors since the 1988 passage of FSA have led to continued dissatisfaction with the current welfare system. AFDC caseloads rose sharply beginning in 1989. The country also experienced a recession that heightened competition for scarce resources in both federal and state budgets. In addition, the public perceives AFDC as a growing problem and a permanent entitlement rather than a route to work. During his campaign, President Clinton promised to “end welfare as we know it,” a promise the public widely supported. Many Members of Congress and others have also proposed reforms, and some states have initiated their own reform efforts. Most of the congressional reform proposals we reviewed retain the basic principles of FSA and build on JOBS to help parents get jobs and end dependence. 
Most of the proposals aim to require larger portions of the AFDC population to participate in JOBS. To accomplish this, larger proportions of the overall population would be required to participate over time (in some cases up to 90 percent of those deemed able to work) or entire segments of the AFDC population, such as young adults, would be required to participate. Some proposals would require participation of mothers with younger children. Under current requirements, mothers with children under 3 years old (or 1 year at state option) need not be required to participate in JOBS, while some reform proposals would lower this age to 6 months or as low as 3 to 4 months for additional children. Many reform proposals would also increase the focus on employment as the ultimate program goal. Some would no longer require states to offer education and training activities. In addition, some contain provisions to impose time limits on receipt of AFDC benefits. If, after a set time, usually 2 years, AFDC recipients have not found a job, they would be required to participate in a subsidized work program. Some proposals would also limit the time a participant could spend in the work program. While JOBS was designed to make welfare transitional by serving an increasing portion of AFDC recipients and reaching out to those at risk of long welfare stays, its progress has been limited. To date, JOBS has not served a large share of the AFDC caseload, and program administrators report that they cannot provide current participants with all the services and assistance they need. In addition, although JOBS has made progress in serving those at risk of long-term dependence, some AFDC recipients who have barriers to employment have not been widely served. Proposals to reform the program will be challenged to balance increased participation with the need for additional resources and the need to develop additional capacity over time. 
Under FSA, the Congress took steps to involve an increasing portion of AFDC recipients in welfare-to-work programs through its new JOBS program, but the proportion of AFDC recipients active in JOBS has not been growing. FSA expanded the base of AFDC recipients required to participate. For the first time since AFDC (originally Aid to Dependent Children) began in 1935, recipients with preschool children were required to prepare for and accept employment to receive full benefits. Even with this expanded base, about 56 percent of AFDC parents in fiscal year 1992 remained exempt from JOBS, most often because they were caring for a young child. In addition, FSA recognized that states may not be able to serve all who were required to participate at the program’s inception. It established gradually increasing minimum participation standards that tried to go beyond counting the participants and ensure satisfactory participation in JOBS. These minimum participation standards rose from 7 percent of those required to participate in fiscal year 1991 to 20 percent in fiscal year 1995. According to HHS, almost all states met the minimum participation standards in fiscal years 1991 through 1993. However, we concluded in a 1993 report that participation rate data reporting requirements were complex and burdensome, and participation rate data did not provide a fair basis for assessing states’ performance because they were not accurate or comparably derived across states. Due to exemptions, relatively low minimum participation standards, and AFDC caseload growth, the share of AFDC recipients active in JOBS remains limited and has not been increasing. As shown in figure 1, the numbers of those receiving AFDC and those required to participate in JOBS have increased from fiscal year 1991 through fiscal year 1993. Also, the number actually participating in JOBS at any level of involvement in an average month increased by 5 percent during this period. 
However, the share of AFDC recipients participating at any level has remained at about 11 percent of the total AFDC caseload over the same period of time. Although some individual programs have succeeded in enrolling most of their AFDC recipients who were required to participate, JOBS programs overall served only about one-fourth of those required to participate in an average month in fiscal year 1993. States met the minimum participation standards without committing enough matching funds to spend all federal moneys available for JOBS. As shown in table 1, about 57 percent of the federal allocation for JOBS was used in 1991, and use increased to 70 percent in 1993. Some argue that the states did not fully use available federal funds because of state fiscal pressures and competing demands for scarce resources due to the recent economic recession. While recent data suggest the states’ financial position has improved, even if the states drew down all available federal funds, this amount would not be sufficient to serve all AFDC recipients or even all those required to participate in JOBS. Some experts believe that the welfare culture will not change until recipients believe they must participate and accept employment and that this will not occur until a larger segment of the AFDC population is actually required to participate. While JOBS programs serve only a portion of the AFDC caseload, many program administrators reported that they could not always provide those participating with the services they need. In our mid-1994 survey of a nationally representative sample of county JOBS administrators, many administrators reported unmet needs in key program activities, such as basic education and job skills training. Administrators often cited transportation problems for participants, the need for more JOBS staff to serve participants, and, to a lesser extent, the lack of community resources and child care funding as reasons they could not meet participants’ service needs. 
FSA directed JOBS programs to draw on resources in the community before spending JOBS funds to pay for services or programs for JOBS participants. In many cases, local JOBS administrators reported that they now must reimburse programs for some or all of the services provided to JOBS participants. Under FSA, the Congress acknowledged the importance of child care to help welfare recipients get jobs and leave and stay off welfare. States are required to provide child care to AFDC recipients participating in JOBS, if they need it, and a year of transitional child care to recipients who leave AFDC because of employment. Federal funds for this child care are uncapped, and states must provide matching funds to acquire them. If a state cannot provide child care, it cannot require the AFDC recipient to participate in JOBS. Therefore, a shortage of state funds for child care can limit the number of AFDC recipients participating in JOBS. Not all JOBS participants receive child care assistance, but spending for child care assistance has been growing. In fiscal year 1992, less than 22 percent of JOBS participants received AFDC child care financial assistance, and federal and state expenditures for AFDC and transitional child care totaled about $755 million. From 1991 through 1993, spending for AFDC child care grew faster than spending for JOBS, as shown in figure 2. FSA child care requirements may have an unintended effect on the availability of child care for the working poor who do not receive AFDC. Several funding sources are intended to serve low-income families in general, but only current or recent AFDC families are guaranteed child care. We recently reported that some states have been shifting resources from poor working families toward entitled AFDC and JOBS recipients. This can place working poor families at greater risk of becoming dependent on welfare. 
FSA targeted for assistance certain AFDC recipients who were at risk of remaining on welfare for long periods, and states have generally met the requirements to serve these recipients. However, certain recipients who need help to avoid long-term welfare dependence, such as teenage parents, are not being widely served and may be more difficult or costly to serve. The Congress recognized that some recipients depend on AFDC for longer periods and may need extra help to achieve employment and self-sufficiency, and it targeted these recipients for JOBS benefits. While research shows that many of those who use AFDC do so for relatively short periods of time, most of those enrolled in AFDC at a point in time are in the midst of what will be a long period of welfare receipt and receive a large share of the AFDC benefits. Some of these families stay on welfare continuously for long periods, but many leave AFDC only to return in a few years. These long-term and cyclical recipients may have barriers to employment, such as low education and literacy levels and a lack of skills and work experience. Many have other, less tangible, barriers to self-sufficiency, such as low self-esteem, limited life skills, or low motivation. These recipients are less likely to find employment on their own and may require more services to prepare for employment; therefore, targeting them could result in greater long-term benefits and savings. States have responded positively to JOBS’ emphasis on targeting services to long-term and potential long-term AFDC recipients. In 1991, we reported that states had shifted their stated priorities from serving those considered ready for employment to those who generally have barriers to employment. In fiscal years 1991 and 1992, more than half of JOBS participants were members of the target groups defined by FSA, and some programs have shown the potential to succeed with long-term recipients.
However, some JOBS programs have met the mandates to serve target group members while also serving mostly volunteers, who may be more motivated or easier to serve than non-volunteers. In our 1994 national survey of county JOBS administrators, about half reported giving priority to recipients who are highly motivated. Our recent work lends support for JOBS’ special emphasis on teen parents, but a large majority of this at-risk group is not involved in welfare-to-work activities. In two reports issued in May 1994, we noted that a focus on helping teen mothers avoid long-term welfare dependence is important because their low levels of education and work experience and the young age of their children increase the likelihood of long-term welfare dependence. Yet, in a 1992 review of 16 states containing most of the nation’s AFDC teen mothers, we found that, overall, only 24 percent of them had been enrolled in JOBS. Our work and other recent evaluations highlight that teen mothers are a heterogeneous group with many complex problems; yet limited evidence exists about what works to help them gain self-sufficiency. In our 1992 review, we found that teen parents who received enriched services, such as educational alternatives to mainstream public high school, life skills training, or parenting classes, were more likely to complete high school or its equivalent than those not provided such services in the 16 states surveyed. An Ohio program that requires AFDC teen mothers to complete high school or its equivalent, or offers similar types of comprehensive or intensive services to teen mothers, also had some success in helping teen parents complete their education, especially teens who had not yet dropped out of school. However, the extent to which these types of programs can further help teen parents get jobs and leave AFDC in the long-run is not yet known. 
One recent study found that a comprehensive program designed to help young mothers who have dropped out of high school had not increased employment, reduced welfare receipt, or delayed additional pregnancies after 18 months, although these effects may appear over a longer period of time. Various sources indicate that problems such as substance abuse, learning disabilities, emotional problems, and domestic violence are not uncommon among adult welfare recipients. If left unaddressed, these problems can interfere with a recipient’s ability to get or keep a job and may result in long-term welfare dependence. The extent of these problems is generally unknown, and few accepted national estimates are available. For example, recent estimates of the proportion of adult AFDC recipients who abuse drugs or alcohol to the extent that they would need treatment to participate in a JOBS-like program vary from 15 to 28 percent. In 1987, we reported that some past welfare-to-work programs screened out people thought to be difficult or expensive to serve, and our recent work suggests that this may still be true. When we surveyed 51 state administrators in 1994, some reported a reluctance to serve such recipients. Over one-fourth acknowledged they intentionally deferred the hard to serve or selected those who may be easier to serve. Administrators said that JOBS regulations do not provide incentives to serve such recipients and that, when they do serve them, it takes longer to prepare them for work. In addition, they cited a lack of funding and insufficient special services in the community to meet these participants’ special needs. Although most welfare reform proposals would require JOBS to serve a much larger portion of the AFDC caseload, to do so, JOBS programs would have to overcome challenges concerning program capacity and recipients’ characteristics.
If the number of AFDC recipients participating in JOBS expanded significantly, the larger group could have different needs than current JOBS participants and may be harder or more costly to serve. Members of the larger group also may be less willing to participate or have barriers significant enough to interfere with their ability to participate unless they receive extra support and services. In some cases, as with some teen parents, the research has failed to point clearly to what types of services most effectively address these complex problems. In addition, if assistance is time limited, some recipients may reach the time limit before they have completed their work preparation activities. Program administrators told us that they are concerned about their capacity to expand their programs dramatically, given the limitations on current capacity and the likelihood that some of the new recipients may be harder or more costly to serve. The effect of requirements to serve more recipients depends, in part, on the resources provided and the flexibility afforded the states in designing their programs. Currently, programs must make difficult decisions about how best to use their resources. Some choose to cover larger segments of the AFDC population by providing few or less costly services to many recipients, while others emphasize more intensive and expensive services to a smaller number of people. Requirements to serve more recipients without commensurate increases in funding could result in less assistance per recipient at a time when programs may be reaching out to their harder to serve, more costly recipients. An adequate supply of community services may not be readily available to meet the needs of additional participants. Also, the cost per participant could rise as programs fully use community resources available to them at no charge and have to pay for local services.
Even with additional funds, some administrators questioned whether they could meet the needs of all participants, given limitations such as transportation, staff, and community resources. Serving more participants can also increase costs and pressures on service delivery systems associated with JOBS. More participants would require additional child care funding at both the state and federal level to guarantee child care to more recipients. This in turn could further reduce child care subsidies for poor working families, placing them at greater risk of going on or returning to welfare. In addition, some proposals would require participation by mothers with children as young as 12 months, and even younger for subsequent children. Such changes would increase demand for child care for infants and toddlers, which can be more expensive and generally is less available. Finally, JOBS participants use many services that are intended for use by the community at large, such as JTPA and adult basic education. Significant increases in JOBS participants could limit access to these services for the non-AFDC working poor. The current JOBS program is not well focused on the ultimate goal of employment. Local programs have not developed the strong links to employers that may help welfare recipients get jobs. We believe that this may be explained, in part, by the current JOBS performance measurement system. Because the system is based on participation in program activities and not on employment outcomes, states have had no direct incentive to move clients into jobs. Under proposed reforms, JOBS will need to focus more on employment. However, even with an increased focus on employment, factors external to JOBS may limit the program’s ability to ensure that participants get and keep jobs. Most local JOBS programs nationwide have not forged the strong links with employers that may help get jobs for their participants.
Preliminary results of our work indicate that JOBS programs do not fully use the tools provided under FSA to move JOBS participants into jobs or provide work opportunities despite some evidence that these tools can promote employment or create work opportunities. Factors both within and beyond the control of JOBS programs hamper the use of these tools. JOBS makes available a range of tools to help welfare recipients get jobs. In addition to preparing AFDC recipients for employment through education and training, JOBS programs are required to help place job-ready recipients in jobs. While the job ready are expected to engage in job search activities, JOBS programs must conduct job development activities, including identifying job openings, marketing clients to employers, and arranging interviews for clients. Also, JOBS allows programs to provide temporary financial incentives to employers that hire and train JOBS participants through on-the-job training and work supplementation. And, programs may place participants with governmental and nonprofit organizations to gain work experience while participants continue to receive their AFDC grant. Under work experience programs, employers are not reimbursed and the participant is not considered a regular paid employee. These workplace-centered tools can make a difference in promoting employment or creating meaningful work opportunities for welfare recipients. Rigorous evaluations of JOBS and similar programs have identified job development as a potentially important factor in effective programs. In addition, on-the-job training and work supplementation, a mechanism that supports on-the-job training, have been used to move disadvantaged individuals into employment. While studies have shown that work experience activities do not increase employment or earnings or reduce welfare receipt, they do provide welfare recipients with the opportunity to work productively in the community. 
These workplace-centered tools can be used by varied JOBS programs, including ones that focus on immediate job placement as well as ones that emphasize longer term education and training. Although job development has been identified as a potentially important tool for moving JOBS participants into employment, the amount of job development performed nationally does not meet the needs of current JOBS participants looking for work. In mid-1994, almost all of the nation’s counties used job search in their programs, with a median of 10 percent of their participants involved. However, almost 60 percent of a nationally representative sample of county JOBS administrators we surveyed responded that they could market to employers or arrange on-site interviews for only some or few of their job-ready participants. In addition, about half said they worked only sometimes or rarely with private-sector employers to identify or create jobs for participants. Finally, more than half of the local administrators reported that, in their opinion, they did not do enough job development to meet their clients’ needs. Workplace-centered activities play a small role in JOBS and involve very few participants, generally those who are carefully chosen to attract and maintain the interest of employers. In mid-1994, about one-quarter of the nation’s counties had JOBS participants in on-the-job training, and about 8 percent had participants in work supplementation. These counties generally placed less than 1 percent of their JOBS participants in these activities. While almost all (91 percent) of the counties enrolled some participants in work experience, the actual portion of participants involved was small, with a median of 9 percent. Administrators we spoke with emphasized that screening for and selecting able, motivated participants to place with employers is important for recruiting employers and maintaining their interest in participating in the programs.
Insufficient staff, certain federal requirements, labor market conditions, and overall federal program design hamper use or expansion of these tools. County JOBS administrators reported that they want to implement or expand the use of these tools in their JOBS programs but most often cited an insufficient number of staff to develop and administer these activities as a major hindrance to their initiation or expansion. Many administrators also noted that a federal requirement restricting work supplementation slots to employers’ newly created positions limits their abilities to recruit employers. In addition, at least 40 percent of the administrators believed current labor market conditions were a moderate or major hindrance. In 1993, unemployment rates reached 8 percent or more in one-third of the nation’s counties; employment growth was 1.5 percent or less in half of the nation’s counties and negative in one-third of the counties. We also believe that the limited focus on employment in JOBS as currently administered at the federal level does not promote implementing these activities. Because program administrators can meet all federal program requirements without redirecting scarce resources to job development, on-the-job training, work supplementation, or work experience, they have little incentive to do so. HHS officials we interviewed at headquarters and in the 10 regional offices noted the limited use of job development and other tools available to help participants get jobs or provide work opportunities. Some believed that JOBS programs often placed a lower priority on job development activities than on meeting participation requirements. To encourage job development, HHS has provided workshops and training. It has also provided on-site technical assistance at 10 sites across the country and plans to provide such assistance at 4 more sites. 
In addition, HHS officials noted that activities such as on-the-job training, work supplementation, and work experience are difficult to develop and administer. HHS has periodically provided guidance on the use of these tools and maintains a databank on promising JOBS practices that includes information on them. However, decisions on emphasizing job development and the other activities are left up to the states. The performance measurement system for JOBS provides little incentive for states to focus on moving clients into jobs. As mandated by the FSA, states are held accountable for the number and type of participants enrolled in activities, such as education and training. States can lose a portion of their federal funding if they fail to meet participation standards. As a result, JOBS programs may focus more on getting clients into program activities than off AFDC and into jobs. FSA specified minimum program participation standards and required the Secretary of HHS to develop and submit recommendations for outcome-related JOBS performance standards to the Congress by October 1993. This requirement was amended in late October of this year to require HHS—instead of developing performance standards at this time—to develop criteria for such standards no later than October 1, 1994. HHS submitted a report on the problems identified in developing a performance measurement system and a detailed plan and schedule for developing outcome-based measures and standards to the Congress on September 30, 1994. In its report, HHS stated that it plans to finalize performance measures for JOBS by October 1, 1996, and standards by October 1, 1998. 
HHS officials said they have proceeded cautiously in developing outcome-related performance standards, in part due to concerns that (1) setting certain standards might result in unintended program decisions, such as focusing on the most job-ready individuals to generate more favorable outcomes, and (2) outcome measures are not consistently related to program effectiveness. However, HHS officials said they are pursuing an outcome-focused performance measurement system in the context of welfare reform legislation and as a separate initiative. HHS currently focuses its JOBS data collection on measures of participation, rather than outcomes. HHS collects information on the numbers of program participants, expenditures on target group members, and the activities in which individuals are participating on a monthly basis. While HHS collects some outcome-related data, it does not track the total number of individual JOBS participants who get jobs or leave AFDC. In addition, HHS does not gather data on the extent to which JOBS program participants retain the jobs they get, the extent to which clients who leave AFDC for work return to the rolls, and whether teen parents are completing high school and subsequently getting jobs. As HHS recognizes, establishing outcome standards for JOBS that motivate states to get more participants employed, without creating unintended negative program effects, will be difficult and must be approached carefully. However, even without establishing standards, data gathered on the outcomes of JOBS participants can help monitor the progress of participants and assess the status of program operations. Today, little nationwide outcome data are gathered, although many states have been independently collecting these data. In a 1994 GAO survey, the majority of state JOBS administrators told us they believe HHS has not sufficiently shifted the program’s focus to outcome-based performance measurement. 
The current system’s limited focus on employment and its weak workplace links raise important issues for reform. Although strong links to employers appear to be important elements in the more noteworthy JOBS programs, many factors impede these links. Expanding JOBS as it currently operates will not guarantee improved links with the workplace; JOBS must better focus its efforts on employment. In addition, these weak links to the workplace pose challenges for proposed reforms requiring those unable to find work after a set time limit to enter subsidized work programs. As evidenced by the minimal use of existing workplace-centered activities, the infrastructure to support these extensive work programs is limited. Time and resources would be needed to carefully develop these programs, which could divert program attention away from helping JOBS participants develop their skills and find employment before they reach their time limit. Moreover, existing programs appear to place the better prepared participants in workplace-centered activities to recruit and maintain employer interest. Under reform proposals, these activities often may be offered as a last resort for those who have not yet found employment. Programs will need to ensure that these participants are adequately prepared for work since employer interest and cooperation will be crucial under reform. If JOBS can serve more participants and focus more on employment, the number of participants getting jobs and leaving welfare could increase; however, these efforts are unlikely to end the need for welfare due to factors outside the control of JOBS programs. A recent evaluation of one of the most noteworthy programs evaluated to date—one that serves a large portion of its local AFDC recipients and has a strong focus on employment—found that, after 3 years, 23 percent of the participants were both working and off AFDC. JOBS program results to date may be due, in part, to conditions external to JOBS. 
These may include the lack of available jobs in some locations, the volatility of the low-wage labor market, the lack of strong financial incentives to seek and keep employment, and a lack of health care coverage, child care, or transportation. In addition, for those who find jobs, their earnings may often be too low to allow them to leave welfare permanently and escape poverty. It will be difficult for JOBS to serve significantly larger numbers of AFDC recipients and help them gain employment and leave AFDC, given current conditions. Although the challenge of serving recipients with multiple barriers to employment, combined with limitations caused by factors outside the program, suggests that JOBS alone will not end the need for welfare, JOBS has shown promise in helping some AFDC recipients get jobs and leave welfare. Our work addresses a number of issues that will confront the Congress as it considers reforming welfare and asking more of JOBS. We are not making recommendations at this time but will be addressing each of these issues in more depth as we complete our ongoing work. The issues include the small portion of the AFDC caseload currently served under JOBS and the limits on programs’ ability to provide needed services; the unknown number of AFDC recipients who have multiple barriers to employment, are at risk of long welfare stays, and may not be widely served under the current program; JOBS’ underutilization of the tools available to link participants to employers and the lack of a basic foundation for building subsidized work programs; and JOBS’ lack of a performance measurement system that encourages states to focus on employment as the ultimate program goal. In commenting on a draft of this report, HHS disagreed with our conclusion that JOBS programs are generally not well focused on employment.
While we agree with HHS that programs may choose many routes, including education and training, to help participants obtain employment, JOBS programs are required to take steps to help participants find jobs when they are considered ready for work. Yet, as noted in the report, a majority of JOBS program administrators nationwide stated that they do not do enough to help these participants find jobs. We believe that efforts to place participants in jobs are as important as efforts to prepare them for work. In addition, we believe it is important to point out that JOBS’ current performance measurement system is not focused on job placement. HHS also believed we did not include sufficient information on the progress it and the states had made in implementing JOBS and that the report tone was too negative. We disagree. Our report clearly recognizes that states and HHS have made some progress in implementing JOBS and that some programs have achieved noteworthy results. We also recognize that programs have made progress in serving those at risk of long welfare stays and that JOBS holds potential as a means to help AFDC recipients get jobs and leave AFDC. However, while some progress has been made, we believe that the issues we identified in the report are common among JOBS programs nationwide and are ones that the Congress will confront as it considers welfare reform. These issues include the small portion of the AFDC caseload served, the lack of focus on employment as the goal, and the challenge of serving the hard to serve. HHS also raised other concerns (also see app. II) and provided technical comments that we have addressed in the text of the report as appropriate. Our work was conducted from April 1993 to November 1994 in accordance with generally accepted government auditing standards. 
We are sending copies of this report to the Chairman, Subcommittee on Human Resources, House Committee on Ways and Means; the Secretary of Health and Human Services; and other interested parties. Copies will also be made available to others on request. If you have any questions concerning this report or need additional information, please call me at (202) 512-7215. Kay Brown, Gale Harris, and Stephen Secrist contributed to this report. Our objectives for this report were to assess the progress that states and HHS have made in (1) serving more AFDC recipients in JOBS and (2) using JOBS to help AFDC recipients get jobs and end dependence. We also sought to assess the implications of that progress for welfare reform proposals. To accomplish our objectives, we relied on previously released GAO reports, other published research on the JOBS program, and the preliminary results of four ongoing studies on JOBS implementation. In all four studies, we have collected most of the data and are analyzing results. Specifically, these evaluations address program capacity, hard-to-serve AFDC families, states’ efforts to move JOBS participants to employment, and JOBS outcomes. We will issue reports for these studies when completed. To address the concern that states’ limited fiscal capacities and other factors constrain the expansion of education, training, and supportive services under JOBS, we are examining four key questions: (1) Who is, and is not, being served under the JOBS program, and what is the range of education, training, and support services they are receiving? (2) What are the constraints and barriers to expanding the JOBS program? (3) What are possible strategies for overcoming these barriers? (4) What are the implications of these findings for the design of a time-limited welfare system?
To answer the question about who is currently being served and who is not and the range of services participants are receiving, we are analyzing national data from HHS on JOBS participants and AFDC recipients for fiscal year 1992, including the JOBS participant database and the AFDC quality control file. We are also analyzing the 1992 Current Population Survey. In addition, we conducted computer-aided telephone interviews of a nationally representative stratified sample of county and local JOBS programs to identify the extent of current capacity constraints. We held discussions with four small groups of county-level program officials to identify constraints on various JOBS expansion scenarios and strategies for expanding the JOBS program. We identified implications for a time-limited welfare system through discussions with these small groups and our own analysis. Some AFDC recipients have personal, family, or situational problems that can interfere with their attendance in JOBS activities or employment. To determine the extent of these problems and whether these hard-to-serve families are receiving services, we addressed the following questions: (1) Who are the hard to serve and what portion of AFDC recipients do they represent? (2) To what extent are they referred to and receiving social services? (3) What factors discourage states from serving more of them? (4) What approaches are effective in meeting their needs? (5) What is the implication for this group of time-limited benefits under welfare reform? To determine the size, characteristics, and needs of hard-to-serve families, we surveyed state JOBS administrators on the difficulty of identifying the hard to serve and the likelihood that referrals are made; programmatic and situational factors that discourage states from serving them; and the effect welfare reform proposals may have in helping the hard to serve become self-sufficient. 
We also interviewed program officials and experts, analyzed federal and state data, researched the literature to identify state and local programs using various service strategies for the hard to serve, and visited selected sites. JOBS provides state and local welfare agencies several tools to find and create employment opportunities for AFDC recipients participating in JOBS. These include job development and placement, work experience, on-the-job training, and work supplementation or grant diversion. To learn about states’ experiences in using these tools and provide information on proposed reforms, we addressed these questions: (1) To what extent are states using private-sector job development and placement, subsidized employment, and work experience for welfare participants? (2) What are the barriers to expanding such efforts? (3) How might these barriers be overcome? (4) What are the implications of these findings for the design of a time-limited AFDC program? To answer these questions, we surveyed a nationally representative stratified random sample of county JOBS administrators. For additional information on strategies, barriers, and implications for reform, we spoke with program administrators at HHS and the Department of Labor, welfare experts, union officials, and welfare advocates. We also visited sites using job development, employer subsidies, and work programs. In addition, we collected AFDC and economic data to describe selected aspects of the sampled counties and to understand the implications of our findings for welfare reform. To learn about JOBS outcomes, we addressed the following questions: (1) What outcome data exist on the number of JOBS participants who are finding employment and leaving welfare? (2) To what extent are HHS and the states monitoring JOBS program outcomes and using performance standards? (3) What issues should be considered in establishing an effective national JOBS performance monitoring system? 
To assess overall program objectives, operating philosophies, and performance monitoring practices, we surveyed the JOBS program directors in the 50 states. We analyzed data on client outcomes that the states report to HHS as well as outcome data reported on the questionnaire. We discussed with HHS officials their approach to JOBS performance monitoring and issues related to establishing national performance standards for JOBS. We also consulted other experts about the latter. Child Care: Current System Could Undermine Goals of Welfare Reform (GAO/T-HEHS-94-238, Sept. 20, 1994). Although almost 10 million children are on welfare today, the existing welfare system requires few of their parents to be in school or training. Welfare reform proposals, however, would require many more welfare recipients to participate in education or training as well as require them to find work after 2 years. Should such proposals be enacted, many more welfare parents will need child care subsidies. Yet only a small fraction of eligible parents have received child care subsidies. Furthermore, the fragmented nature of the child care funding streams, with entitlements to some client categories, time limits on others, and activity limits on others, produces unintended gaps in services. This limits the ability of low-income families to become self-sufficient. Finally, as states deplete funds for welfare clients, they often turn to funds earmarked for the child care needs of the working poor, putting the working poor at greater risk of welfare dependency. For all of these reasons, GAO believes that welfare reform’s goal of economic independence for the poor could be undermined if the problems in the child care subsidy system are not adequately addressed. JOBS and JTPA: Tracking Spending, Outcomes, and Program Performance (GAO/HEHS-94-177, July 15, 1994). This report provides information on JOBS and JTPA, which Congress is considering consolidating. 
Together, the two programs account for about 60 percent of the federal employment and training funds for the nation’s poor. Although JOBS is limited to welfare recipients, JTPA serves other economically disadvantaged persons as well. In examining the interrelationship between the two programs, GAO discusses how funds are spent and reported for education, job training, support services, and program administration. In addition, GAO examines the outcome-focused data that are collected and performance standards for the two programs. Welfare to Work: JOBS Automated Systems Do Not Focus on Program’s Employment Objective (GAO/AIMD-94-44, June 8, 1994). JOBS is intended to help people avoid long-term welfare dependence by providing the education, training, work experiences, and services needed to obtain jobs. Although additional effort will be needed by HHS and the states to correct lingering data problems and incorporate further automation, the states have made progress developing computer systems to support the JOBS program. These systems, however, are narrowly focused on tracking program participants and collecting and reporting data to HHS, missing the greater opportunity that the systems could offer. Despite the millions of dollars in welfare costs that could be saved by moving people off welfare and into jobs, HHS failed to determine how information technology could best be applied to help achieve this objective. Families on Welfare: Sharp Rise in Never-Married Women Reflects Societal Trend (GAO/HEHS-94-92, May 31, 1994). From 1976 to 1992, the proportion of single women receiving welfare who had never been married more than doubled, rising from 21 percent to 52 percent. This change parallels a broader societal trend among all single mothers. Women receiving welfare in 1992 were also more likely to have a high school diploma and to have fewer children. These demographic changes among single women receiving welfare mirrored similar trends among all single mothers. 
However, single women on welfare in 1992 were poorer than in 1976, even though they worked in about the same proportions. Total family incomes dropped due to a decline in the real value of earnings and welfare benefits. The dramatic growth in the number of never-married women receiving welfare has important policy implications. Not only have never-married women and their families driven welfare caseloads to record levels, but these families also affect other programs. For example, child support is hard to obtain for never-married women, who are less likely to have child support orders. Moreover, because the growth in never-married women receiving welfare reflects broader societal trends, it is unclear what impact welfare reform may have on the growth in the number and proportion of never-married women receiving welfare. Families on Welfare: Teenage Mothers Least Likely to Become Self-Sufficient (GAO/HEHS-94-115, May 31, 1994). Women who gave birth as teenagers make up nearly half the welfare caseload—a sizable group. GAO found that this group of women is less likely to have high school diplomas and more likely to have larger families. Both these characteristics increase the likelihood of this group’s being among the poorest welfare recipients. Even though they work in the same proportions as other women receiving welfare, they earn less and are more likely to have total family income below 50 percent of the poverty line. Given these differences, teenage mothers may have the hardest time earning their way off welfare and becoming self-sufficient. As the Congress debates welfare reform, it may need to explore ways to discourage young mothers from becoming welfare dependent and encourage those who do to become more self-sufficient. Families on Welfare: Focus on Teenage Mothers Could Enhance Welfare Reform Efforts (GAO/HEHS-94-112, May 31, 1994). 
Welfare families headed by women who have less than a high school education, little recent work experience, or children younger than age 6 are less likely to get off welfare quickly than are other families. These characteristics are especially prevalent among teenage mothers receiving welfare. Moreover, teenage mothers have long-term implications for the welfare system. Together, current and former teenage mothers make up a large percentage of the welfare caseload, totaling nearly 42 percent of all single women on welfare in 1992. And they are among the poorest welfare recipients—more than half of women who gave birth as teenagers had total family incomes below 50 percent of the poverty line in 1992. Child Care: Working Poor and Welfare Recipients Face Service Gaps (GAO/HEHS-94-87, May 13, 1994). In response to the growing number of working mothers with young children, the Congress created four new child care programs for low-income families. These programs received more than $1.5 billion in federal funding in fiscal year 1992. Although states are making strides toward coordination of federally funded child care services, some federal requirements, coupled with resource constraints, are creating gaps in delivering these services to the poor. Specific service gaps stem from program differences in (1) categories of clients who can be served, (2) limits on the type of employment that clients can undertake without compromising their benefits, (3) limits on the amount of income clients can earn without losing their eligibility, and (4) limits on the time during which clients can receive child care subsidies. Despite congressional expectations that the block grant, the largest of the four programs, would motivate states to boost direct support to working poor families needing child care, the existing fragmented system of subsidized child care appears to provide little incentive for states to do so. 
In an environment of finite resources, when the child care programs for welfare and recent welfare recipients are entitlements, there is pressure to serve these groups while equally needy working poor families may go unaided. Moreover, each of the four programs unintentionally divides the poor into categories that fail to recognize the similarity of their economic plight and child care needs. State officials believe that they could better deliver child care that supports self-sufficiency if greater consistency existed across programs and if they had greater flexibility in spending their federal child care funds. Multiple Employment and Training Programs: Major Overhaul Is Needed (GAO/T-HEHS-94-109, Mar. 3, 1994). By GAO’s count, at least 154 programs run by 14 federal agencies provide $25 billion in employment training assistance to jobless people. Although well intended, these programs, when taken collectively, tend to confuse and frustrate their clients and administrators, hamper the delivery of services to those in need, and potentially duplicate efforts and accrue unnecessary costs. In addition, some programs lack basic training and monitoring systems needed to ensure efficient and effective service. Past efforts to fix the system have fallen short. As a result, more programs evolve every year, and the problems inherent in the system loom even larger. GAO testified that a major structural overhaul and consolidation of employment training programs is needed. The goal should be a customer-driven employment system guided by four principles: simplicity, tailored services, administrative efficiency, and accountability. The administration’s draft proposal to consolidate programs serving dislocated workers seems to be a step in the right direction; however, this consolidation needs to be part of a larger restructuring of employment training programs. GAO also has some questions about the proposal’s implementation. 
Child Care Quality: States’ Difficulties Enforcing Standards Confront Welfare Reform Plans (GAO/T-HEHS-94-99, Feb. 11, 1994). GAO questions the safety of child care being offered nationwide, both in terms of the physical environment—everything from working smoke detectors to properly stored food—and background checks for child care workers. Although the states are responsible for setting and enforcing quality standards, they are being challenged by the surge in demand for child care as well as by shrinking budgets. GAO found that 17 states did not conduct criminal background checks on child care center providers, and 21 states did not conduct checks of family day care providers. Although the Congress recently passed legislation to remedy this situation, it is too soon to know how much it will help. Welfare reform may also test states’ ability to protect children. Recent proposals requiring welfare recipients to participate in training programs and find work within 2 years may increase the demand for child care, potentially further straining state enforcement resources. Self-Sufficiency: Opportunities and Disincentives on the Road to Economic Independence (GAO/HRD-93-23, Aug. 6, 1993). The Family Self-Sufficiency Program, a partnership between the federal government and local public housing authorities, promotes local strategies to help poor families achieve economic independence and self-sufficiency. This report (1) examines how housing and social services policies affect beneficiaries when they land a job and increase their income and (2) analyzes the extent to which the law creates disincentives to upward income mobility. GAO concludes that training and supported work programs have successfully increased the earnings of the economically disadvantaged who participate in them, but on average the earnings increases are not enough for a family to break free from all housing and public assistance programs. 
Welfare to Work: States Move Unevenly to Serve Teen Parents in JOBS (GAO/HRD-93-74, July 7, 1993). JOBS can be used to help teen parents receiving welfare—even those considered hardest to serve—complete their high school education. In the 16 states GAO reviewed, about 24 percent of the teen parents receiving welfare had been enrolled in JOBS. The share of teen parents enrolled in each of these states, however, differed substantially, anywhere from 7 to 53 percent. Although the states varied in important ways that affected teen parents’ enrollment, this finding is not unexpected in a program such as JOBS, which is a financial and programmatic partnership between the federal and state governments. GAO cannot yet draw any firm conclusions about the effectiveness of JOBS in helping these young mothers. The numbers served are relatively small and not enough is known about the impact of JOBS on reducing welfare dependence among teen parents and their families. Moreover, JOBS is a relatively new program that has been operating in an environment of mounting fiscal distress and competing demands on state budgets. However, as state programs evolve, the economy recovers, and states choose to target more funds to JOBS, states may have greater capacity to enroll teen parents and strengthen the education and support services tailored to their needs. Because some teen parents have been improperly excluded from JOBS and states may be missing opportunities to enroll teen parents before they become welfare cases, GAO believes that steps should be taken to ensure that all teen parents are properly identified and told of the requirements for participating in JOBS. Welfare to Work: JOBS Participation Rate Data Unreliable for Assessing States’ Performance (GAO/HRD-93-73, May 5, 1993). To encourage state JOBS programs to serve more welfare recipients, the Congress mandated minimum participation rates that states must meet each year. 
States failing to meet or exceed the annual rates can lose millions of dollars in federal JOBS funds. GAO found that HHS is allocating millions of dollars in federal JOBS funds on the basis of inaccurate state-reported participation rate data. These data are not comparably derived across states and should not be relied on when comparing states’ performance. Much of the inaccuracy in these data is attributed to states’ difficulties in collecting and processing all the required data and misinterpretation of JOBS regulations and HHS instructions. As minimum annual participation rates rise, it will become even more important that these issues are resolved. GAO believes that unless HHS simplifies its participation rate reporting requirements and increases its oversight of states’ processes, states will continue to report noncomparable and inaccurate data. Welfare to Work: States Serve Least Job-Ready While Meeting JOBS Participation Rates (GAO/HRD-93-2, Nov. 12, 1992). Concerns have arisen that JOBS participation rate requirements may be discouraging states from serving the least job-ready welfare recipients, including educating and training them. GAO discovered, however, that these concerns are unsupported by data that states reported to HHS during fiscal year 1991. All but one state met the 7 percent participation rate for fiscal year 1991, and all spent at least 55 percent of their JOBS budgets on target group members. Of those welfare recipients served by states participating in JOBS during this period, 62 percent were target group members. These target group members were most often placed in education and training activities, with no more than 12 percent placed in job search activities. In addition, one in three target placements, compared with one in four nontarget placements, was in secondary and remedial educational activities. Welfare to Work: Implementation and Evaluation of Transitional Benefits Need HHS Action (GAO/HRD-92-118, Sept. 29, 1992). 
Under the FSA, families trying to work their way off welfare can receive up to 12 months of child care and medical assistance. Insufficient data prevent GAO from fully analyzing the issue of transitional benefits, including factors affecting their use and how long families receive such benefits. GAO concludes that evaluating transitional benefits will prove complex and challenging. Unless HHS renews its evaluation planning and data collection efforts, HHS will probably be unable to report to the Congress next year on the impact of transitional Medicaid on welfare dependency. In addition, the evaluation of transitional child care will be in jeopardy unless a strategy and schedule for completing it are developed. The number of families receiving transitional benefits grew during the first 15 months of the program. Yet many state policies, despite federal notification requirements, do not require that families be told about benefits when they become ineligible for welfare. Some state policies also prohibit families from applying for benefits retroactively within the 12-month eligibility period. Until these state policies are reviewed and brought into compliance with federal requirements, families in these states will be at greater risk of being uninformed about, and having limited access to, transitional benefits. Welfare to Work: States Begin JOBS, but Fiscal and Other Problems May Impede Their Progress (GAO/HRD-91-106, Sept. 27, 1991). States have made significant progress establishing their JOBS programs, but are experiencing difficulties that could reduce the program’s potential and slow states’ progress in helping people avoid long-term welfare dependence. All states had programs in place by the mandated implementation date of October 1990, and 31 were operating statewide in October 1990, 2 years earlier than the legislative requirement for programs to be operating statewide. 
In addition, most states are moving in new directions indicated by the Congress, such as making education and training important program components and targeting services to those with employment barriers. However, in their first year of implementing JOBS, states have reported experiencing, or expecting to experience, some difficulties, including shortages of such services as basic/remedial education and transportation. HHS has provided, and continues to provide, states with technical assistance to help them with their difficulties. However, service and funding shortages and poor economic conditions could decrease states’ abilities to operate JOBS and slow their progress. Mother-Only Families: Low Earnings Will Keep Many Children in Poverty (GAO/HRD-91-62, Apr. 2, 1991). In 1987 slightly over 60 percent of the children below the poverty line lived in families headed by the mother alone. This report (1) provides an empirical estimate of the magnitude of the problems mother-only families face in escaping from poverty and (2) examines federal policies that could help them. GAO found that many single mothers will remain at or near the poverty line despite their holding full-time jobs. Low earnings, vulnerability to layoffs, lack of important fringe benefits like health insurance, and relatively high expenses for child care are some hurdles these women face. These problems also challenge the federal programs that seek to reduce the number of children living in poverty. GAO found that 1990 legislation that expanded the earned income tax credit and child care subsidies could increase the percentage of poor families that get along without welfare. Nevertheless, if poor women do not obtain better job skills to increase their earnings, many will probably have to depend on public assistance and other income supplements to live above the poverty line. The AFDC program, food stamps, and child support payments are especially important income supplements. 
Work and Welfare: Current AFDC Programs and Implications for Federal Policy (GAO/HRD-87-34, Jan. 29, 1987). After analyzing numerous pre-JOBS work programs, GAO found that the variety of work program options gave states the flexibility to tailor their programs to local needs, but multiple legislative authorizations resulted in a patchwork of administrative responsibilities and a lack of overall program direction. To serve more participants, programs spread their limited funds thinly, providing inexpensive services, such as job search assistance, and paying for few support services. Yet, the programs GAO examined served only a minority of adult AFDC recipients in 1985, excluding any with young children or severe barriers to employment. Evaluations of the work programs have shown modest positive effects on the employment and earnings of participants. But wages were often insufficient to move participants off welfare. 
| Pursuant to a congressional request, GAO provided information on the progress the Job Opportunities and Basic Skills (JOBS) Program has made in: (1) serving a larger portion of Aid to Families with Dependent Children (AFDC) recipients; and (2) ensuring that JOBS participants obtain employment and leave AFDC. GAO found that: (1) about 11 percent of AFDC recipients participate in JOBS, and this percentage has not increased despite attempts to expand the base of AFDC recipients required to participate in JOBS; (2) many JOBS program administrators have reported that they could not always provide participants with the services they needed, such as basic education and job skills training, transportation, and child care; (3) although states have generally met federal requirements to target AFDC recipients at risk of long-term welfare dependence, many JOBS programs have not adequately served the difficult and complex needs of teen parents, alcohol or drug abusers, and others who are at risk for long-term welfare dependence; (4) JOBS program administrators have expressed concern about their capacity to increase program size and serve participants' varying needs; (5) JOBS program administrators have not fully used available resources to help AFDC participants obtain employment; (6) the JOBS performance measurement system, which focuses on participation rather than employment, provides little incentive for states to move clients into jobs; and (7) some factors affecting welfare dependence are outside the control of JOBS, including the low-wage labor market and a lack of financial incentives, health care coverage, child care, and transportation. |
Many federal agencies fund research to serve their goals and objectives. For example, NIH, the largest source of federal support for nondefense research, is the federal focal point for medical and behavioral research to help extend healthy life and reduce illness and disability. Each of the 27 institutes and centers that constitute NIH has an explicit mission focused on a particular disease, organ system, stage of development, or a cross-cutting mission, such as developing research tools. Other agencies, such as EPA, FDA, and FAA, support research, in part, to further scientific understanding that may in the future better inform their regulatory decisions. Nineteen offices within EPA conduct and/or support research to help carry out the regulatory aspect of the agency’s mission to protect human health and the environment and to implement environmental laws. Similarly, FDA relies on research to help identify and assess risks and to serve as the basis for regulatory decisions about such issues as human and veterinary drugs, medical devices, and the nation’s food supply. Finally, FAA, which enforces regulations and standards for the manufacture, operation, and maintenance of aircraft, conducts research to help ensure a safe and efficient system of air navigation and air traffic control. Federal research can be conducted by scientists in government laboratories—called intramural research—or by scientists at universities, in industry, or at nonprofit organizations—called extramural research. In fiscal year 2002, NIH, EPA, FDA, and FAA devoted a total of about $23 billion to intramural and extramural research. (See fig. 1.) Together, these four agencies accounted for about 50 percent of the federal funds devoted to research. Federal laws have created an environment conducive to a full range of joint ventures between government and industry, or between industry and universities, as well as among companies. 
Specifically, through collaboration, federal and nonfederal partners attempt to share the costs, risks, facilities, and expertise needed for research and to promote the movement of ideas and technologies between the public and private sectors. This cooperation between federal and private sector researchers may take many forms. Through informal cooperation, for example, federal agencies and industry may coordinate and share research agendas to prevent duplication of effort, or agency and private sector scientists may consult one another. Through formal cooperation, federal and nonfederal partners use written agreements, such as contracts or memorandums of understanding, to define the roles and responsibilities of each party. However, each type of arrangement differs in the extent of federal involvement in the research conducted under the agreement. Generally, work conducted under contracts is directed and overseen by federal agencies that do not participate in the work. In contrast, memorandums of understanding allow great flexibility in terms of participation by federal agencies and may also allow for sharing of resources or the funding of research by nonfederal partners. Congress may provide federal agencies the authority to accept gifts from external sources. For example, under the Public Health Service Act, certain agencies, such as NIH, may accept funds or nonmonetary gifts to support their research efforts or other agency functions. Under the act, donors may stipulate how agencies may use their gifts, for example, to only support research on a specific disease or condition, or they may allow the agency to use the gift for the benefit of any effort without stipulations. An agency’s statutory authority to accept donations is called its “gift acceptance authority.” In 2001 and 2003, NIEHS and ORD, respectively, entered into research arrangements with ACC to solicit and fund extramural research proposals. 
These arrangements specified how research proposals would be solicited, reviewed, funded, and overseen. Specifically, under the NIEHS-ACC arrangement, ACC and NIEHS agreed to support a 3-year research program to study the effects on reproduction and development of exposure to chemicals in the environment. ACC provided a gift of $1.05 million to NIEHS to fund this research, and NIEHS contributed $3.75 million to the project. Using the combined funds, NIEHS funded 17 of the 52 research proposals it received. The program ended in 2004. Under the ORD-ACC arrangement, ACC and ORD agreed to support and fund research, with the first solicitation for research proposals focusing on novel approaches to analyzing existing human exposure data. In response to this first announcement of funding availability, issued in July 2003, 36 research proposals were submitted. ORD funded four research proposals, for a total of about $1.7 million, and ACC funded two proposals, for a total of about $1 million. ORD and ACC separately funded the research proposals that each had selected under this arrangement because EPA does not have the authority to accept contributions from outside sources. Researchers could specify whether they wanted their proposals considered for funding solely by ORD or by either ORD or ACC. ACC is a nonprofit trade organization representing most major U.S. chemical companies. It represents the chemical industry on public policy issues, coordinates the industry’s research and testing programs, and leads the industry’s initiative to improve participating companies’ environmental, health, and safety performance. In 1999, ACC launched a $100 million research initiative to study the potential impacts of chemicals on human health and the environment and to help improve screening and testing methods. 
A primary goal of the initiative is to focus on projects or programs that might take advantage of work planned or conducted by EPA, NIEHS, and other laboratories to stimulate collaboration and/or to prevent unnecessary duplication. Individuals or organizations can have conflicts of interest that arise from their business or financial relationships. Typically, federal conflict-of-interest laws and regulations govern the actions of individual federal employees, including their financial interests in, and business or other relationships with, nonfederal organizations. Conflict-of-interest concerns about individual federal employees typically arise when employees receive compensation from outside organizations; such arrangements often require prior approval from the federal employer. When a federal agency enters into a relationship with, or accepts a gift from, a regulated company or industry, concerns may arise about the agency’s ability to fulfill its responsibilities impartially. The statutory provisions that NIEHS and ORD relied upon to enter into their arrangements with ACC grant the agencies broad authority to collaborate with external organizations in support of research. Nothing in these statutes appears to prohibit either agency from entering into research arrangements with nonprofit organizations such as ACC. NIEHS used the authorities granted to NIH’s institutes and centers under sections of the Public Health Service Act, as amended, to enter into its arrangement with ACC (sections 301 and 405). The act authorizes NIH and its institutes and centers to cooperate, assist, and promote the coordination of research into the causes, diagnosis, treatment, control, and prevention of physical and mental diseases. In its research arrangement with ACC, NIEHS cited sections of the act as the authority it relied on to enter into the arrangement. 
These sections enumerate the general powers and duties of the Secretary of Health and Human Services and the directors of the institutes and centers in broad terms, including the authority to encourage and support studies through grants, contracts, and cooperative agreements. Similarly, ORD relied on broad authorities granted to EPA under sections of the Clean Air Act, as amended; the Clean Water Act, as amended; and the Solid Waste Disposal Act, as amended, to enter into its research arrangement with ACC (sections 103, 104, and 8001, respectively). These sections authorize EPA to promote the coordination and acceleration of research relating to the causes, effects, extent, prevention, reduction, and elimination of pollution in the air and water, and from solid waste. These sections authorize the EPA Administrator and other EPA officials to cooperate with appropriate public and private agencies, institutions, organizations, and industry to conduct research and studies. NIEHS and ORD did not formally evaluate the possibility that organizational conflicts of interest could result from their research arrangements with ACC because neither agency had policies requiring such evaluations. However, officials at both agencies took steps to manage potential conflicts that might arise during implementation of the arrangements. In 2001 and 2003, when they entered into arrangements with ACC, neither NIH nor EPA had specific policies requiring officials to formally evaluate potential conflicts of interest that could result from entering into such collaborative arrangements. As a result, neither NIEHS nor ORD conducted such evaluations. During negotiations with ACC on their research arrangements, NIEHS and ORD officials recognized the potential for organizational conflicts of interest, or at least the appearance of such conflicts. 
However, in light of the lack of policies on this issue, neither agency formally evaluated the potential for conflicts before finalizing their arrangements with ACC. Instead, officials told us, they informally evaluated the potential for conflicts of interest and intended to manage potential conflicts that might arise during implementation. To date, neither agency has developed any such policy guidance. In implementing their arrangements with ACC, NIEHS and ORD used their general research management processes to help manage potential conflicts of interest. These processes are designed to help ensure the integrity of scientific research undertaken by these agencies. According to agency officials, these processes helped guard against undue influence of ACC by limiting ACC’s participation in the selection, review, and oversight of agency-funded research conducted under the arrangements. For example: Developing research topics. Research priorities at both NIEHS and ORD were identified through routine agency planning processes that involved significant input from a range of stakeholders before the arrangements with ACC were finalized. In addition, NIEHS included research topics suggested by the National Research Council, a congressionally chartered scientific advisory body. Both NIEHS and ORD then worked with ACC to select the specific scientific topics that would become the focus of the research conducted under the arrangements. According to NIEHS and ORD officials, their arrangements with ACC did not change or influence the agencies’ research priorities. Because the research conducted under these arrangements supported the agencies’ existing research agendas, officials believe that the ACC arrangements helped them effectively leverage federal research dollars. Advisory council consultation. Both agencies have advisory panels that they routinely consult on matters related to the conduct and support of research, among other things. 
These consultations include public sessions that allow interested individuals, in addition to the panel members, to provide comments on the topics discussed. NIEHS obtained approval from its National Advisory Environmental Health Sciences Council before entering into the arrangement with ACC. ORD did not specifically consult its Board of Scientific Counselors regarding the agency’s arrangement with ACC, but did seek input from the Board regarding the research priorities covered by the arrangement. Both advisory bodies were established under the Federal Advisory Committee Act and must comply with the requirements of the act as well as related regulations. Publicly announcing the availability of funds. Both NIEHS and ORD, in 2001 and 2003, respectively, announced throughout the scientific community the opportunity to apply for grant funds available under the arrangements with ACC. Both agencies announced the availability of funding on their Web sites and included detailed information on the research programs and how to apply for funds. Both agencies also posted announcements in publications that are commonly used to advertise the availability of federal funding. Specifically, NIEHS published an announcement in the NIH Guide to Grants and Contracts, and ORD published its announcement in the Catalog of Federal Domestic Assistance. In addition, both agencies sent announcements to relevant scientific and professional organizations and to interested scientists who had signed up for electronic notice of funding opportunities. ORD also published a notice in the Federal Register. By widely announcing the availability of funds, the agencies hoped to ensure the participation of many qualified researchers and to avoid the appearance of preferential treatment for specific researchers. Moreover, widely publicizing the availability of funds would help ensure the openness of the agencies’ research processes. 
However, the agencies differed in the clarity of their instructions regarding how information would be shared with ACC. For example, in the portion of the announcement labeled “special requirements,” NIEHS’s announcement stated that applicants “should,” among other things, submit a letter allowing NIEHS to share their proposals with ACC. According to NIEHS, this wording was not intended to be interpreted as a requirement but instead was intended to be a request. We believe that the language could have confused potential applicants about whether sharing information with ACC was required and could have dissuaded some qualified applicants from submitting proposals. In contrast, under the ORD-ACC arrangement, researchers were clearly advised that they could elect to have their proposals considered for funding by either ORD or ACC or solely by ORD. Applicants who did not want to share their proposals with ACC could elect to have their applications reviewed and considered solely by ORD. Determining completeness and responsiveness. Initially, NIEHS and ORD reviewed all submitted research proposals for compliance with administrative requirements. ACC did not participate in these reviews. At both agencies, research proposals judged incomplete were to receive no further consideration. NIEHS and ORD also had similar approaches for determining the responsiveness of the applications to the goals of the research program. At ORD, responsiveness was determined as part of the agency’s completeness review and did not involve ACC. Similarly, at NIEHS, responsiveness was determined solely by agency officials. Although NIEHS’s announcement stated that ACC would participate in the responsiveness review, NIEHS and ACC officials told us that ACC did not take part in this review. Peer review of research proposals. At both NIEHS and ORD, complete and responsive research proposals were independently peer reviewed for technical and scientific merit. 
According to officials, each agency followed its standard procedures for selecting experts to serve as peer reviewers and excluded representatives of ACC from serving as reviewers. At both agencies, only meritorious research proposals qualified for funding decisions. Both agencies also subjected these proposals to additional independent review. NIEHS’s National Advisory Environmental Health Sciences Council reviewed qualified proposals, and ORD required other EPA staff to review research proposals that were judged “excellent” or “very good” to help ensure a balanced research portfolio responsive to the agency’s existing research agenda. ACC convened its own technical panels to review qualified research proposals to ensure the relevancy of the proposals to the industry’s research needs and to ensure that the proposals balanced its research portfolio. Making results available to the public. NIEHS and ORD required— without input from ACC—the results of the research funded under the arrangements to be made public. For example, according to agency officials, NIEHS and ORD required researchers to discuss their preliminary findings in periodic public meetings, and, once their projects were completed, both agencies required researchers to submit their results for publication in peer-reviewed scientific journals. In addition, NIEHS strongly encouraged researchers to present their results at professional conferences and workshops. Officials from both agencies agreed that publicizing the results of research conducted under the arrangements helped ensure that agency-sponsored research adhered to accepted analytic standards and was unbiased. In addition to the routine research management processes, discussed in the previous section, officials at ORD took further steps that they believe helped them manage the potential for conflicts of interest in their collaboration with ACC. Specifically: Research arrangement developed with public input. 
ORD publicly announced that it might collaborate with ACC and invited public comment on the terms and conditions of the proposed partnership. In addition, ORD invited public comment on the draft announcement of the opportunity to apply for funding. ORD officials told us that they believed an open and public process to define the terms of ORD’s collaboration with ACC could help guard against real or perceived conflicts of interest. Membership of review panels. In addition to prohibiting ACC representatives from serving as expert reviewers, ORD did not allow employees of ACC member companies to serve on the peer review panels that evaluated research proposals for technical and scientific merit. ORD officials said this step helped minimize the perception that ACC or its members could play a role in evaluating the scientific merit of research proposals. When accepting funds from ACC under the research arrangement, NIEHS officials complied with those sections of NIH’s policy that guide the acknowledgement and administration of gifts. However, the policy’s guidance on evaluating and managing potential conflicts is extremely broad, lacking clarity and consistency. Consequently, officials have wide discretion in deciding how to fulfill their responsibilities under the gift acceptance policy. Further, the policy does not require officials to document the basis of their decisions. As a result, the gift policy does not provide the public sufficient assurance that potential conflicts of interest between NIH and donor organizations will be appropriately considered. Specifically, NIH’s gift acceptance policy outlines several steps that officials must take to acknowledge and administer gifts. NIEHS officials generally complied with these policy sections when accepting the gift from ACC. For example, NIEHS officials acknowledged the acceptance of ACC’s gift in a timely manner, deposited the funds in government accounts, and used the gift only for the purposes stipulated by ACC. 
As the policy also requires, NIEHS obtained ACC’s written agreement that any remaining funds could be used to further NIH’s goals without additional stipulation. However, other policy sections are inconsistent or unclear about what actions officials must take to evaluate conflicts of interest when accepting gifts—thereby affording officials wide discretion in carrying out their responsibilities. For example, one part of the policy in effect at that time and in subsequent revisions requires the approving official to use two assessment tools to evaluate conflicts of interest before accepting a gift, but another part of the policy states that the use of these tools is recommended rather than required. The Director of NIEHS, who had authority to accept the gift, said he was acutely aware that accepting the ACC money could pose the potential for real or apparent conflicts of interest. In light of his concerns, he spoke informally with the Acting NIH Director, senior NIEHS officials, NIH legal advisers, and senior officials from two external groups. Through these discussions and using his professional judgment, the NIEHS Director determined that accepting the ACC funds would not present a conflict of interest for NIEHS. When he decided to accept the ACC gift, the Director said that he was unaware of the assessment tools recommended by NIH’s policy. However, he believes the steps he and other NIEHS officials took in accepting ACC’s gift satisfied the gift acceptance policy regarding conflicts of interest. Given the lack of consistency in the policy sections that relate to conflicts of interest and the use of the assessment tools, it is difficult for us to determine whether the actions the director took complied with the NIH policy. Moreover, without documentation of his actions, we could not determine whether the steps he took were adequate to evaluate the potential for conflicts of interest. 
Furthermore, the policy in effect at that time and in subsequent revisions does not provide clear guidance on what type of coordination should occur between NIH offices in evaluating the potential for conflicts of interest when accepting a gift. For example, several NIEHS staff were concerned that the proposed ACC gift could result in an apparent conflict of interest and, consistent with NIH’s gift policy, forwarded the written agreement to the NIH Legal Advisor’s Office for review. However, the gift policy does not require staff to identify their concerns when seeking legal advice. According to these officials, in referring the agreement to NIH attorneys for review, they did not specifically request a determination of whether the gift would constitute a conflict of interest. As a result, the NIH attorneys conducted a general legal review of the gift and the proposed research arrangement, focusing primarily on the agency’s legal authority to enter into the arrangement. NIH legal staff told us that they could have provided assistance on conflict-of-interest issues had they been notified that the program staff had such concerns, or if, in their view, the gift or written agreement had contained clauses that were obviously illegal or contrary to NIH policy. If the policy had been clearer about how conflict-of-interest concerns are to be communicated to NIH attorneys, we believe the legal staff would have conducted a conflict-of-interest review. Finally, NIH’s policy does not require officials to document how they have addressed conflict-of-interest concerns. Neither the NIEHS Director nor other senior NIH officials documented their consideration of potential conflicts of interest when accepting the ACC gift. The lack of documentation, coupled with the broad discretion resulting from the inconsistency and lack of clarity in the policy, allows officials to satisfy requirements with a wide array of actions, ranging from a formal evaluation to a highly informal one. 
At NIH, we identified nine arrangements that were somewhat comparable to the ACC research arrangements, but we did not identify any similar arrangements at ORD, other EPA program offices, FDA, or FAA. None of the nonprofit partners in the nine research arrangements we found at NIH represents industry in the same direct manner that ACC represents the chemical industry. However, some of the nonprofit partners have either general corporate sponsorship or corporate sponsorship for specific events. For example, sponsors of the Parkinson’s Unity Walk in 2004 included pharmaceutical companies. The sponsors helped defray operating expenses to ensure that all proceeds from the walk supported Parkinson’s research. Likewise, the Juvenile Diabetes Research Foundation received corporate sponsorship from an airline company, manufacturers of soft drinks and household products, and others, none of whom had any material connection to the outcome of the research. One nonprofit partner is a corporation’s philanthropic foundation. At NIH, we found a total of 11 institutes and centers—either singly or with other institutes and centers—that had entered into research arrangements with one or more nonprofit partners. Under the terms of four of the arrangements, NIH accepted gift funds from nonprofit partners to support the research described in the arrangements. In four other arrangements, when NIH institutes or centers lacked sufficient money to fund all the research proposals rated highly by peer review panels, they forwarded the research proposals to their nonprofit partner(s) for possible funding. (See table 1 for details on the NIH arrangements.) At EPA, none of the 16 program and regional offices we contacted identified any arrangements similar to the research arrangement between ORD and ACC. In addition, we did not identify any partnerships similar to the ACC research arrangement at FDA or at FAA. 
FDA officials we contacted said the agency had no research arrangements similar to the ACC arrangement with organizations that represent industry. Finally, FAA officials said that the agency had not entered into any research arrangements like the arrangements with ACC and generally did not use this type of collaborative arrangement to conduct extramural research. Federally funded research advances scientific understanding and helps improve regulatory approaches to protecting human health and the environment. For both regulatory and nonregulatory agencies collaboration with external organizations is one mechanism to maximize the financial and intellectual resources available to federal agencies. However, collaboration, particularly with organizations that directly represent regulated industries, can raise concerns about conflicts of interest that could call into question the quality and independence of federally funded research. As a result, it is imperative that federal agencies ensure, before they enter into collaborative research arrangements with nonfederal partners, that they fully consider the potential for conflicts of interest. NIEHS and ORD relied on their general research management processes to minimize any potential conflicts of interest that might arise during implementation of their respective ACC arrangements. While these processes were appropriate for managing the arrangements, they were not specifically designed to address conflict-of-interest concerns and therefore cannot be considered adequate substitutes for formal conflict-of-interest evaluations. Consequently, without policies requiring officials at NIH and EPA to formally evaluate and manage potential conflicts of interest when they enter into collaborative arrangements such as those with ACC, neither agency can ensure that similar arrangements in the future will be systematically evaluated and managed for potential conflicts of interest. 
When accepting the gift from ACC, NIEHS officials believed their actions satisfied the conditions of the NIH gift acceptance policy regarding conflicts of interest. However, NIH’s policy—both the wide discretion allowed in deciding on whether and how officials should evaluate conflicts of interest and the lack of required documentation—provides little assurance of systematic evaluation of gifts that may present potential conflicts of interest for the agency. To allay concerns about the potential for conflicts of interest that may result from accepting gifts, officials should clearly document both their evaluation of the potential for conflicts of interest and the basis for their decisions to accept or reject a gift. The Director of NIH and the Administrator of EPA should develop formal policies for evaluating and managing potential conflicts of interest when entering into research arrangements with nongovernmental organizations, particularly those that represent regulated industry. The Director of NIH should further revise the NIH gift acceptance policy to require NIH officials to evaluate gifts, particularly from organizations that represent regulated industry, for potential conflicts of interest and to document the basis for their decisions, including what, if any, steps are needed to manage potential conflicts. We provided EPA and NIH with a draft of this report for their review and comment. EPA neither agreed nor disagreed with our recommendation, but provided technical comments that we have incorporated as appropriate. (See app. II.) NIH concurred with our recommendations and stated it would take steps to implement them. In addition, NIH emphasized that it is not a regulatory agency and suggested changes to the report to clarify its role. We have added language to clarify NIH’s relationship with the regulated industry. NIH also provided technical comments that we have incorporated as appropriate. NIH’s comments and our response are included in appendix III. 
As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days after the date of this letter. At that time, copies of this report will be sent to the congressional committees with jurisdiction over the Environmental Protection Agency and the National Institutes of Health; the Honorable Stephen L. Johnson, Acting Administrator of EPA; the Honorable Elias A. Zerhouni, Director of NIH; and the Honorable Joshua B. Bolten, Director of the Office of Management and Budget. This report will also be available at no charge on GAO’s home page at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix IV. As requested by the Ranking Member of the Subcommittee on Environment, Technology and Standards, House Committee on Science, and the Ranking Member of the Subcommittee on Research, House Committee on Science, we determined the (1) legal authority the National Institutes of Health’s (NIH) National Institute of Environmental Health Sciences (NIEHS) and the Environmental Protection Agency’s (EPA) Office of Research and Development (ORD) used to enter into arrangements with the American Chemistry Council (ACC); (2) extent to which NIEHS and ORD evaluated and managed the possibility that conflicts of interest could result from their arrangements; (3) extent to which NIEHS complied with NIH’s gift acceptance policy when accepting ACC’s funds; and (4) extent to which similar research arrangements exist within other offices and programs within NIH and EPA, as well as other regulatory agencies. To determine the legal authorities NIEHS and ORD relied on to enter the research arrangements with ACC to solicit and fund extramural research, we reviewed the statutes cited in agency documentation related to the arrangements. 
For NIH, these authorities included sections 301 and 405 of the Public Health Service (PHS) Act, as amended (42 U.S.C. §§ 241 and 284); and gift acceptance statutes contained in sections 231 and 405(b)(1)(H) of the PHS Act as amended (42 U.S.C. §§ 238, 284(b)(1)(H)). For ORD these authorities included section 103 of the Clean Air Act, as amended (42 U.S.C. § 7403), section 104 of the Clean Water Act, as amended (33 U.S.C. §1254), and section 8001 of the Solid Waste Disposal Act, as amended (42 U.S.C. § 6981). We also reviewed the following related documentation on delegations of authority: Memorandum from the Assistant Secretary for Health to Public Health Service Agency Heads for “Delegation of Authority To Accept Gifts Under Title XXI of the PHS, Miscellaneous” (July 10, 1995), and NIH Manual Chapter 1130, Delegations of Authority, Program: General #5 Accept Gifts Under Section 231 of the PHS Act, Program: General #10 National Library of Medicine. We also reviewed relevant legislative histories and Comptroller General decisions and interviewed attorneys at NIEHS and ORD about their reviews of the arrangements. Furthermore, we compared each agency’s policies and both formal arrangements with the authorities cited above. To determine what measures NIEHS and ORD took to evaluate and manage the potential that conflicts of interest could result from their arrangements with ACC, we interviewed program officials on their perceptions of conflict of interest when the ACC arrangement was being considered, as well as on the actions they took to develop and implement the arrangements. We also interviewed budget and legal officials, as appropriate, at each agency on their involvement in reviewing and completing the arrangements. We reviewed the research arrangements with ACC, as well as other documentation related to the arrangements, including correspondence between agency officials and ACC, interagency memorandums, and documentation of agency legal and other reviews. 
We considered statutes on conflict of interest and ethics guidelines that might address the need for agencies to consider and manage real or apparent conflicts of interest (18 U.S.C. § 209, and the Ethics in Government Act of 1978, 5 U.S.C. app. 4). Finally, we interviewed ACC officials to obtain their views on conflicts of interest and on the role of ACC representatives in developing the announcement of funding availability, reviewing and funding research proposals, and administering the grants. We did not test the NIEHS or ORD internal controls governing the administration of grants awarded under the arrangements. To determine whether NIEHS’s acceptance of ACC funds as a gift complied with NIH policy for accepting gifts, we collected and analyzed NIH’s policy for gift acceptance and we interviewed legal staff at NIEHS concerning their review of potential gifts and their assistance to program officials. We obtained and reviewed the research arrangement and related documentation on transferring and administering the gift funds. We interviewed program officials on their actions in accepting the funds and compared activities and documentation pertaining to NIEHS’s acceptance of ACC’s gift with the requirements and recommendations outlined in NIH’s policy. To determine the extent of similar research arrangements at other federal agencies, we identified officials responsible for 96 percent or more of the extramural research budgets at NIH, EPA, and two additional agencies. We then used a structured guide to determine what, if any, research arrangements the agencies had with external partners. In addition to NIEHS and ORD, we selected a nonprobability sample of two additional agencies on the basis of the magnitude of the research component of their mission and congressional interest. 
The two agencies selected were the Food and Drug Administration (FDA) and the Federal Aviation Administration (FAA) because each agency had a research component to its mission, a corresponding research budget, and a regulatory role. We determined that the selection was appropriate for our design and objectives and that the selection would generate valid and reliable evidence to support our work. To determine the extent to which arrangements exist within these four agencies, we obtained the most current available data on extramural research budgets from institutes and centers in NIH, program and regional offices in EPA, and the programs and centers at FAA and FDA. To assess the reliability of these data, we used a structured guide to interview officials at each agency responsible for maintaining the databases containing the data provided. Specifically, we obtained descriptions of the databases, how data are entered into the databases, quality control checks on the data, testing conducted on the data, and officials’ views on the accuracy and completeness of the data. We asked follow-up questions whenever necessary. FDA officials noted one limitation on the data that were provided. Specifically, when compiling data on research budgets, officials must sometimes subjectively interpret the term “research.” The impact of such interpretation may cause the extramural research figures for FDA to be slightly overstated. After taking these steps, we determined that the data were sufficiently reliable for the purposes of this report. We used these data to rank order the programs and centers and identify officials in each agency responsible for administering 96 percent or more of each agency’s extramural research budget. In our interviews with these officials, we focused on arrangements established since January 1999— specifically, arrangements with characteristics similar to the ACC arrangements. 
We looked for and considered arrangements with nongovernmental, nonacademic partners to sponsor research extramural to both organizations. We did not collect information or report on the use of other types of agency research cooperation with external partners such as cooperative research and development agreements or informal consultations between agency and external scientists. At NIH, we used a structured guide to interview officials at the following institutes or centers, listed in order of greatest to least extramural research grant-dollar totals, in fiscal year 2002: National Cancer Institute; National Heart, Lung, and Blood Institute; National Institute of Allergy and Infectious Diseases; National Institute of General Medical Sciences; National Institute of Diabetes and Digestive and Kidney Diseases; National Institute of Neurological Disorders and Stroke; National Institute of Mental Health; National Center for Research Resources; National Institute of Child Health and Human Development; National Institute on Drug Abuse; National Institute on Aging; National Eye Institute; NIEHS; National Institute of Arthritis and Musculoskeletal and Skin Diseases; National Human Genome Research Institute; National Institute on Alcohol Abuse and Alcoholism; National Institute on Deafness and Other Communication Disorders; National Institute of Dental and Craniofacial Research; National Institute of Nursing Research; and National Institute of Biomedical Imaging and Bioengineering. Together, these institutes and centers accounted for 99 percent of NIH’s total extramural research funds for fiscal year 2002. 
At EPA, we used a structured guide to interview program officials from the following offices and regions (shown in order of greatest to least funding available for extramural research in fiscal year 2003): ORD; Office of Water; Region 6; Region 9; Office of International Affairs; Region 3; Office of Solid Waste and Emergency Response; Region 4; Region 5; Region 1; Region 2; Region 7; Region 10; Region 8; Office of Prevention, Pesticides and Toxic Substances; and Office of Air and Radiation. Together, these offices accounted for 99 percent of EPA’s extramural research funds for fiscal year 2003. At FDA, we interviewed the agency official responsible for getting approval for Memorandums of Agreement from the General Counsel’s Office and Office of Grants Management and for ensuring that each agreement is published in the Federal Register. FDA does not accept funds from external partners under these agreements. Finally, at FAA, we interviewed officials from the research and development offices at headquarters as well as the division manager of the Acquisition, Materiel, and Grants Division of the William J. Hughes Technical Center. Together, these offices accounted for 96 percent of the agency’s fiscal year 2003 funds for extramural research. To independently corroborate the information obtained from agency officials, to the extent possible, we collected documents on the agreements we identified at these agencies and reviewed agency Web sites maintained by the relevant centers and offices, as well as Web sites maintained by external sources, such as advocacy or trade groups. We conducted our review from February 2004 through February 2005 in accordance with generally accepted government auditing standards. In addition to the individuals listed above, key contributions to this report were made by Amy Dingler, Karen Keegan, Judy Pagano, Carol Herrnstadt Shulman, Barbara Timmerman, Mindi Weisenbloom, and Eugene Wisnoski. 
Also contributing to this report were Anne Dievler and Jim Lager. | An institute at the National Institutes of Health (NIH) and an office in the Environmental Protection Agency (EPA) entered into collaborative arrangements with the American Chemistry Council (ACC) to support research on the health effects of chemical exposures. NIH accepted a gift from ACC to help fund the research. EPA and ACC funded their proposals separately. The arrangements raised concerns about the potential for ACC to influence research that could affect the chemical industry. GAO determined the agencies' legal authorities to enter into the arrangements; the extent to which the agencies evaluated and managed potential conflicts of interest resulting from these arrangements; the extent to which the NIH institute complied with NIH's gift acceptance policy; and the extent to which NIH, EPA, and other agencies have similar arrangements. NIH's National Institute of Environmental Health Sciences (NIEHS) used the authorities granted to NIH's institutes and centers under sections of the Public Health Service Act to enter into its arrangement with ACC. Similarly, EPA's Office of Research and Development (ORD) relied on authorities granted to EPA under sections of the Clean Air Act, the Clean Water Act, and the Solid Waste Disposal Act to enter into its research arrangement. Nothing in these statutes appears to prohibit either agency from entering into research arrangements with nonprofit organizations such as ACC. NIEHS and ORD did not formally evaluate the potential for conflicts of interest with ACC before they entered into the arrangements, but both agencies took steps to manage the potential as the arrangements were implemented. NIH and EPA had no specific policies requiring officials to evaluate or manage potential conflicts of interest when they entered into the ACC arrangements, nor do they currently have such policies. 
Although no formal evaluation occurred, agency officials managed the arrangements through their existing research management processes. Both agencies believe these actions helped mitigate the potential for undue influence by ACC and adequately protected the integrity of the scientific research conducted under the arrangements. Because the agencies' research management processes were not designed to address conflict of interest issues, they are not a substitute for a formal evaluation of such conflicts. Without policies requiring a formal evaluation and management of conflicts, there is no assurance that similar arrangements will be appropriately evaluated and managed for such conflicts in the future. NIEHS officials complied with portions of NIH's gift acceptance policy that guide the acknowledgement and administration of gifts. However, the policy's guidance on evaluating and managing potential conflicts is extremely broad, and it lacks clarity and consistency. As a result, the policy gives officials wide discretion in this area. In addition, the policy does not require the agency to document the basis for its decisions. Consequently, the policy does not provide sufficient assurance that potential conflicts of interest between NIH and donor organizations will be appropriately considered. While some institutes and centers at NIH had arrangements somewhat similar to the ACC arrangements, GAO did not find any similar arrangements at other program offices at EPA or at the Food and Drug Administration and the Federal Aviation Administration--two other agencies with significant research budgets. None of the nine research arrangements GAO found at NIH institutes and centers involve organizations that represent industry in the same direct manner that ACC represents the chemical industry.
Established by the Communications Act, FCC is charged with regulating interstate and international communications by radio, television, wire, satellite, and cable. The Telecommunications Act, which substantially revised the Communications Act, established that FCC should promote competition and reduce regulation to secure lower prices and higher-quality services for American telecommunications consumers and should encourage the rapid deployment of new telecommunications technologies. The law grants FCC broad authority to execute its functions. FCC implements its policy initiatives through a process known as rulemaking, which is the agency process for implementing, interpreting, or prescribing law or policy. Figure 1 shows some common communications services affected by FCC rulemaking. FCC is an independent regulatory agency that must follow many, but not all, federal laws related to rulemaking. Under the Communications Act, the commission is composed of five commissioners appointed by the President with Senate confirmation. The President designates one of the commissioners as chairman. The chairman derives authority from provisions in the act and FCC’s rules, which define the chairman’s duties to include (1) presiding at all meetings and sessions of the commission, (2) representing the commission in all matters relating to legislation and before other government offices, and (3) generally coordinating and organizing the work of the commission. The commissioners delegate many of FCC’s day-to-day responsibilities to the agency’s 7 bureaus and 10 offices (see fig. 2). While this report focuses on the rulemaking process, FCC also makes decisions on many other issues, such as enforcement actions and issuing licenses for communications devices. Between 2002 and 2006, according to FCC data, FCC commissioners made 1,835 decisions and FCC bureaus and offices made 17,406 decisions. APA is the principal law governing how agencies make rules.
The law prescribes uniform standards for rulemaking, requires agencies to inform the public about their rules, and provides opportunities for public participation in the rulemaking process. Most federal rules are promulgated using the APA-established informal rulemaking process, also known as “notice and comment” rulemaking. APA outlines a multistep process to initiate and develop rules and includes provisions for parties to challenge them, which FCC must follow. Many steps require agencies to provide public notice of proposed or final actions as well as provide a period for interested parties to comment on the notices—hence the “notice and comment” label. APA does not generally address time frames for informal rulemaking actions, limits on contacts between agency officials and stakeholders, or requirements for “closing” dockets. The Communications Act outlines procedures for addressing petitions for reconsideration by FCC and appeals to federal court for FCC rules. The act states that a petition for reconsideration may be filed within 30 days of the date of public notice. The U.S. Courts of Appeals have jurisdiction to review all final FCC rules. Other laws and orders also apply to FCC rulemakings, including but not limited to the following: Regulatory Flexibility Act. This act requires federal agencies to assess the impact of their forthcoming regulations on small businesses, small governmental jurisdictions, and certain small not-for-profit organizations. The act also requires rulemaking agencies to publish a “regulatory flexibility agenda” in the Federal Register each October and April, listing regulations that the agency expects to propose or promulgate and that are likely to have a significant economic impact on a substantial number of small entities. This requirement, as well as a similar requirement in Executive Order 12866, is generally met with entries in the Unified Agenda of Federal Regulatory and Deregulatory Actions.
The Unified Agenda is published twice a year in the Federal Register and provides uniform reporting of data on regulatory activities under development throughout the federal government. Congressional Review Act. This act requires agencies to submit final rules to Congress and GAO before they can take effect. We compile basic information about the rules we receive and make it available on our Web site through an on-line database, including the rule’s priority, listed as either “significant/substantive” or “routine/info/other” as indicated by the agency’s submission. According to the GAO database, 240 significant/substantive FCC rules were published in the Federal Register between January 2002 and December 2006. The Government in the Sunshine Act of 1976 (Sunshine Act). This act requires federal agencies headed by a collegial body composed of two or more individual members, such as FCC, to hold regular public meetings with sufficient public notice that the meeting will take place. The agency must release the meeting’s agenda, known as the Sunshine Agenda, no later than 1 week before the meeting. In addition, the act prohibits more than two of the five FCC commissioners from deliberating with one another to conduct agency business outside the context of the public meeting. E-Government Act of 2002. This act requires agencies, to the extent practicable, to accept public comments on proposed rules by electronic means and to ensure that a publicly accessible Web site contains electronic dockets for their proposed rules. Paperwork Reduction Act. This act seeks to minimize the paperwork burden imposed by government on the public and requires an agency to seek clearance from the Office of Management and Budget if it proposes to collect information from 10 or more people on a particular matter. For example, this requirement would apply to an agency’s proposed rule that might seek information from the public.
APA places no restriction on “off-the-record” or “ex parte” communication between agency decision makers and other persons during informal rulemaking. However, FCC has rules about such contacts to protect the fairness of its proceedings by providing an assurance that FCC decisions are not influenced by off-the-record communications between decision makers and others. The rules also give FCC the flexibility to obtain the information it needs for making decisions. Under its ex parte rules, FCC generally classifies its rulemaking proceedings as “permit-but-disclose,” meaning that outside parties are allowed to present information to FCC either in writing or in person, but are required to disclose such communications in the public record. The rules require a person making an oral ex parte presentation that includes data or arguments not already reflected in that person’s other filings to submit a disclosure to the record summarizing the new data and arguments. The rules state that the summary should generally be “more than a one or two sentence description” and not just a listing of the subjects discussed. When there is ambiguity about whether data or arguments are already in the public record, FCC encourages parties to briefly summarize the matters discussed at a meeting. FCC’s ex parte rules also establish the Sunshine Period, which begins when FCC releases the Sunshine Agenda of items scheduled for a vote at a public meeting and ends when those items are released to the public after the vote or are removed from the agenda before the meeting. During the Sunshine Period, the public may not contact the agency to discuss any matters that appear on the Sunshine Agenda unless there is a specific exemption. The Sunshine Period does not apply to items that are voted on by circulation. FCC rules state that staff must not directly or indirectly disclose nonpublic information outside the agency without authorization by the chairman.
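The Sunshine Period rule described above is essentially a timing test: contact about an item is barred from the release of the Sunshine Agenda until the item is released after the vote or removed from the agenda, and only for items on that agenda. The following sketch illustrates that logic as described in this report; it is not FCC tooling, and the function name and dates are hypothetical.

```python
from datetime import date

def contact_permitted(contact_day: date,
                      agenda_release: date,
                      item_resolved: date,
                      on_sunshine_agenda: bool,
                      exempt: bool = False) -> bool:
    """Illustrative check of the Sunshine Period rule: the period runs
    from release of the Sunshine Agenda until the item is released to
    the public after the vote or removed from the agenda."""
    if not on_sunshine_agenda:
        # Items voted on by circulation usually do not appear on the
        # agenda and so are not subject to the contact prohibition.
        return True
    if exempt:
        # The rule allows contact under specific exemptions.
        return True
    in_sunshine_period = agenda_release <= contact_day < item_resolved
    return not in_sunshine_period

# Hypothetical dates: agenda released Mar 1; item released after the vote Mar 8.
print(contact_permitted(date(2006, 3, 4), date(2006, 3, 1), date(2006, 3, 8), True))   # False
print(contact_permitted(date(2006, 3, 9), date(2006, 3, 1), date(2006, 3, 8), True))   # True
```

A contact during the week between agenda release and the vote is barred, while the same contact about an item adopted by circulation would not be.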
Nonpublic information includes the content of public meeting agenda items, except as required to comply with the Sunshine Act, and actions or decisions made by the commission at closed meetings or by circulation prior to the public release of such information. FCC’s rulemaking process includes multiple steps as outlined by law, with opportunities for the public to participate during each step. FCC initiates rulemaking in response to statutes, petitions for rulemaking, or its own initiative. Any person may petition FCC to amend or create new rules. After FCC releases an NPRM, it develops and analyzes the public record to support a rule, leading to a final rule for the commission to adopt. Anyone may participate in the development of the public record through electronic filings and meetings with FCC officials. The FCC chairman has the flexibility to decide what proposed and final rules the commission will consider for adoption. At the chairman’s discretion, the commission may adopt final rules either at a monthly public meeting or by circulating written items to each commissioner. Stakeholders unsatisfied with an FCC rule may file a petition for reconsideration with the commission or seek federal judicial review. FCC initiates rulemaking in response to statutes, petitions for rulemaking, or its own initiative. FCC may propose rules under its broad regulatory authority granted by the Communications Act or in response to a specific statutory requirement. Congress may also impose a time frame or conditions for rulemaking. FCC may also propose rules in response to petitions for rulemaking filed by outside parties. FCC puts petitions for rulemaking out for public comment and, after reviewing any comments, may initiate rulemaking on the issue or deny the petition. FCC does not have to respond to a rulemaking petition within a set time frame, so a petition for rulemaking can remain open indefinitely.
When initiating a new rulemaking, FCC has the flexibility to organize the public record in the manner that it deems most appropriate. FCC may establish a new docket to house the rule’s supporting documents or initiate the proceeding in an existing docket, particularly if the filings already in that docket would be germane to the new proceeding. As we have previously discussed, there were 240 significant/substantive rules in our database published in the Federal Register between 2002 and 2006. Among these recent rules, 47 were initiated in response to a petition for rulemaking, 172 were initiated on FCC’s own authority or in response to a specific legal requirement, and 21 were initiated for a combination of these reasons. The commission’s release of an NPRM signals the beginning of a rulemaking; the NPRM may or may not contain the text of the proposed new or revised rules. The NPRM provides an opportunity for comment on the proposal and indicates the length of the comment and reply comment periods, which stakeholders can use to submit comments and reply to other comments. These periods begin once the NPRM is published in the Federal Register. The NPRM also indicates the ex parte rules for contact under the rulemaking, which are generally “permit-but-disclose.” After FCC releases an NPRM, it begins developing and analyzing the public record to support a rule, leading to proposed final rules for the commission’s adoption. FCC provides multiple avenues for public participation during rule development, including opportunities to submit filings electronically and to meet with FCC officials. Outside parties provide FCC with written comments, reply comments, and other types of data to support their positions on a rulemaking. Outside parties are permitted to discuss rulemakings with FCC officials, including commissioners, at any time except during the Sunshine Period. FCC officials said they make every effort to meet with any outside party that requests a meeting. 
For these meetings, FCC rules require the outside party to submit an ex parte disclosure for each meeting, indicating what new data or arguments were presented during the meeting. This disclosure, like any other filing in the docket, becomes part of the public record available electronically to the public through FCC’s Electronic Comment Filing System (ECFS), a searchable Web-based depository of rulemaking notices and filings. Each document filed as part of the public record is associated with one or more dockets. FCC provides guidance to the public on its Web site for how to use ECFS and file comments. FCC officials said that they do not usually conduct their own studies in support of rulemaking issues. Instead, they rely mostly on external stakeholders to submit this information into the public record, and FCC staff analyze the information. In addition, stakeholders can critique each other’s data that are in the public record. On more technical issues, FCC’s Office of Engineering and Technology may conduct analyses for the public record, and information FCC routinely collects, such as data on broadband use, may be placed in a docket if it is relevant to a rulemaking. Also, while FCC officials said that they do not typically identify and reach out to parties to participate in rulemakings, they may contact a particular stakeholder and request additional information for the public record or use publicly available data, such as data from the U.S. Census, to augment the public record. For example, officials from the Wireline Competition Bureau told us they contact rulemaking participants if they need an additional level of detail in the public record to adequately support a rule. In some cases, FCC also holds field hearings, such as the current series of hearings on media ownership, to solicit comment for the public record on specific rulemaking issues. 
Using information contained in the public record as support, bureau staff draft proposed final rules for the commission to vote on. FCC officials must consider all timely filed comments and reply comments when developing a rule and have the flexibility to also consider information from ex parte filings. FCC officials said that they consider all types of comments filed in ECFS to support rulemakings; however, they said that specific comments on contentious rulemaking issues are more helpful than general comments that express support or opposition to a rule. FCC officials told us that they may also consider comments from stakeholders with a vested interest in an issue more seriously than those from other parties. FCC’s Office of General Counsel and Office of Managing Director provide rulemaking guidance to bureaus and review rules to determine whether the bureaus followed the steps required by various rulemaking laws. FCC uses an electronic system, known as the Electronic Management Tracking System, to track rulemakings and manage associated staff workload. The chairman decides when the commission will vote on final rules and whether the vote will occur during a public meeting or by circulation, which involves electronically circulating written items to each of the commissioners for approval. According to FCC officials, while it is not possible to vote on every rulemaking at a public meeting, items that are controversial or have a broad impact on the industry are more likely to be voted on during a public meeting. Of the 240 recent rules, 101 were adopted on the day the commission held a public meeting, indicating they may have been voted on at the meeting, while the other 139 appear to have been adopted by circulation. Three weeks before the commission considers an item at a public meeting, the chairman’s office releases to FCC officials the draft version of the proposed rules the commission expects to vote on at the public meeting.
These drafts are internal, nonpublic documents. FCC officials told us they do not release information to the public about what items the commission is planning to vote on at public meetings or items being circulated by the commission for adoption. FCC’s written rulemaking guidance states that such information is nonpublic and may not be disclosed in any format, including via paper, electronic, or oral means, unless the chairman authorizes its disclosure. For items to be voted on during a public meeting, the Office of Managing Director releases the Sunshine Agenda no later than 1 week before the meeting. The agenda includes a list of items the commission intends to vote on at the meeting and notifies the public that it may not contact the commission about those items during the week before the vote, the period known as the Sunshine Period. Items voted on through circulation do not usually appear on the Sunshine Agenda and, therefore, are not subject to the contact prohibition. According to FCC officials, at the chairman’s discretion, the commission could adopt items included on the Sunshine Agenda by circulation in advance of the public meeting. No more than two commissioners may meet to deliberate on rulemaking matters outside of an official public meeting, according to a requirement of the Sunshine Act. Some FCC commissioners have said that this requirement should be changed because it creates logistical complications and transfers the daily discussion of rules down from the commissioners to their staff. We did not evaluate these claims. Once the commission adopts a rule, the originating bureau often makes technical corrections to it and may also make substantive changes. Each commissioner is given the final rule before it is released and can decide if the rule has undergone substantive changes. 
Any substantive changes are approved by the commissioners, and the rule goes through a final internal review before FCC releases the rule and submits it to the Federal Register for publication. FCC may adopt and release some rules on the same day, while other rules may require months of revision because the commission may vote on a particular issue or policy position and not the precise wording of the rule. When this occurs, the final wording of the rule is approved by all commissioners before the order is released. It is difficult to determine time frames for FCC rules because FCC tracks which dockets are open, and many rules are in dockets that have been open for a long time. These dockets may include other rules or may have remaining issues to address. For example, one docket that has been open since 1980 includes several NPRMs and rules. A recent rule in this docket, issued in 2006, was attached to a Further Notice of Proposed Rulemaking, and was published in the Federal Register as an “interim rule,” indicating the issue is ongoing even though FCC has released several rules in the docket. Documents that are both an NPRM and a rule can be difficult to find in FCC’s database because the database allows a document to have only one document-type label, even if the document serves multiple purposes. As a result, a document filed as a rule that also includes an NPRM would not be found in a search for NPRMs. FCC may also develop rules on the basis of comments solicited from notices other than a docket’s first NPRM. For example, a 2005 rule from a docket that began in 1997 was supported by an analysis of comments solicited in 2000 and 2003. FCC officials told us that some dockets—particularly those that address complex issues—contain multiple rulemakings. Specifically, a docket may contain different NPRMs and rules that are issued at different times.
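The single-label limitation described above is a familiar data-modeling problem: a document that serves two purposes can carry only one type tag, so a search by the other type misses it. The sketch below illustrates the effect with hypothetical document identifiers and a simplified schema; it does not depict FCC's actual database.

```python
# Single-label schema (as the report describes): one document-type per record.
single_label = [
    {"id": "DOC-A", "type": "rule"},   # hypothetically also contains an NPRM,
    {"id": "DOC-B", "type": "nprm"},   # but only one label can be stored
]

# Multi-label alternative: a document may carry every type that applies.
multi_label = [
    {"id": "DOC-A", "types": {"rule", "nprm"}},
    {"id": "DOC-B", "types": {"nprm"}},
]

def search_single(docs, doc_type):
    """Return ids whose single label matches the requested type."""
    return [d["id"] for d in docs if d["type"] == doc_type]

def search_multi(docs, doc_type):
    """Return ids whose label set contains the requested type."""
    return [d["id"] for d in docs if doc_type in d["types"]]

print(search_single(single_label, "nprm"))  # ['DOC-B'] -- the combined document is missed
print(search_multi(multi_label, "nprm"))    # ['DOC-A', 'DOC-B']
```

Allowing a set of type labels per record, rather than a single field, would let a combined NPRM-and-rule document surface in searches for either type.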
Therefore, a rule may be made in a docket to address a long-standing issue, but relate specifically to an NPRM that was not released until years after the docket was established. Consequently, some rules could be considered to have shorter time frames because they address issues primarily raised in subsequent NPRMs released in the same docket. FCC officials told us that they do not track the time it takes to complete a rulemaking and generally are not required by statute to complete rules within certain time frames. The time it takes to complete rules may vary because of the unique nature of each rulemaking. Certain factors, such as the technical complexity of the issue being addressed and the priority of the rulemaking in comparison to other issues, can also affect rulemaking time frames. We also reviewed rulemaking at the Environmental Protection Agency and Federal Trade Commission, but we could not compare their time frames with FCC’s because of differences in their rulemaking processes. Stakeholders unsatisfied with an FCC rule may file a petition for reconsideration with the commission or petition for federal judicial review. Stakeholders are allowed 30 days after a rule is published in the Federal Register to file a petition for reconsideration, although FCC usually has no required time frame for acting on such a petition. FCC officials said they give priority consideration to petitions identifying problems with the rules they should correct quickly. Parties may also petition the U.S. Courts of Appeals for review of an FCC rule, typically after the commission has already considered the issue, such as after FCC has denied a party’s reconsideration petition.
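The 30-day reconsideration window is simple calendar arithmetic: the filing deadline is 30 days after the date of public notice. A minimal sketch of that computation, with a hypothetical publication date:

```python
from datetime import date, timedelta

def reconsideration_deadline(public_notice: date) -> date:
    """Last day to file a petition for reconsideration: 30 days after
    the date of public notice, per this report's description of the
    Communications Act's procedure."""
    return public_notice + timedelta(days=30)

# Hypothetical Federal Register publication date.
print(reconsideration_deadline(date(2005, 6, 1)))  # 2005-07-01
```

Date arithmetic with `timedelta` handles month boundaries automatically, so a rule published mid-December yields a deadline in January of the following year.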
An appeals court may uphold, vacate (hold unlawful or set aside), or remand an FCC rule (send it back to the agency for further consideration) entirely or in part, which may lead the commission to take additional action on the rulemaking, such as issuing a new version of the rule to address the court’s concerns. Twenty-five of the 240 recent rules had published opinions from the U.S. Courts of Appeals resulting from challenges. According to these opinions, the court denied or dismissed the challenges to 19 rules, and 6 rules, either wholly or in part, were determined to be unlawful or sent back to FCC for further consideration. In addition, according to FCC data, challenges to 12 of the 240 recent rules were pending in the U.S. Courts of Appeals as of June 2007. FCC generally followed the rulemaking process in the four case studies we reviewed. Specifically, each rulemaking included an NPRM and a notice and comment period. In reviewing the docket for each case study, we found that most—but not all—ex parte filings complied with the ex parte rules, and there was no evidence that FCC violated its Sunshine Period rule. However, we found that multiple stakeholders—both those involved in our case studies and other stakeholders who often file comments in FCC rulemakings—often knew when proposed rules were scheduled for a vote well before FCC released the agenda to the public and before the Sunshine Period began. This advance information is not supposed to be released outside of FCC. Other stakeholders with whom we spoke told us that they cannot learn when rules are scheduled for a vote until the agenda is publicly available. At that time, FCC rules prohibit stakeholders from lobbying, or making presentations to, FCC. This unequal access to information could create an unfair advantage for FCC stakeholders who know when FCC is about to vote on a rulemaking and, therefore, would know when it is most effective to present their arguments to FCC officials. 
We reviewed four case studies of rules that were released from 2002 through 2006. Each of these rules originated in a different bureau—Media, Wireless Telecommunications, or Wireline Competition—or in FCC’s Office of Engineering and Technology. These bureaus and this office had the most rulemakings during the period of our analysis. Table 1 describes each of our case studies. Among our four case studies, three rules were initiated by FCC in response to specific statutory requirements or on its own initiative, and one rule—amateur radio service—was initiated in response to petitions for rulemaking. Each of our case studies began with an NPRM, which, among other things, included either the language of the proposed rule or a description of the subjects and issues involved. The NPRM also described the notice and comment period during which stakeholders could file comments that FCC must consider. One NPRM—amateur radio service—included a specific proposal for the rule, while the other NPRMs included only the subject of the rulemaking. For example, in the public safety interference rulemaking, the NPRM discussed various methods to minimize interference and asked for comments on these proposals. Some stakeholders told us that it is much easier to comment on an NPRM that includes a proposed rule. According to these stakeholders, it is easier to comment on a specific proposal instead of trying to comment on an entire subject or a range of proposals. However, there is no requirement that an NPRM include a proposed rule. One NPRM that we reviewed contained an error, but that error did not appear to substantially affect the rulemaking. Specifically, the NPRM for the rulemaking on cable boxes incorrectly stated that the rulemaking was not bound by ex parte rules. FCC officials told us that this error occurred because language was inadvertently carried over from an earlier drafting of a notice of inquiry on cable boxes.
FCC later decided to issue an NPRM instead of a notice of inquiry, but accidentally left in the language that the proceeding was not bound by ex parte rules. FCC officials told us that this mistake had no bearing on the rulemaking because stakeholders submitted ex parte filings anyway. Our review of the docket confirmed that stakeholders submitted ex parte filings in this rulemaking. In each case study, numerous stakeholders participated in rule development, both during and after the comment period. Specifically, the case studies had between 42 comments (for the cable boxes rule) and 273 comments (for the public safety interference rule) filed during the comment period. In addition, FCC received between 8 (for the amateur radio service rule) and 2,237 (for the Incumbent Local Exchange Carrier (ILEC) unbundling rule) ex parte filings and comments in the docket after the formal comment period ended. These filings and comments, which were filed through FCC’s Web site, either reflect a meeting between FCC officials and stakeholders or are written comments that stakeholders submitted after the formal comment period had ended. The comments range from lengthy studies to one-page, mass-produced e-mails. In formulating a rule, FCC must consider all comments that are filed during the comment period and may consider comments that are filed after the comment period. Each filed comment is placed in a docket that is publicly accessible through FCC’s Web site. Generally, FCC does not produce its own studies to develop a rule. Rather, FCC relies on stakeholders to submit information and analysis that is then placed in the docket so that FCC and other stakeholders can critique the information. According to FCC officials, this results in both transparency and quality information because each stakeholder has had an opportunity to review and comment on all of the information in the docket. In addition to submitting comments, stakeholders often meet with FCC staff to discuss issues. 
Stakeholders involved in our case studies told us that they were able to meet with FCC officials when they requested meetings. Other stakeholders with whom we spoke who were not involved in these case studies but regularly comment on FCC rulemakings also told us that they were able to meet with FCC officials. A few of these stakeholders told us that, while FCC officials were always willing to meet with them, they were sometimes unsure if FCC officials were currently focused on the rulemaking that was being discussed. FCC officials told us that they meet with stakeholders who request a meeting, even if the stakeholders have no new information to present. We found that most of the hundreds of ex parte filings in the four case studies appeared to meet the requirements, but several did not appear to be sufficient. These filings, which publicly document when stakeholders meet with FCC officials, generally detailed who attended the meeting or what arguments were raised. However, in the cable boxes rulemaking, one filing did not reveal which organization was represented at the meeting or what was discussed. Another filing discussed new information that was supported by a research report, but the report was not included with the ex parte filing. If the information is not filed in the public record with the ex parte filing, then it cannot be used to support a rulemaking. Therefore, stakeholders have an incentive to file complete ex parte disclosures. We also found some ex parte comments in three of the four case studies that do not describe the discussion at the meeting, but refer to comments already in the docket. While such filings may comply with ex parte rules, the effect of this type of filing is mixed. Because the ex parte filing does not explain those positions, it may not be very helpful to other stakeholders, who would have to go to the docket and look up the party’s filed comments.
Some stakeholders told us that there is nothing wrong with these kinds of ex parte filings. According to these stakeholders, they already know the other stakeholders and what their arguments are. As a result, these stakeholders are not concerned about the adequacy of ex parte filings. In contrast, a few stakeholders told us that they do not have the time or the resources to monitor each docket and, therefore, such ex parte filings are not helpful. FCC officials told us that, generally, bureau staff both remind stakeholders to submit ex parte filings and review them for accuracy. If a filing is not accurate or complete, the staff member asks the stakeholder to resubmit the filing. Stakeholders may file a complaint with FCC if they believe that other stakeholders have not provided complete ex parte filings. However, according to FCC officials, very few complaints have been filed. We found that FCC followed its Sunshine Period rule that prohibits unauthorized contact with FCC. In reviewing dockets and meeting with stakeholders and FCC officials, we found no evidence of any prohibited contact during the Sunshine Period. According to FCC officials and numerous stakeholders, the rules are well known and stakeholders do not generally request meetings or submit comments during this time. FCC officials told us that, if stakeholders do try to contact FCC or submit comments during the Sunshine Period, then FCC takes steps to ensure that the comments are not seen by the FCC staff working on the rulemaking. In the ILEC unbundling rulemaking, we found that several stakeholders submitted comments to FCC during the Sunshine Period. We also found that those comments were prominently marked in FCC’s ECFS as comments that were submitted during the Sunshine Period and were not to be viewed by FCC staff working on the rulemaking. The rule in the ILEC unbundling case study took a number of months to be released after the commission voted to approve it.
The rule was adopted at a public meeting in February 2003, but was not released until August 2003. According to FCC officials, this delay was necessary because the final order was approximately 800 pages and the actual wording of the order was not voted on during the public meeting. Rather, the meeting included votes on the policy positions and issues associated with the order but not on the actual language. After the rule was adopted by the commission, Wireline Competition Bureau staff worked with relevant offices to draft the precise wording of the order, and then there were multiple discussions, comments, and revisions as the order went through each commissioner’s office and each substantive change was approved by the commissioners. The rules in our four case studies took between 1.0 and 4.5 years to complete from the time the related NPRM was issued. Generally, however, stakeholders told us that they are not concerned about the time it takes to conduct a rulemaking. Stakeholders told us that, if they support a rulemaking, they would like it to be completed more quickly. However, these same stakeholders said they may oppose another rulemaking and would like that rulemaking to proceed slowly or not be completed at all. In contrast, another stakeholder told us that they always support quick rulemakings because the businesses they represent prefer a stable regulatory market, and ongoing rulemakings create uncertainty for some businesses and their investors. Three of the four rulemakings we reviewed were challenged in court. Both the public safety interference and the cable boxes rules were upheld by the U.S. Court of Appeals for the D.C. Circuit. In the ILEC unbundling rulemaking, the D.C. Circuit vacated and remanded the rule in part, which means that part of the rule was struck down, part of the rule was returned to FCC for reconsideration, and part of the rule was upheld. 
In response to the court’s ruling, in August 2004, FCC issued another rule and NPRM soliciting comment on alternatives that would be consistent with the court’s ruling, as well as a rule implementing a 12-month plan to stabilize the telecommunications market while the new rules were being written. Six months later, FCC issued a rule that the commission said was consistent with the court’s guidance. This rule was also challenged and upheld by the D.C. Circuit. Several stakeholders told us that they learn which items FCC is about to vote on even though that information is not supposed to be released outside of FCC. FCC circulates information internally approximately 3 weeks before a public meeting to inform FCC staff of what is scheduled to be voted on at the public meeting. FCC rules prohibit the disclosure of this information to anyone outside of FCC. Specifically, the information is considered nonpublic information and cannot be released by any FCC employee without authorization from the FCC chairman. FCC officials in the units responsible for the case study rules and FCC officials in the units that conducted most of the rulemakings between 2002 and 2006 all told us that this is nonpublic information, and that they do not release it outside of FCC. However, nine stakeholders—both those involved in the case studies we reviewed and other stakeholders with whom we spoke who regularly participate in FCC rulemakings—told us that they hear this information from both FCC bureau staff and commissioner staff. One stakeholder— representing a large organization that is involved in numerous rulemakings—told us that FCC staff call them and tell them what items are scheduled for a vote. In contrast, a number of other stakeholders told us that they do not learn this information and do not know which items are scheduled for a vote. 
These stakeholders, who generally represent consumer and public-interest groups, told us that they do not know when FCC is about to vote on a rulemaking or when it would be best to meet with FCC staff to make their arguments. In contrast, stakeholders who know which items have been scheduled for a vote know when to schedule a meeting with FCC commissioners and staff because they know when FCC is about to vote on a rulemaking. FCC officials told us that, for stakeholders to successfully make their case before FCC, “timing is everything.” Specifically, if a stakeholder knows that a proposed rule has been scheduled for a vote and may be voted on in 3 weeks, that stakeholder can schedule a meeting with FCC officials before the rule is voted on. In contrast, a stakeholder who does not know that the rule is scheduled for a vote may not learn that the rule will be voted on until the agenda is announced 1 week before the public meeting. However, once the agenda has been announced, the Sunshine Period begins, and no one can lobby FCC officials about the proposed rule. As a result, the stakeholder who learns that a rule has been scheduled for a vote 3 weeks before the vote can have a distinct advantage over a stakeholder who learns about an upcoming vote through the public agenda. Our case study reviews and discussions with multiple stakeholders showed that some stakeholders know this nonpublic information and, as a result, these stakeholders may have an advantage in the rulemaking process. Even though advance knowledge that a rule is scheduled for a vote is nonpublic information, it has been reported by news agencies in the past. Specifically, in the cable boxes rulemaking, an industry newspaper published a story stating that the proposed rule would likely be circulated among the commissioners for a vote within the next few weeks. 
The newspaper attributed this information to unnamed "FCC sources." The complexity and number of issues within a docket and the priority the commission places on an issue may all factor into how long dockets, and the rulemakings within these dockets, remain open. FCC tracks open dockets, which may contain one or more rulemakings. The commission determines which rulemakings are a priority and when to open and close a docket; therefore, the commission determines how a rulemaking and a docket progress. Specifically, a docket may remain open because it is broad and is intended to include multiple rulemakings or because the commission has not voted to close the docket even though the docket includes completed rulemakings. Some rulemakings may remain open because they involve complex, technical issues or because competing priorities can force FCC officials to work on one rulemaking as opposed to another. As of December 2006, FCC had 133 open dockets on the Unified Agenda, 99 of which originated from three bureaus—Media, Wireless Telecommunications, or Wireline Competition—and one office—Engineering and Technology. These four units had the most dockets during the period of our analysis. These dockets remain open for a variety of reasons. According to FCC officials, rulemakings may be completed within a docket even if the docket remains open. As we have previously discussed, one docket that has remained open since 1980 includes a number of NPRMs and issued rulemakings. We selected four open dockets as case studies, each of which originated in a different FCC unit and had been open for a different length of time. Table 2 provides an overview of these dockets. According to FCC officials, there is no way to determine exactly why a docket or a rulemaking remains open.
While the chairman sets the FCC's agenda, the commission decides when to open and close dockets and when action will be taken on specific rulemakings, so whether or not a docket and rulemaking remain open is ultimately a commission decision. However, certain factors may contribute to dockets and rulemakings remaining open, including the following:

Broad dockets. Some dockets may remain open because FCC designed them to be broad with multiple rulemakings. For example, FCC officials involved in the Internet protocol services case study told us that this docket was created to encompass a variety of issues related to this topic. Specifically, the commission wanted to initiate a rulemaking that looked at a number of issues related to Internet protocol services and anticipated that the docket would be open for years and would include a broad NPRM followed by a number of rules. FCC has already completed four rules within this open docket, including rules related to 911 service and voice-over-Internet-protocol service.

Housekeeping. Some dockets may remain open even though the issue(s) within the docket has been addressed by a rulemaking. For example, in the airport terminal use case study, the docket remains open even though the issue raised in the NPRM has been addressed with a rule. FCC officials told us that the commission must vote to formally close a docket, and the commission will generally wait until after stakeholders have had a chance to file a petition for reconsideration or challenge the rulemaking in court. As a result, dockets often remain open for a time after a rule has been issued. FCC officials also told us that an open docket is a "housekeeping" issue, and that there is no harm in having dockets remain open. Stakeholders generally agreed that having dockets remain open is not an issue.

Complex/Technical issues. Within open dockets, some rulemakings may remain open for many years because they involve complex, technical issues.
For example, the satellite coordination case study involves the technical properties of different types of satellites and involves worldwide coordination and complex decisions about a satellite's potential interference with another satellite. FCC officials also told us that satellite issues take a long time to resolve in part because of the nature of satellites, which require a worldwide frequency, a number of different applications, and millions of dollars. In the distributed transmission systems (DTS) case study, FCC officials and stakeholders told us that the issue involves complex new technology. Specifically, DTS technology would allow broadcasters to place towers around urban areas to more easily transmit digital programming. Within the rulemaking, stakeholders have submitted items such as proposed geographic locations for siting the towers to implement DTS, proposed criteria for determining if the towers are interfering with other broadcasts, and procedures to allow for potential additional transmitters without interference with adjacent channels.

Competing priorities. Some rulemakings may remain open because other rulemakings take precedence and the number of staff available to work on rulemakings is limited. For example, FCC officials told us that they worked on some rulemakings that are more important to the transition to digital television instead of the DTS rulemaking because Congress has set a deadline for the end of the digital television transition. FCC officials decided to focus their staff resources on more important rulemakings, especially since only a few companies have applied to use DTS since FCC adopted the interim DTS policy. All of the stakeholders involved in the DTS rulemaking with whom we spoke agreed with FCC officials and told us that other issues related to digital television are more important than establishing permanent DTS rules.
FCC officials also told us that the issues in the DTS case study are similar to other rulemakings on the transition to digital television. Since the staff with expertise in digital television cannot work on every rulemaking, FCC officials have to prioritize the rulemakings on which they work. As a result, some rulemakings, such as DTS, remain open. Stakeholders generally told us that they are not concerned about the number of open dockets. According to these stakeholders, an open docket is not important; what matters is whether the rulemaking has been addressed. However, as we have previously discussed, stakeholders also told us that their views on the length of the rulemaking process could change depending on whether or not they favor the proposed rule. Specifically, supporters of an issue generally prefer a quick rulemaking, while opponents are likely to favor a lengthy rulemaking process. As a regulatory agency, FCC is routinely lobbied by stakeholders with a vested interest in the issues FCC regulates. It is critical that FCC maintain an environment in which all stakeholders have an equal opportunity to participate in the rulemaking process and that the process is perceived as fair and transparent. Situations where some, but not all, stakeholders know what FCC is considering for an upcoming vote undermine the fairness and transparency of the process and constitute a violation of FCC’s rules. Since the success of lobbying for a particular issue can be highly dependent on whether an issue is being actively considered, FCC staff who disclose nonpublic information about when an issue will be considered could be providing an advantage to some stakeholders, allowing them to time their lobbying efforts to maximize their impact. As a result, FCC may not hear from all sides of the issue during an important part of the rulemaking process. 
This imbalance of information is not the intended result of the Communications Act, and it runs contrary to the principles of transparency and equal opportunity for participation established by law and to FCC’s own rules that govern rulemaking. To ensure a fair and transparent rulemaking process, we recommend that the Chairman of the Federal Communications Commission: Take steps to ensure equal access to information, particularly in regard to the disclosure of information about proposed rules that are scheduled to be considered by the commission, by developing and maintaining (1) procedures to ensure that nonpublic information will not be disclosed and (2) a series of actions that will occur if the information is disclosed, such as referral to the Inspector General and providing the information to all stakeholders. We provided FCC with a draft of this report for their review and comment. FCC had no comment on the draft report and took no position on our recommendation. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies to the appropriate congressional committees and the Chairman of the Federal Communications Commission. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me on (202) 512-2834 or at goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix II for a list of major contributors to this report. To describe the Federal Communications Commission’s (FCC) rulemaking process, we reviewed agency documents available on FCC’s Web site that describe its rulemaking process. 
We also reviewed FCC’s internal rulemaking guidance documents and applicable laws, such as the Administrative Procedure Act of 1946 (APA) and the Communications Act of 1934. We interviewed FCC officials from the Offices of General Counsel, Managing Director, and Engineering and Technology and the Wireline Competition, Wireless Telecommunications, Media, and Public Safety and Homeland Security Bureaus to determine how FCC generally conducts rulemaking. Because we wanted to focus our review on FCC rulemaking as it applies to regulating telecommunications and media in the United States, we excluded rulemaking by FCC’s International Bureau from the scope of this study. We also interviewed organizations that represent a number of industries regulated by FCC as well as public-interest organizations to determine how these organizations participate in FCC rulemaking. Table 3 is a list of the organizations we interviewed and the principal sectors they represent. We also gathered and analyzed available data on FCC rulemaking orders published in the Federal Register between 2002 and 2006. While each rulemaking order may contain one or more rules, each order is generally referred to as a rule. Therefore, throughout the report, we referred to each rulemaking order as a rule. We used the GAO online rules database to identify FCC rules published in the Federal Register between January 1, 2002, and December 31, 2006, and compiled a list of those rules contained in the database as of February 1, 2007, which we refer to as “recent rules.” We compiled only those rules FCC had identified as “significant/substantive” and excluded rules labeled as “routine/info./other.” We then used the Federal Register to identify each rule’s FCC document number and the number of its associated docket. 
With those numbers, we were able to use FCC’s Web-based document databases—the Electronic Comment Filing System and the Electronic Document Management System—to retrieve FCC rulemaking documents, such as Notices of Proposed Rulemaking (NPRM), reports and orders (R&O), and comments and ex parte filings from public participants. NPRMs and R&Os include the dates they were adopted and released. Using these sources, for each recent rule, we were generally able to determine the rule’s docket; the originating bureau; the dates the rule was adopted, released, and published in the Federal Register; and the dates the first NPRM was released in the rule’s associated docket. Although some dockets contain multiple NPRMs, we did not analyze each rule to attempt to determine which specific NPRM(s) the rule was associated with, as it was not always clearly stated in the rules and a content comparison between each rule and each NPRM could not have been completed within the time frame for this study. We analyzed these NPRMs to determine generally why FCC initiated rulemaking. We reviewed selected U.S. Courts of Appeals opinions that addressed challenges to these rules. We identified appeals court opinions published between January 1, 2002, and June 1, 2007, using legal research databases, FCC’s Web site, and the Web site for the U.S. Court of Appeals for the D.C. Circuit. Using citations in these cases, we identified which cases were related to the “recent rules” we have previously identified. We then gathered and analyzed the published opinions related to those recent rules and identified the court’s decision in each case. We also obtained from FCC a list of ongoing court challenges to their rules. To determine the extent to which FCC followed its rulemaking process, we selected for case study four rules that were completed between 2002 and 2006. 
Of the 240 rules FCC completed during that time, 190 rules originated in the Media, Wireless Telecommunications, or Wireline Competition Bureaus or in the Office of Engineering and Technology. We selected one rule from each of these units. We also based our selection of rules on why the rules were initiated, how long they took to complete, and whether they were challenged in court. For each rule, we reviewed and analyzed the rulemaking records and interviewed FCC officials and stakeholders involved in the rulemakings. We used information from these case studies to illustrate examples of FCC rulemaking; however, the findings in our case studies cannot be generalized to all FCC rulemakings. In addition to these case studies, we interviewed stakeholders who represented different sectors of the telecommunications field, including wireless providers, satellite providers, and public safety and consumer groups. We also interviewed FCC officials in each of the four units to obtain information on general experiences with the FCC rulemaking process. To identify the factors that contributed to dockets and rulemakings remaining open, we reviewed FCC’s list of dockets in the December 2006 Federal Register’s Unified Agenda of Federal Regulatory and Deregulatory Actions. The Unified Agenda is published every 6 months and includes a list of dockets that FCC considers to be open. As of December 2006, FCC had 133 open dockets, 99 of which originated from 3 bureaus—Media, Wireless Telecommunications, or Wireline Competition—or FCC’s Office of Engineering and Technology. Of these 99 open dockets, 1 began in 2006—making it too recent to include in our analysis. As a result, we analyzed 98 dockets. From those 98 dockets, we selected 4 dockets for case study. We selected dockets that originated in different FCC bureaus or offices, were initiated for different reasons, and had been open for various lengths of time. 
Each docket may contain one or more rulemakings, and we analyzed each docket and the rulemakings within each docket. We reviewed and analyzed the rulemaking records and interviewed FCC officials and stakeholders involved in the rulemakings to determine why the dockets and rulemakings remained open. We used information from these case studies to illustrate examples of FCC rulemaking; however, the findings in our case studies cannot be generalized to all FCC rulemakings. We determined that the data used in this report were sufficiently reliable for the purposes of our review. We conducted our review from October 2006 through July 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, individuals making key contributions to this report include Tim Bober, Lauren Calhoun, Maria Edelstein, Bess Eisenstadt, Edda Emmanuelli-Perez, Andrew Huddleston, Sara Ann Moessbauer, Josh Ormond, John W. Shumann, Tristan T. To, and Mindi Weisenbloom.

The Federal Communications Commission (FCC) is charged with regulating interstate and international communications by radio, television, wire, satellite, and cable. The Telecommunications Act of 1996 established that FCC should promote competition and reduce regulation to secure lower prices and higher-quality services for American consumers. FCC implements its policy aims through rulemaking, whereby the agency notifies the public of a proposed rule and provides an opportunity for the public to participate in the rule's development. These rulemakings are documented within a public docket that contains the rulemaking record. In response to a congressional request on FCC rulemaking, GAO (1) described FCC's rulemaking process; (2) determined, for specific rulemakings, the extent to which FCC followed its process; and (3) identified factors that contributed to some dockets and rulemakings remaining open.
GAO reviewed recent FCC rules, interviewed FCC officials and stakeholders, and conducted case studies of rulemakings. FCC's rulemaking process includes multiple steps as outlined by law, with several opportunities for public participation. FCC generally begins the process by releasing a Notice of Proposed Rulemaking and establishing a docket to gather information submitted by the public or developed within FCC to support the proposed rule. Outside parties may meet with FCC officials but must file a disclosure in the docket, called an ex parte filing, that includes any new data or arguments presented at the meeting. FCC analyzes information in the docket and drafts a final rule for the commission to adopt. The FCC chairman decides which rules the commission will consider and whether to adopt them by vote at a public meeting or by circulating them to each commissioner for approval. Stakeholders unsatisfied with a rule may file a petition for reconsideration with the commission or petition for review in federal court. FCC generally followed the rulemaking process in the four case studies of completed rulemakings that GAO reviewed, but several stakeholders had access to nonpublic information. Specifically, each of the four rulemakings included steps as required by law and opportunities for public participation. Within the case studies, most ex parte filings complied with FCC rules. However, in the case studies and in discussions with other stakeholders that regularly participate in FCC rulemakings, multiple stakeholders generally knew when the commission scheduled votes on proposed rules well before FCC notified the public. FCC rules prohibit disclosing this information outside of FCC. Other stakeholders said that they cannot learn when rules are scheduled for a vote until FCC releases the public meeting agenda, at which time FCC rules prohibit stakeholders from lobbying FCC. 
As a result, stakeholders with advance information about which rules are scheduled for a vote would know when it is most effective to lobby FCC, while stakeholders without this information would not. The complexity and number of rulemakings within a docket and the priority the commission places on a rulemaking contribute to dockets and rulemakings remaining open. The commission determines when to open and close a docket and which rulemakings are a priority; therefore, the commission determines how a docket and rulemaking progress. Dockets and the rulemakings within them may remain open because the dockets are broad and include multiple rulemakings, or because the commission has not yet voted to close the dockets even though they include completed rules. Within dockets, some rulemakings may remain open because they involve complex, technical issues or because competing priorities can force FCC officials to work on one rulemaking as opposed to another. Stakeholders generally said they are not concerned about the number of open dockets. |
Section 8 rental housing assistance, managed by the Department of Housing and Urban Development (HUD), is the main form of federal housing assistance for low-income tenants. In fiscal year 1997, it had expenditures totaling $16.4 billion. Under the Section 8 program, residents in subsidized units generally pay 30 percent of their income for rent and HUD pays the balance. The Section 8 program provides rental assistance tied to specific property units (project-based assistance) and to families and individuals who live in affordable rental housing of their choice, as long as the units meet HUD’s rent and quality standards (tenant-based assistance). According to HUD data, in fiscal year 1997, the tenant-based and project-based programs each served approximately 1.4 million households. This report focuses on issues concerning the project-based rental assistance program, including the project-based assistance associated with housing for the elderly and disabled. HUD has estimated a growing need for Section 8 project-based funding over the next 5 years to cover the costs of renewing expiring Section 8 rental assistance contracts and of providing additional funding to existing Section 8 contracts that lack sufficient funds to cover payments for the full term of the contracts. As a result, the Congress has become increasingly concerned that HUD have effective systems in place to identify unexpended Section 8 funds that can be used to offset future funding needs. The Section 8 housing assistance program, named for the revised section 8 of the U.S. Housing Act of 1937, was originally established by the Housing and Community Development Act of 1974 (P.L. 93-383). Section 8 rental assistance is generally limited to families whose incomes are at or below 50 percent of the area’s median income and to rental units that meet HUD or local standards for decent, safe, and sanitary housing. 
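The tenant/HUD split described above (the resident generally pays 30 percent of income for rent and HUD pays the balance) can be sketched as a simple calculation. This is an illustrative simplification, not HUD's actual method: the function name is hypothetical, gross income stands in for HUD-adjusted income, and minimum-rent and other program rules are omitted.

```python
def section8_split(monthly_income, contract_rent):
    """Illustrative split of a unit's rent between tenant and HUD.

    Simplifying assumption: the tenant pays 30 percent of monthly income,
    capped at the contract rent, and HUD subsidizes the balance. The real
    program uses HUD-adjusted income and several additional rules.
    """
    tenant_share = round(min(0.30 * monthly_income, contract_rent), 2)
    hud_subsidy = round(contract_rent - tenant_share, 2)
    return tenant_share, hud_subsidy

# A household with $1,000 in monthly income renting a $700 unit:
# the tenant pays $300 and HUD pays the remaining $400.
tenant, hud = section8_split(1000, 700)
```

Under this sketch, once 30 percent of a household's income exceeds the contract rent, the subsidy falls to zero, which is consistent with the program's targeting of low-income households.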
In the project-based program, assistance is tied to specific housing units under an assistance contract, rather than to the families themselves, and is therefore referred to as "project based." HUD generally contracts directly with, and provides rental subsidies to, the owners of private rental housing; in some cases, HUD contracts with state finance agencies that are responsible for administering the rental assistance program for low-income residents. Typically, the initial contracts were for 15, 20, or 40 years. In recent years, the Congress has generally preferred to provide new Section 8 rental assistance in the form of tenant-based assistance. However, the Congress continues to provide funding to renew existing Section 8 project-based contracts as they expire and to amend contracts with insufficient funding to meet their contract terms. The Congress also continues to provide new project-based assistance for properties funded under the Section 202 housing for the elderly program and the Section 811 housing for the disabled capital advance program. According to HUD, about 24,000 active Section 8 project-based contracts covered about 1.4 million property units as of September 30, 1997. These contracts are associated with four main programs and several smaller programs. The four principal project-based programs are the (1) New Construction/Substantial Rehabilitation Program, (2) Elderly/Disabled Program, (3) Loan Management Set Aside Program, and (4) Property Disposition Program. These programs are described below.

New Construction/Substantial Rehabilitation Program. The purpose of this program was to encourage developers to build or rehabilitate projects for lower-income families by providing rental assistance contracts for a negotiated number of units in a project for periods ranging from 15 to 40 years. The program was established in 1974 and repealed by the Congress in 1983 because of its high cost.
Thus, funding for new project-based rental assistance contracts associated with newly constructed or rehabilitated properties was discontinued in 1983, except for new contracts associated with housing for the elderly and disabled.

Elderly/Disabled Program. Since fiscal year 1992, HUD's programs for the elderly and disabled have provided property development funding to sponsors of low-income housing through capital advances and project-based rental assistance contracts. The sponsors do not have to repay the advances as long as they continue to meet HUD's requirements for keeping rents affordable. Thus, the rental assistance contracts need to subsidize only operating costs because no mortgages are associated with the properties. The contracts under the current program are not funded under the same appropriations account as the Section 8 rental assistance program, but the project-based assistance under these programs is substantially the same as Section 8 project-based assistance, except that the subsidy is limited to operating costs. HUD includes these contracts in its inventory of Section 8 project-based contracts. New contracts currently being issued for project-based assistance for properties for the elderly and disabled are generally issued for 5- or 20-year terms, depending upon when the project was initially approved.

Loan Management Set Aside Program. This program was developed to provide Section 8 rental assistance to financially troubled projects. Section 8 contracts under this program were initially for 15-year terms. These contracts began expiring during the 1990s and required renewal funding. No new loan management set aside Section 8 contracts have been issued since fiscal year 1994.

Multifamily Property Disposition Program. The purpose of this program is to facilitate the sale or transfer to new owners of properties acquired through foreclosures on defaulted loans insured by the Federal Housing Administration.
Legislation enacted in 1988 required HUD to preserve some of the units in these properties as affordable housing for low- to moderate-income households. HUD satisfied this requirement by providing project-based rental assistance under 15-year Section 8 contracts with the new owners. In 1995, HUD stopped entering into new project-based contracts for property disposition and began using Section 8 vouchers and certificates under the tenant-based program instead. However, some new project-based contracts with 15-year terms will be executed in the future as a result of a demonstration program that HUD implemented in 1994 to test the feasibility of tenant ownership options at foreclosed properties. Other Section 8 programs include the (1) housing preservation program, (2) project-based tenant protection program, and (3) community investment demonstration program, also referred to as the pension program. These programs are described below.

Housing Preservation Program. From 1987 to 1996, HUD issued project-based rental assistance contracts under the housing preservation program. The Congress established the program to avoid displacing lower-income households and losing affordable housing stock. These consequences were anticipated as the owners of federally insured properties, developed during the 1960s and 1970s, were approaching eligibility to pay off their mortgages. Once they have paid off their mortgages, owners no longer have to meet existing operating restrictions, such as limits on residents' income levels and the rents that can be charged. HUD provided project-based rental assistance as one of the incentives for owners to continue low-income restrictions. To reduce excessive program costs, the Congress discontinued the use of this incentive by 1997 and terminated the preservation program in its entirety in fiscal year 1998.

Project-Based Tenant Protection Program.
This program provides vouchers or certificates to eligible households who face displacement or rent increases for various reasons, such as the owners’ opting out of the Section 8 project-based program or HUD’s terminating the project-based assistance because owners failed to comply with housing quality standards. Under this program, HUD’s Office of Housing receives appropriations but transfers the funding to HUD’s Office of Public and Indian Housing, which provides the tenant-based assistance.

Community Investment Demonstration Program. This program, referred to as the pension fund program, was created by the Congress in 1993 to demonstrate how the leveraging of HUD’s resources can encourage pension funds to invest in the production and preservation of affordable housing. Six participating pension funds make or purchase uninsured loans to finance the construction or rehabilitation of multifamily rental housing for lower-income families. To reduce the risks incurred by the pension funds, HUD uses Section 8 project-based funds, with rents limited to 120 percent of the local fair market rent, under contracts of up to 15 years.

HUD receives funding (budget authority) for the project-based program primarily to pay for contract renewals as well as for contract amendments to fund contracts that do not have sufficient funds to make payments for the full term of the contract. The original long-term contracts that were entered into in the 1970s and 1980s began expiring in the early 1990s. The Congress and HUD have worked together to fund renewals for all of these contracts. Renewals are now funded for 1 year. While some contracts have more funding than is needed because expenditures have been less than anticipated, other contracts are underfunded and need contract amendments to provide funding for the full term. This need arises when the initial funding was not sufficient to provide adequate rental assistance over the life of the contract.
In 1996, the Congress revised the Section 8 program to permit HUD to transfer any remaining budget authority from expired or terminated Section 8 project-based assistance contracts to other housing assistance contracts. Prior to this change, HUD’s authority to use recaptured budget authority from expired or terminated contracts had to be treated in accordance with the terms of the annual appropriations acts. Because an increasing number of Section 8 project-based contracts are coming due for renewal and because of the need to provide amendment funding to existing contracts, HUD has estimated a growing need for budget authority. Specifically, as shown in table 1.1, HUD received $2.4 billion for Section 8 project-based funding in fiscal year 1997 and estimates this need will grow to $6.8 billion in fiscal year 2003. The future outlays associated with the program are estimated to remain relatively constant, ranging from $8.6 billion in fiscal year 1999 to $8.8 billion in 2003. HUD’s Budget Office was unable to provide the funding and outlay data associated with the Section 8 project-based rental assistance program prior to fiscal year 1997. According to an official in that office, such information could not be provided because the appropriations for the tenant-based and project-based programs are provided in one lump sum, and the Department has not tracked the two programs separately. Furthermore, until fiscal year 1998, HUD did not separately track new appropriations and carryover balances from prior years. The Section 8 tenant-based program and the moderate rehabilitation program are managed by the Office of Public and Indian Housing, and the project-based program is managed by the Office of Housing. However, as part of the implementation of the Department’s 2020 Plan, HUD is currently in the process of establishing a Section 8 Financial Management Center that will centralize the management of the Section 8 programs under the Office of Public and Indian Housing. 
The center, located in Kansas City, Missouri, will serve as the focal point for the administrative services necessary to support all Section 8 contracts, including both tenant-based and project-based contracts. Under the plan, contract management responsibilities for most Section 8 project-based contracts would be handled in the same manner as they currently are for the tenant-based program—that is, contract management responsibilities would be delegated to state and local public housing or housing finance agencies that will administer the contracts on behalf of HUD. Currently, most project-based contracts are in the form of housing assistance payment contracts that are administered by HUD personnel. These contracts are to be converted into annual contributions contracts administered by a public housing agency or a housing finance agency. HUD says that state housing agencies currently administering Section 8 tenant-based programs would be offered an opportunity to administer an annual contributions contract for all the remaining project-based contracts in the state if they have the administrative capacity to do so. While the plan initially estimated converting 95 percent of the project-based contracts to annual contributions contracts by the end of fiscal year 1998, HUD officials now expect to complete the conversion by the end of fiscal year 1999. This report was prepared to comply with the requirements of the 1997 Emergency Supplemental Appropriations Act (P.L. 105-18, June 12, 1997), which requested that GAO study HUD’s systems for budgeting and accounting for Section 8 rental assistance funds to determine whether HUD’s systems ensure that unexpended Section 8 funds do not reach unreasonable levels and that obligations are spent in a timely manner. 
This report examines the Section 8 project-based assistance program, particularly (1) the categories and amounts of unexpended rental assistance funds and (2) the effectiveness of HUD’s processes to evaluate unexpended Section 8 project-based balances, ensure they do not reach unreasonable levels and are spent in a timely manner, and take unexpended balances into account when determining funding needs as part of HUD’s budget process. In addition, chapter 1 of the report provides HUD’s estimate of future funding trends for the Section 8 project-based program for fiscal years 1999 through 2003. To identify unexpended Section 8 project-based balances, we obtained information on the balances as of September 30, 1997, from HUD’s Program Accounting System (PAS), which HUD reported as being in compliance with the Federal Managers’ Financial Integrity Act. We reviewed the PAS documentation to confirm that we were provided with complete information. We did not perform a reliability assessment of these data. However, HUD’s Office of the Inspector General has examined funding and expenditure data as part of its financial statement audit for fiscal years 1996 and 1997 and has not identified data errors that were material to HUD’s financial statements. In addition, HUD’s Office of the Chief Financial Officer has retained a contractor to evaluate the documentation supporting the PAS data (as well as the Tenant Rental Assistance Certification System (TRACS) data discussed below) to determine its reliability. The contractor’s review is based on a random sample of 100 Section 8 project-based contracts. As of June 1998, the review was still under way. We also reviewed budget allotment and apportionment data as of September 30, 1997, and data for the first quarter of fiscal year 1998 to ensure that we had included all relevant unexpended balances.
The balances include those for project-based rental assistance contracts for housing for the elderly and disabled, which are included in HUD’s inventory of Section 8 contracts. To evaluate the unexpended Section 8 project-based balances and to ensure they do not reach unreasonable levels, we analyzed HUD’s data to identify the balances associated with contracts that had expired on or before September 30, 1996, and with those contracts with future expiration dates but no expenditures from March through September 1997. The latter category includes contracts that have been terminated. In addition, we obtained information from Office of Housing officials at headquarters and field office locations concerning the status of unexpended balances associated with the elderly/disabled, property disposition, and pension fund programs because of issues associated with these programs, such as program changes that affect the need for existing funds. Additionally, we examined HUD’s reports on the status of funds on inactive projects (also referred to as aging reports) and other documentation provided by the Office of the Chief Financial Officer to analyze selected inactive, expired, and pending contracts. However, we did not conduct a systematic analysis of all of HUD’s Section 8 project-based unexpended balances to identify funds that were no longer needed. To examine the effectiveness of HUD’s procedures to evaluate unexpended Section 8 project-based balances to ensure they do not reach unreasonable levels and are spent in a timely manner, we reviewed and analyzed HUD’s annual certification process for Section 8 project-based balances and reviewed HUD’s Budget Forecast System (BFS) model, which the Department uses to estimate Section 8 amendment needs for budgeting purposes, as well as various analyses produced by the model. For the certification process, we reviewed HUD’s handbook and other relevant documents, including memorandums and various accounting reports.
We interviewed HUD officials at headquarters and six field offices (located in Chicago, Illinois; Dallas and Fort Worth, Texas; Denver, Colorado; New York, New York; and Seattle, Washington). In addition, we reviewed reports by HUD’s Office of the Inspector General, as well as the supporting workpapers, on the 1997 year-end certification process and discussed the report’s findings with officials in the Office of the Inspector General. To examine the effectiveness of HUD’s processes to take unexpended balances into account when determining funding needs as part of its budget process, we evaluated HUD’s BFS model. We met with HUD and contractor officials to obtain information on the purpose of the model, its methodology, and the analyses produced. We obtained the supporting data files and examined the model’s input and output to determine if the model was working as intended. The funding estimates produced by the model are in nominal dollars, not adjusted for inflation. We reviewed five different analyses and worked with HUD throughout the review to correct the errors in data and methodology that we identified. We also reviewed HUD’s fiscal year 1999 budget request for amendment funding for the Section 8 project-based program. This request was supported by an April 1997 BFS analysis. Because our review focused on the unexpended balances for the existing portfolio of project-based rental assistance contracts, we did not examine HUD’s budget request for Section 8 contract renewals. We also did not assess how HUD’s Section 8 Financial Management Center would oversee unexpended balances for Section 8 project-based contracts, such as how the annual reviews of unexpended balances will be conducted, because this aspect of the Center’s operations was in the early planning stage at the time of our review. 
To provide information on funding trends in the Section 8 project-based rental assistance program, we requested historical data from HUD on the budget authority and outlays associated with the program. However, HUD could provide this information only for fiscal years 1997 and 1998, along with the amounts in the fiscal year 1999 budget for fiscal years 1999 through 2003. We provided a draft copy of this report to HUD for its review and comment. HUD provided written comments on the draft, and these comments are presented and evaluated in chapter 3 and appendix II. We conducted our work from August 1997 through June 1998 in accordance with generally accepted government auditing standards. As of September 30, 1997, HUD had available about $59.1 billion in unexpended Section 8 project-based funds. About $55.4 billion of the unexpended balances was obligated to about 31,000 Section 8 contracts. HUD also had about $3.7 billion in unobligated Section 8 project-based balances. These balances consisted of about $3 billion reserved for specific contracts but not yet obligated and about $.7 billion in unreserved funds that carried over into fiscal year 1998. While we did not conduct a comprehensive analysis of all of HUD’s unexpended balances for Section 8 project-based rental assistance, we identified about $517 million that is no longer needed because the contracts expired, were terminated, or were never executed. In addition, we identified other balances for which the continued need is questionable, such as $79 million that HUD has assigned to the property disposition program, even though the Department discontinued the use of project-based assistance for the program in 1995 and instead uses tenant-based assistance. We identified three categories of Section 8 unexpended balances, which we used to analyze the status of existing funding balances as of September 30, 1997. 
Broadly stated, the funds are (1) obligated to specific Section 8 contracts, (2) reserved for specific Section 8 contracts, or (3) totally unobligated. The first category, called “undisbursed obligations,” is the amount of funds obligated to the Section 8 contracts but not yet disbursed. This category includes balances for both active and inactive Section 8 contracts. HUD’s Office of the Chief Financial Officer also uses the term “undisbursed contracts” to describe this category of funds. In the second category, referred to as “unobligated but reserved,” HUD has funding that has been reserved for specific Section 8 contracts but has not yet been obligated to them. This category includes Section 8 funding for properties for which Section 8 contracts have not yet been executed, such as properties that are still being planned or are under development or construction. It also includes reservations associated with active and inactive contracts that have already had funds obligated to them. HUD uses the terms “uncontracted reservations” and “unobligated reservations” to describe this category of funding. The third category, “unobligated and unreserved” funds, is the amount of budget authority that HUD has received for Section 8 project-based programs but has not yet reserved or obligated for specific contracts. HUD refers to this funding as “unassigned allotments” and “unreserved assignments.” These amounts are also referred to as carryover funds at the end of the year because they become available for reservations and obligations in the next fiscal year. As of September 30, 1997, HUD had available about $59.1 billion in unexpended Section 8 project-based funds. Table 2.1 presents the $59.1 billion in the three funding categories and associates the funds with the Section 8 project-based programs. 
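The three-way split described above can be checked arithmetically. A minimal sketch using the report’s rounded figures as of September 30, 1997 (the variable and category names are ours, not HUD’s accounting terms):

```python
# Rounded figures from the report, in billions of dollars.
balances = {
    "undisbursed obligations": 55.4,    # obligated to about 31,000 contracts
    "unobligated but reserved": 3.0,    # reserved for specific contracts
    "unobligated and unreserved": 0.7,  # carryover ("unassigned allotments")
}

total = sum(balances.values())
print(f"Total unexpended Section 8 project-based funds: ${total:.1f} billion")
# The three categories sum to the report's overall figure of about $59.1 billion.
assert abs(total - 59.1) < 0.05
```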
About $55.4 billion of the funding represents undisbursed obligations; about $3 billion represents funds that are unobligated but reserved; and about $.7 billion represents unobligated and unreserved funds. As shown in table 2.1, undisbursed obligations constituted the largest segment of unexpended Section 8 project-based balances as of September 30, 1997. About $55.4 billion, or 94 percent of the total unexpended fund balance of $59.1 billion, is associated with about 31,000 Section 8 contracts. About $32.7 billion (59 percent) of the $55.4 billion was allocated to contracts supporting rents at family properties developed under the new construction/substantial rehabilitation program. HUD programs serving the elderly and disabled under the new construction/substantial rehabilitation loan program ($12.5 billion) and the elderly and disabled capital advance program ($2.5 billion) account for the second largest portion of HUD funds, about $15 billion in total. The loan management set aside program accounted for about $4.1 billion (7 percent) in unexpended balances, while about $2.5 billion (5 percent) in funds was associated with properties covered through property disposition programs. The remaining $1 billion (2 percent) was for other programs, such as the housing preservation, pension fund, and tenant protection programs. While most of the undisbursed obligations are needed to fulfill HUD’s Section 8 funding commitments over the remaining life of each contract, funding in excess of contractual needs has accumulated in some cases. Specifically, for 1,085 contracts that expired on or before September 30, 1996, we identified about $345 million in undisbursed obligations as of September 30, 1997. About 900 of these contracts, with balances totaling about $218 million, expired during 1994 or earlier. These balances generally remained because rental assistance payments were lower than HUD anticipated when the contracts were funded.
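The program-level shares of the $55.4 billion in undisbursed obligations quoted above can be verified in the same way. A small illustrative sketch using the report’s rounded figures (the program labels are our shorthand):

```python
# Program-level breakdown of undisbursed obligations, in billions of dollars.
programs = {
    "new construction/substantial rehabilitation (family)": 32.7,
    "elderly/disabled loan program": 12.5,
    "elderly/disabled capital advances": 2.5,
    "loan management set aside": 4.1,
    "property disposition": 2.5,
    "other (preservation, pension fund, tenant protection)": 1.0,
}

total = 55.4
for name, amount in programs.items():
    print(f"{name}: ${amount} billion ({amount / total:.0%})")

# The rounded components sum to about $55.3 billion, within rounding
# of the report's $55.4 billion total.
assert abs(sum(programs.values()) - total) < 0.2
```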
As discussed in the next section, unobligated but reserved balances of $60 million are also associated with expired contracts, bringing the total balance of funding remaining on contracts that expired on or before September 30, 1996, to $405 million. Additionally, we identified 440 contracts, with $503 million in undisbursed obligations, that had future expiration dates but no disbursements during the last 6 months of fiscal year 1997. While the lack of expenditures may occur in active contracts that do not bill regularly or do not currently require a subsidy, it can also occur in contracts that have been terminated for various reasons. According to our examination of a 1997 HUD field review of existing contracts, at least $77 million was associated with contracts that were no longer in effect. Specifically, for 304 of the 440 contracts without recent expenditures, HUD field offices indicated that 77 of these contracts had been terminated for various reasons. The other 227 contracts were designated as either active, pending, or suspended. Included in the active category were 104 contracts with property owners serving the elderly or disabled, with about $100 million in Section 8 balances, that were not disbursing funds at all because the owners had not requested rental assistance payments. According to Office of Housing officials, in some cases a project’s costs are low enough to be supported by residents’ incomes without the need for the HUD subsidy. They also noted that contracts may go through periods when owners either do not file for reimbursement or submit claims that are lower than the projected annual requirements for rental assistance. Finally, a substantial number of other contracts are likely to have unexpended balances remaining when the contracts expire. For these contracts, the actual subsidies required are less than those HUD anticipated as being needed when funds were obligated to the contract. 
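The two screens applied in this analysis, balances remaining on contracts that had already expired, and unexpired contracts with no disbursements during the last 6 months of the fiscal year, can be sketched as simple filters. The contract fields and sample records below are hypothetical stand-ins, not HUD’s actual PAS data:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional, Tuple

# Illustrative contract record; fields are hypothetical, not HUD's PAS layout.
@dataclass
class Contract:
    contract_id: str
    expiration: date
    undisbursed: float                  # remaining obligated balance, in dollars
    last_disbursement: Optional[date]

AS_OF = date(1997, 9, 30)            # balances examined as of this date
EXPIRY_CUTOFF = date(1996, 9, 30)    # "expired on or before" screen
ACTIVITY_CUTOFF = date(1997, 3, 31)  # start of the 6-month activity window

def flag_questionable(contracts: List[Contract]) -> Tuple[List[Contract], List[Contract]]:
    """Apply the two screens described in the text."""
    expired = [c for c in contracts
               if c.expiration <= EXPIRY_CUTOFF and c.undisbursed > 0]
    inactive = [c for c in contracts
                if c.expiration > AS_OF
                and (c.last_disbursement is None
                     or c.last_disbursement < ACTIVITY_CUTOFF)]
    return expired, inactive

sample = [
    Contract("A-001", date(1994, 6, 30), 250_000, date(1994, 5, 1)),     # expired, funds remain
    Contract("B-002", date(2001, 1, 31), 1_200_000, date(1996, 12, 1)),  # no recent disbursements
    Contract("C-003", date(2000, 3, 31), 800_000, date(1997, 8, 15)),    # active
]
expired, inactive = flag_questionable(sample)
print([c.contract_id for c in expired])   # ['A-001']
print([c.contract_id for c in inactive])  # ['B-002']
```

Note that, as the text cautions, the second screen only flags candidates for review: an unexpired contract without recent disbursements may be terminated, but it may also simply be an active contract whose owner has not billed.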
In chapter 3, we discuss HUD’s efforts to identify such balances and to compare them with the amounts needed to fund current contracts that lack sufficient funding to cover payments for the full term of the contracts. Approximately $3 billion of the $59.1 billion in unexpended balances fell into the category of unobligated but reserved funds. HUD has reserved most of these funds for future contracts associated with (1) the elderly and disabled capital advance programs, (2) renewals and amendments of existing Section 8 contracts, (3) property disposition programs, and (4) other programs such as the pension fund program. While we did not analyze all of these balances in detail, we did identify some cases in which unneeded funds have accumulated, such as $60 million associated with expired contracts and about $35 million associated with contracts that HUD never executed for various reasons, such as the property’s not being constructed. The elderly and disabled capital advance programs accounted for about $1.3 billion of the $3 billion in unobligated but reserved funding as of September 30, 1997. In all, about 1,100 Section 8 contracts, which will be for 5 or 20 years, depending upon when HUD reserved the funds, had not yet been executed. According to Office of Housing officials, considerable time usually elapses between the date funds are reserved for an approved project and the date that property is ready for occupancy. HUD officials said that in some cases it has taken 7 or more years to complete planning, development, and construction—at which point the Section 8 contract is executed. In addition, about $1 billion for renewing and amending existing Section 8 contracts was included in the unobligated but reserved balances as of September 30, 1997. Approximately $753 million of the total was from HUD’s 1997 appropriations, while the remainder, about $248 million, was appropriated for fiscal year 1996 or prior years. 
The $1 billion was reserved for contracts under the new construction/substantial rehabilitation ($417 million), loan management set aside ($408 million), elderly and disabled capital advances ($131 million), and property disposition ($37 million) programs. (These funding amounts are not shown separately in table 2.1 but are included in the overall fund totals for each of the Section 8 programs.) In addition to the $37 million for the renewals and amendments of existing property disposition contracts, HUD’s property disposition programs had another $163 million in reserved funds available on September 30, 1997—for a total of approximately $200 million. About $77 million of this total was reserved for a 1994 HUD demonstration program in which a state housing finance agency agreed to administer the disposition of 11 foreclosed properties. The program requires that tenant groups receive preference in purchasing the properties. HUD expects to execute the 15-year contracts under this program within 2 years. Another $53 million in unobligated but reserved funds in the property disposition program was for 16 unexecuted contracts having funding reservation dates as far back as 1984. In the three cases we examined, totaling $6.4 million, HUD either could not identify the property or told us that the new owners decided not to participate in the Section 8 program. For example, HUD records showed that a field office had reserved about $4.6 million for a Section 8 contract in 1985 but never executed the contract because of a change in disposition plans. At another office, property disposition staff reserved about $1.4 million in August 1994 for a HUD-owned property it planned to transfer to a unit of city government. However, by August 1996, the purchaser had decided to demolish the property instead of accepting the Section 8 contract. 
In another case, HUD officials informed us that they could not identify a specific property associated with a 1991 reservation of $423,000 that we questioned. In all of these situations, HUD officials stated that the funds should have been released. In the category of “other” programs, HUD had about $170 million in unobligated but reserved funds. The largest portion of this amount, about $123 million, was for the pension fund program, which the Congress authorized in 1993 to test the feasibility of becoming partners with large pension funds in the purchase, rehabilitation, and construction of affordable housing. HUD agreed to subsidize these properties through 15-year Section 8 contracts, with rents limited to 120 percent of an area’s fair market rent. As of April 1998, six participating pension funds had submitted applications for the renovation of 42 properties. HUD has approved 24 of the proposals, and work had been completed on 15. By the end of fiscal year 1998, HUD expects that as many as 30 properties, consisting of about 3,300 units, will be financed by participating pension funds. HUD has developed a preliminary proposal for the repeal of this program; however, we were told that such action would not affect the completion of projects currently in the pipeline. In addition to the unobligated but reserved fund balances for future Section 8 contracts, many existing contracts that have already been executed have unobligated but reserved balances remaining. Our analysis of these balances showed that some are not needed. For instance, 271 Section 8 contracts that had expired on or before September 30, 1996, had about $60 million in unobligated but reserved balances remaining. We also found cases in which HUD continued to record Section 8 reservations as valid in its accounting records even though the Section 8 contracts were never executed. For example, we examined two reservations, made in 1980 and 1990, that totaled $29 million. 
The $20 million reservation recorded in 1980 had no further activity reflected in HUD’s accounting records. The cognizant field office confirmed that this reservation should have been removed from the accounting records. The reservation had been associated with a property that was planned under the new construction/substantial rehabilitation program, but the commitment for the property was never made. Similarly, we found a $9 million reservation, recorded in 1990, that was associated with a property for the elderly and disabled that the cognizant field office reported it could not identify. Funds for both of these properties remained reserved as of September 30, 1997. In addition, we found that HUD had reserved approximately $25 million of Section 8 project-based funding during the last week of fiscal year 1997 for 82 contracts previously executed under the housing preservation program. However, on the basis of HUD’s own projections of contract expenditure rates through contract expiration, it is questionable whether most of these contracts need the additional funds. As of September 30, 1997, HUD’s Section 8 project-based balances included about $.7 billion in unobligated and unreserved funds that it carried into fiscal year 1998. Most of the funds were associated with the renewals of expiring contracts, amendments for underfunded contracts, and funds for the disposition of foreclosed multifamily properties. Approximately $510 million of the unobligated and unreserved fund balance included funds for renewing and amending Section 8 project-based contracts. About $246 million of this total was from funds appropriated in fiscal year 1996 or earlier. An Office of Housing official informed us that the carryover funds are needed to fund expirations and amendments that occur during the first quarter of the fiscal year because the Office of Housing does not usually receive its fiscal year apportionments until December—or about 2 months into the fiscal year. 
Also included in the unobligated and unreserved balance was about $79 million for the disposition of foreclosed HUD properties. An Office of Housing official informed us that the office did not have an immediate need for these project-based disposition funds and had carried them over into fiscal year 1998. As discussed previously, HUD discontinued the use of project-based assistance for property disposition in 1995 and uses tenant-based assistance instead. An official overseeing HUD’s property disposition programs said these unobligated balances had stayed with the program in case HUD ever goes back to using Section 8 project-based assistance for its disposition efforts. We also found that the Office of Housing had unobligated and unreserved funds of about $52 million carried into fiscal year 1998 for the project-based tenant protection program. According to HUD’s budget director for the Office of Housing, fiscal year 1997 program activity, such as Section 8 contract terminations resulting from HUD’s enforcement actions or owners opting out of the Section 8 program, was slower than anticipated. The director indicated that the funding that was unobligated and unreserved at the end of fiscal year 1997 remains available to meet increasing tenant displacement needs that may materialize. In chapter 3, we discuss HUD’s efforts to identify unexpended balances that can be recaptured and used to help meet its future needs for Section 8 project-based funding. We also compare HUD’s estimates of Section 8 project-based amendment needs with the amount of existing Section 8 project-based funding that may be used to meet those needs. HUD uses two processes to evaluate unexpended Section 8 project-based balances to ensure that the balances do not reach unreasonable levels, are spent in a timely manner, and are taken into account in HUD’s budget process.
These processes are its annual review of unexpended balances (unliquidated obligations) and the HUD Budget Forecast System (BFS) model, which is used to estimate Section 8 amendment needs for budgeting purposes. We identified weaknesses in both of these processes. For example, some HUD offices did not conduct the annual reviews of unexpended balances, and some funds that were identified as being no longer needed were not deobligated. We also found that errors in the analyses derived from the BFS model resulted in HUD’s substantially underestimating the amount of unexpended balances that are available for recapture. More recent HUD analyses, which correct most of the problems we found in the BFS model and update information to reflect more current economic assumptions, indicate that at the end of fiscal year 1998, the Department will have about $1.5 billion in funding that could be used to meet fiscal year 1999 needs. Furthermore, these analyses do not reflect an additional $1.5 billion in funding that could be used by HUD to meet its fiscal year 1999 needs for contract amendments. HUD’s procedures for identifying and deobligating funds that are no longer needed to meet its contractual obligations do not ensure that all Section 8 project-based balances are evaluated each year and that excess balances are identified and deobligated in a timely manner. For example, we found that some offices did not perform annual reviews of unexpended balances, and some funds that were identified as no longer needed were not deobligated. These weaknesses stem from a number of factors, including limited oversight of the process by HUD’s Office of the Chief Financial Officer. Each year, the status of HUD’s unexpended balances is to be examined under a review process the Department refers to as the annual review of unliquidated obligations.
According to HUD’s handbook on incurring, recording, and adjusting obligations, the purpose of the review is to determine whether the recorded obligations should be continued, reduced, or canceled. According to HUD’s Acting Assistant Chief Financial Officer for Accounting, the annual review covering Section 8 project-based balances focuses on identifying those balances associated with contracts that are no longer active, such as balances remaining on expired or terminated contracts. The review process is based on balances as of June 30 and is to be completed by August 31. For decentralized programs such as the Section 8 project-based rental assistance program, the reviews are conducted by HUD’s program offices. The program office for the Section 8 project-based program is the Office of Housing. The reviews are coordinated by HUD’s field accounting divisions and conducted by Office of Housing staff at the various field office locations. The annual review process is to occur in four major steps. First, HUD’s field accounting divisions provide the responsible Office of Housing directors at the various field office locations with a listing of all Section 8 project-based contracts with unexpended balances that have had no financial activity for 6 or more months. Second, Office of Housing officials are to have the balances examined and report the results of their reviews to the field accounting division. These reports should specify whether each contract is (1) active, (2) completed and cancellation action has been initiated, or (3) completed and cancellation action will be initiated. For contracts for which funds are to be canceled, the Office of Housing is to provide the appropriate documentation to the field accounting division so that the remaining balances may be deobligated. Third, the field accounting divisions are to compile the results of all of the reviews and send a certification statement to HUD’s Office of the Chief Financial Officer (CFO).
The certifications are to state that the program offices were notified, in writing, of the obligations that had no financial activity for 6 months or more, and that responses were obtained from the program offices indicating whether the obligations were valid—that is, whether the balances were still needed or should be deobligated. We note that HUD’s guidance on performing the review of unliquidated obligations does not specifically define a valid obligation. However, the guidance for this review implies that a valid obligation represents one associated with an active contract. Thus, invalid obligations are those obligations associated with expired or terminated contracts. While the guidance states that the review should determine whether to continue, reduce, or cancel obligations, it does not directly address whether active contracts should be reduced if the unexpended balances are greater than projected needs. HUD’s Acting Assistant Chief Financial Officer for Accounting indicated that this type of analysis is optional. The certification is also to indicate, as appropriate, that efforts were made to obtain responses from program offices when no response was received within the requested time frame and attempts were made to obtain the documentation needed to deobligate unneeded obligations. The certification is also to state that the documentation of the review is available for future internal control review and audits. Finally, primarily on the basis of these and other certifications covering HUD’s other programs and activities, the Office of the CFO is to certify to the Department of the Treasury that the obligation balances in each of the agency’s appropriation accounts reflect proper existing obligations. 
We found a number of weaknesses in HUD’s annual process for identifying and deobligating Section 8 project-based funds that are no longer needed, including (1) some offices not completing the reviews and (2) funds identified for deobligation not being deobligated. These weaknesses stem from a number of factors, including limited oversight of the reviews conducted by the program offices. As a result of the weaknesses in the review process, the balances associated with expired or terminated contracts have remained in the accounting records for years after contracts have expired or been terminated. In examining the annual process for field offices under the jurisdiction of HUD’s Midwest, Southwest, and Northwest/Alaska locations, we found that in some cases the required annual reviews were not conducted by the field offices responsible for reviewing Section 8 project-based balances. In other cases, the reviews were incomplete. For example, the Southwest location, which included 10 field offices with responsibility for Section 8 project-based assistance, did not complete the reviews at all in 1997. The director of the New York field accounting division, who is responsible for the Section 8 project-based balances managed by the Southwest offices, did not disseminate the unexpended balances report because of his heavy workload. Similarly, 3 weeks after the certification statements were due to the Office of the CFO, we found that the unexpended balances reports had not been distributed to the Northwest/Alaska field offices for review because of an oversight. As a result of our September 1997 request for documentation of the reviews, however, the field accounting director had the reports distributed to the location’s three field offices with responsibility for Section 8 project-based contracts. As a result, the Seattle field office identified $3 million in Section 8 project-based funds that were no longer needed. 
We noted that while the certification letter by the director of the field accounting division for the Northwest/Alaska offices indicates that, as of September 16, 1997, some of the reviews were not yet completed, the letter from the director of the New York field accounting division does not indicate that the reports were not distributed and thus the reviews not performed. This certification letter only states—incorrectly—that appropriate HUD officials and employees had been notified of unliquidated obligations that needed to be liquidated or deobligated. HUD’s Office of the Inspector General (OIG) also found shortcomings in the review process at the two field accounting divisions it examined in 1997 as part of its annual financial statement audit of the Department. In that audit, the OIG reviewed the Department’s year-end certification process for the Denver and Chicago field accounting divisions. The OIG was to determine whether the various program office directors at these locations responded to the field accounting directors with the results of their reviews of unexpended balances. The OIG found that two multifamily housing directors did not respond at all and that one multifamily housing director provided an incomplete response. The review process does not always result in the deobligation of funds identified as no longer needed for specific Section 8 project-based contracts. For example, in 1993, the Dallas field office identified about $17 million in balances associated with expired or terminated contracts and prepared the necessary documentation to deobligate the funds. However, according to the housing management specialist responsible for the review, the balances were never deobligated because HUD staff in headquarters instructed the field office to wait until it determined whether the funds could be reprogrammed for future Section 8 program needs. In April 1996, the Congress provided HUD with authority to reuse these funds. 
As of September 30, 1997, however, these balances were still in HUD’s accounting records. For example, for properties in Texas alone, we found that as of September 30, 1997, there were 132 expired Section 8 project-based contracts with about $45 million in balances. Many of these contracts expired in the early 1990s. The New York field accounting division director also told us that Office of Housing staff have not been deobligating funds for expired contracts for a number of years because HUD headquarters has had plans to recapture these funds centrally. According to the budget director of the Office of Housing, these plans will be initiated beginning in June 1998. During its fiscal year 1997 financial statement audit, the OIG also found that funds identified for deobligation had not been processed. Specifically, the OIG found that during the annual review process for fiscal year 1997, HUD’s Chicago Housing Office identified nearly $34 million in Section 8 project-based funds associated with expired or closed contracts that were no longer needed. However, according to the OIG’s audit summary of this review, the Housing Office provided the field accounting division director with a listing of the balances that needed to be deobligated but not with the required documentation to deobligate the funds. The field accounting division’s deputy director informed the OIG that the program person responsible for completing the task had been reassigned to another area in HUD, and the deobligation documents were not prepared before the reassignment. Without the documents, the field accounting division could not deobligate the $34 million in HUD’s accounting systems. The OIG reported this deficiency to HUD in its May 21, 1998, management letter for the fiscal year 1997 financial statement audit. The OIG recommended, among other things, that the field accounting divisions ensure that all funds to be deobligated at year end are in fact deobligated. 
Weaknesses in the annual certification processes are also due in part to the fact that the Office of the CFO and the field accounting divisions provide limited oversight of the annual review process. The Office of the CFO relies upon the certifications received from the directors of the field accounting divisions in order to certify to the Department of the Treasury that all obligations at the end of the fiscal year are proper existing obligations. However, we found that the certifications relied upon do not express an opinion on the continued need for the balances and that HUD does not require the program officials who actually perform the annual reviews to certify that the balances are needed. The directors only certify that program offices were asked to perform the reviews and that they received responses from the program offices indicating that the obligations were still valid or should be deobligated. According to HUD’s Acting Assistant Chief Financial Officer for Accounting, who provided HUD’s certification to the Department of the Treasury for fiscal year 1997, the responsibility for certifying the balances actually rests with the program offices, such as the Office of Housing, and not the field accounting divisions. However, HUD’s handbook does not require that the program officials performing the reviews provide certifications on the continued need for the unexpended balances. Nevertheless, we found that some program officials were asked by the director of their respective field accounting division to provide certifications on the continued need for the balances. For example, the Midwest field accounting director requests such certifications from Housing Office officials, although some of the respondents did not provide them. However, not all accounting division directors require certifications. 
For example, the memorandum from the director of the Rocky Mountain field accounting division to program directors initiating the review for fiscal year 1997 did not request a certification from the program offices. The field accounting director acknowledged that he did not specifically ask program offices for the certification, although in his view the memorandum did imply that program directors should certify that the balances are accurate. According to the director, some offices did provide a written certification even though his memorandum did not directly ask them to do so. We also found that the certifications provided to the CFO by the directors of field accounting divisions generally used the standard certification letter provided in HUD’s review guidance. As such, the certifications did not identify which offices were covered by the certification and, most importantly, which of these offices had not completed the reviews. Thus, under this system, the Office of the CFO is unaware of deficiencies in the review process at the field accounting division and/or the program office level. For example, the Office of the CFO was not aware of the offices that had not completed the review. Specifically, the Office was not aware of the New York field accounting division’s failure to request the Southwest location—which covered 10 offices with Section 8 project-based responsibilities—to perform the fiscal year 1997 review. Nor was it aware of existing balances, such as the $20 million reservation made in 1980 for a project that was subsequently canceled but was still in HUD’s accounts as of September 30, 1997. While the primary responsibility for the reviews appropriately rests with the program offices, some oversight over the manner in which the field accounting divisions and the program offices conduct their reviews is appropriate given the reliance on their work by the Office of the CFO.
According to the Director, Office of Financial Policy and Procedures, Office of the Assistant Chief Financial Officer for Systems, the Office of the CFO does not review any documentation supporting the reviews and certifications. Furthermore, the Acting Assistant Chief Financial Officer for Accounting said it would not be appropriate to have accounting staff (field accounting divisions) evaluate programmatic decisions, such as whether to deobligate funds for specific contracts, because accounting staff do not have the necessary background to make such determinations. At a minimum, the CFO’s confirmation that the reviews have been completed and that the funds identified for deobligation have been deobligated would improve accountability. HUD also does not identify certain balances—such as those associated with expired contracts—and require the program offices to justify keeping the funds. For example, as discussed in chapter 2, we found about $517 million that is no longer needed because the contracts had expired, were terminated, or were never executed. HUD does not have effective processes in place to take unexpended balances into account when determining its needs for Section 8 project-based funding as part of its budget process. Specifically, HUD’s Budget Forecast System (BFS) model, used to estimate Section 8 amendment needs for budgeting purposes, has not provided reliable information, in part because basic quality checks on the data used in the analyses were not performed. As a result, the Department requested substantially more funding than is needed for contract amendments in its fiscal year 1999 budget request. 
More specifically, HUD requested $1.3 billion for contract amendments in fiscal year 1999, whereas more recent HUD analyses, which correct most of the problems we found in the BFS model and update information to reflect more current economic assumptions, indicate that at the end of fiscal year 1998, the Department will have about $1.5 billion in funding that could be used to meet fiscal year 1999 needs. Furthermore, these analyses do not reflect an additional $1.5 billion in funding that could be used by HUD to meet its contract amendment needs for fiscal year 1999. Each year, the Department receives funding to amend Section 8 project-based contracts that have insufficient funding. However, while some contracts do not have sufficient funding, others have more funding than is needed. HUD refers to the amount of funds remaining in such contracts at expiration as recaptures—that is, HUD can recapture and use these funds for other Section 8 contracts. Until the fiscal year 1999 budget, HUD had not factored the use of recaptures into its budget requests to offset the estimated needs for amendment funding. According to the budget director for the Office of Housing, recaptures were not factored into earlier budget requests because of data limitations that existed before HUD was able to use computerized data from the Section 8 Tenant Rental Assistance Certification System (TRACS). To estimate its amendment funding needs for fiscal year 1999, HUD added a new analysis to its BFS model. HUD contractor staff maintain and operate the BFS model. For each active Section 8 project-based contract, the BFS model compares projected expenditures over the life of the contract, adjusted for inflation, with funding that is currently available and estimates whether each contract has a funding shortfall or excess funding that can be recaptured. The model includes two categories of funding: undisbursed obligations and unobligated but reserved funds. 
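The per-contract comparison described above can be sketched as follows. This is a simplified illustration, not HUD’s actual BFS implementation; the function name, contract figures, and the flat 2.2-percent inflation assumption are hypothetical.

```python
# A minimal sketch of the BFS-style per-contract comparison described above.
# The inflation factor and contract figures are illustrative assumptions.

def contract_balance(annual_cost, years_remaining, funds_available, inflation=0.022):
    """Compare projected inflation-adjusted expenditures over the remaining
    contract term with currently available funding. A positive result is a
    funding shortfall; a negative result is excess funding (a recapture)."""
    projected = sum(annual_cost * (1 + inflation) ** year
                    for year in range(years_remaining))
    return projected - funds_available

# A contract costing $1 million a year with 10 years remaining:
shortfall = contract_balance(1_000_000, 10, 9_000_000)    # about $2.05 million short
recapture = -contract_balance(1_000_000, 10, 12_000_000)  # about $0.95 million excess
```

Applied across the portfolio, such per-contract results would aggregate into the annual shortfall and recapture estimates the model reports.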
The BFS model provides estimates, by year, of the projected shortfall amounts and of the recaptures associated with expiring contracts. The current analysis is carried through 2035, at which point HUD data indicate that all contracts in the portfolio as of September 30, 1997, will have expired. HUD incorporates a methodology referred to as “leveling” into the analysis. In this methodology, HUD spreads estimated funding shortfalls over the remaining term of the contract rather than beginning in the year the contract is projected to run out of funds. For example, for a contract costing $1 million a year with 10 years remaining and $9 million available, the $1 million shortfall would be spread out in $100,000 increments over the next 10 years, rather than being identified as a shortfall of $1 million in the tenth year. According to HUD officials, this approach enables HUD to request a consistent annual amount to fund amendments and to avoid requesting large amounts in later years. Thus, the amounts identified as shortfalls each year will include shortfalls that will actually occur in future years. According to HUD’s fiscal year 1999 budget request, the total amount of funding needed to amend Section 8 project-based contracts for fiscal year 1999 is $1.7 billion. HUD’s request also shows that this amount can be reduced by over $463 million from recaptures from expiring contracts, to a net funding need of $1.3 billion. According to HUD, the budget request was supported by an April 1997 BFS analysis. As shown in table 3.1, the funding need (shortfall amount) for fiscal year 1999 was projected to be about $1,162.8 million; this shortfall could be reduced by $540.1 million in recaptures. HUD and the Office of Management and Budget (OMB) added $500 million more to the fiscal year 1999 budget request above the funding shortfall identified in the analysis for 1999. According to HUD officials, this funding was added because of the long-term funding need for amendments. 
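The leveling methodology described above can be sketched with the report’s own example, ignoring inflation for simplicity:

```python
# The "leveling" step, using the example in the text: a contract costing
# $1 million a year with 10 years remaining and $9 million available.

def level_shortfall(total_shortfall, years_remaining):
    """Spread a projected shortfall evenly over the contract's remaining term."""
    return [total_shortfall / years_remaining] * years_remaining

annual_cost, years, available = 1_000_000, 10, 9_000_000
shortfall = annual_cost * years - available   # $1 million total shortfall
schedule = level_shortfall(shortfall, years)  # ten $100,000 increments
```

As the text notes, the effect is that each year’s reported shortfall includes portions of shortfalls that will actually occur in later years.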
The April 1997 analysis showed a long-term net funding shortfall for amendments through the year 2023 of $18.9 billion, based on funding shortfalls of $24 billion and recaptures of $5.3 billion. Table 3.1 provides excerpts from the April 1997 BFS analysis covering fiscal years 1998 through 2003 and for 2023 when all contracts were projected to be expired. We found a number of errors in the analyses produced by the BFS model that resulted in the shortfall estimates being overstated and the recapture amounts understated. We could not review the April 1997 analysis, which was based on fiscal year 1996 data and was used to support the fiscal year 1999 budget request, because HUD could not provide us with the supporting data files. However, we reviewed five different analyses from September 1997 through May 1998. These analyses were based on data through fiscal year 1997, whereas the April 1997 analysis was based on fiscal year 1996 data. We reviewed the data supporting these analyses, identified errors and methodological issues with each one, and worked with Office of Housing officials to have the errors and methodology issues corrected. The budget director for the Office of Housing said that the errors we found in the updated analyses would also occur in the April 1997 analysis. Among the errors we found with the analyses we reviewed were the following: A total of about $1.4 billion in Section 8 project-based funding provided to the contracts in fiscal year 1997 was not included in the analyses because the contractor was not told to update the BFS model to pick up funding data from new appropriation accounts for the program. HUD corrected this error after we informed officials of the problem in January 1998. Active contracts were excluded from several of the analyses because of either inaccurate expiration dates in HUD’s database of Section 8 contracts or computer programming errors.
For example, about 1,000 active contracts were excluded from an analysis on the basis of incorrect expiration dates, and 1,800 active contracts were excluded because of a programming error. HUD applied an inflation factor to 1997 data in error. HUD made this error because updating the data to fiscal year 1997 required eliminating the inflation factor that was applicable to the earlier analysis, dated April 1997, which was based on fiscal year 1996 data. However, the 1997 inflation factor was not eliminated from the analyses based on 1997 data until we identified the error. The methodology used to project future contract expenditures, referred to as the burn rate, does not accurately estimate expenditures for some contracts. The BFS model treats contract expenditures as a monthly expenditure, whereas the payments for a number of the contracts (generally those contracts managed by public housing entities, referred to as annual contributions contracts) actually reflect expenditures for either 3, 6, or 12 months, depending on the terms of the contracts. In addition, the methodology excludes some active contracts that did not receive any payments during the 6 months included in the analysis. HUD officials emphasized to us that the methodology would overstate some needs and understate others. However, the Department has not examined the overall impact of this methodology on the estimates. Our analysis of the expenditure rates indicates this problem tends to overstate expenditures to some degree. HUD officials have agreed that the methodology should be corrected. The Office of the CFO has developed a methodology for estimating Section 8 contract expenditures that links expenditure data with the time period covered by the expenditure, which appears to provide a more accurate estimate for the contracts that do not bill monthly. However, this methodology is not used in the BFS model. 
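The distortion caused by treating a multi-month payment as one month’s expenditure can be illustrated with hypothetical figures; the function and dollar amounts below are our own illustration, not the BFS model’s code:

```python
# Hypothetical illustration of the burn-rate issue described above: a
# quarterly payment treated as one month's expenditure triples the
# apparent annual expenditure rate.

def annualized_burn(payment, months_covered):
    """Annualize a payment given the number of months it actually covers."""
    return payment / months_covered * 12

quarterly_payment = 30_000                            # one payment covering 3 months
overstated = annualized_burn(quarterly_payment, 1)    # treated as monthly: $360,000 a year
corrected = annualized_burn(quarterly_payment, 3)     # period-linked: $120,000 a year
```

Linking each payment to the period it covers, as in the CFO’s alternative methodology, avoids this overstatement.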
In addition, in response to our questions about the basis for the inflation factors and about the legislatively mandated limits on Section 8 project-based rent increases, HUD updated the analyses to include more current economic assumptions and to reflect the legislatively mandated limits. Specifically, the analyses provided to us through February 1998 reflected OMB’s economic assumptions (inflation factors) for the fiscal year 1998 budget. The subsequent analyses, provided in April 1998, reflect OMB’s economic assumptions for the fiscal year 1999 budget and include an analysis that used assumptions reflecting the legislatively mandated limits on rent increases. While the errors we identified had various causes, most of them resulted from HUD’s not having adequate controls in place to ensure that the data and assumptions used in the BFS model were complete, accurate, and current, and that the data were fully reflected in the analyses produced by the model. For example, we identified a number of errors by performing basic data quality checks. Specifically, we examined the contracts excluded from the analyses to determine if any active contracts were being excluded incorrectly; we matched input to output to determine if all relevant shortfalls and recaptures were included in output; and we matched Section 8 project-based funding data from HUD’s Program Accounting System (PAS) with the funding included in the BFS analysis to determine if all funding was included in the analyses. These quality checks were neither performed by the HUD contractor nor requested by HUD officials when the contractor provided them with various analyses. In addition, Office of Housing officials did not always ensure that the contractor had all the information it needed to perform the analysis, such as information on all relevant appropriation accounts that include Section 8 project-based funding.
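The kinds of basic quality checks described above can be sketched as follows, assuming simplified inputs; the contract identifiers and dollar figures are hypothetical:

```python
# A sketch of two basic data quality checks: (1) confirm that no active
# contracts are missing from the analysis, and (2) confirm that the funding
# in the model ties to the PAS funding total. Inputs are hypothetical.

def quality_check(active_contracts, analyzed_contracts, pas_funding, model_funding):
    """Return a list of problems found by the basic checks."""
    problems = []
    missing = set(active_contracts) - set(analyzed_contracts)
    if missing:
        problems.append(f"{len(missing)} active contract(s) excluded from the analysis")
    if round(pas_funding - model_funding, 2) != 0:
        problems.append("funding in the model does not match the PAS total")
    return problems

issues = quality_check(
    active_contracts={"TX-001", "TX-002", "NY-001"},
    analyzed_contracts={"TX-001", "NY-001"},
    pas_funding=5_000_000_000,
    model_funding=3_600_000_000,  # e.g., $1.4 billion of funding omitted
)
```

Checks of this kind would have surfaced both the excluded active contracts and the $1.4 billion in omitted funding before the analyses were used.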
In April 1998, HUD provided a revised analysis that reflected the lower inflation factors OMB established for the fiscal year 1999 budget as well as the legislatively mandated limits on rent increases for certain contracts. As shown in table 3.2, this analysis estimates significantly lower shortfalls, higher recaptures, and lower net funding needs in the short term as well as the long term compared with the prior analyses—including the April 1997 analysis presented in table 3.1. Specifically, the current analysis estimates total shortfalls of $13 billion, recaptures of $11.5 billion, and a net funding need of $1.5 billion, compared with the shortfalls of $24 billion, recaptures of $5.3 billion, and net funding needs of $18.8 billion reflected in the April 1997 analysis used to support the fiscal year 1999 budget request. Furthermore, regarding the short-term funding needs, the April 1998 analysis indicates that the amount of recaptured funds that could be applied toward HUD’s fiscal year 1999 amendment needs is substantially higher than HUD estimated in its budget request. As shown in table 3.2, the analysis indicates that contracts expiring in fiscal year 1998 are estimated to have over $2.7 billion in recaptures and that contracts expiring in fiscal year 1999 will have close to $1 billion in recaptures. As discussed earlier, HUD’s budget request indicated that $463 million in recaptures were available to help offset fiscal year 1999 amendment needs. Testifying in March 1998, we pointed out that updated HUD analyses indicated that recaptures were likely to be much higher than HUD had indicated in its budget request. Accordingly, we stated that the Congress may wish to consider reducing HUD’s fiscal year 1999 request for funding to amend Section 8 project-based contracts. 
While HUD’s April 1998 analysis reflects a substantial improvement over earlier estimates, it does not present a complete and accurate picture of Section 8 project-based needs because it (1) does not reflect all of the Section 8 project-based funding the Department has available for funding shortfalls and (2) still contains some errors. Specifically, the analysis does not reflect about $1.5 billion that could be used to offset HUD’s fiscal year 1999 request for amendment funding. This total includes the following amounts: $833 million in project-based amendment funding that, according to the budget director for the Office of Housing, was appropriated to the Department for fiscal year 1998, including amounts associated with properties funded under the capital advance program for the elderly and disabled; $133 million of Section 8 project-based amendment funds that were unobligated and unreserved at the end of fiscal year 1997 and were carried over for use in 1998; and $517 million in project-based funding that we identified in chapter 2 as being no longer needed. These funds would nearly offset the net funding needs through 2035, as shown in table 3.2. In terms of errors, this analysis again excludes about 1,800 active contracts, which would further reduce funding shortfalls. Our analysis indicates that if these contracts were included, the total long-term funding need would be reduced by approximately $200 million. These contracts were excluded because (1) the contractor made an error by accidentally excluding 400 of the contracts and (2) the Office of Housing provided a file of contracts to be used in the analysis that excluded 1,400 active contracts. In addition, this analysis continues to use the methodology for estimating expenditures that tends to overstate expenditures.
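The components of the roughly $1.5 billion offset listed above sum as follows (amounts in millions of dollars):

```python
# The components of the roughly $1.5 billion offset identified above,
# in millions of dollars.
offsets_in_millions = {
    "fiscal year 1998 amendment appropriation": 833,
    "fiscal year 1997 unobligated carryover": 133,
    "balances identified in chapter 2 as no longer needed": 517,
}
total = sum(offsets_in_millions.values())  # 1,483 million, about $1.5 billion
```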
According to HUD’s Chief Financial Officer, Budget Director, and other HUD staff, the April 1998 estimate of $1.5 billion in net funding needs for Section 8 contract amendments (in table 3.2) could understate actual needs because the inflation rate is low (about 2 percent) and the analysis assumes limits on rent increases for many properties. That is, the net funding needs produced by BFS analyses vary depending upon the assumptions about future inflation rates and the limits on future rent increases. To illustrate how changes in these assumptions can affect estimates of amendment needs, HUD prepared two sensitivity analyses. The first sensitivity analysis is based on the same inflation factors as the analysis presented in table 3.2 (ranging from 1.9 to 2.2 percent), but this analysis does not include assumptions incorporating legislatively mandated limits on rent increases. This analysis projects amendment funding shortfalls for Section 8 project-based assistance of about $18 billion, recaptures of about $10.6 billion, and net funding needs of about $7.5 billion through 2035. Table 3.3 presents the results of this analysis. The second sensitivity analysis assumed that the inflation rate for each year was 3.2 percent and also excluded the impact of limits on rent increases. As shown in table 3.4, this analysis projects shortfalls of $24 billion, recaptures of $9.7 billion, and net funding needs of $14.2 billion through 2035. HUD officials view the estimates shown in table 3.2 and the sensitivity analyses shown in tables 3.3 and 3.4 as a range of potential amendment funding needs. However, it is important to note that both sensitivity analyses exclude the effect of the legislatively mandated limits on future rent increases—that is, they assume that the limits are repealed and that the inflation estimates apply to all Section 8 contracts. At this time, we are not aware of any major legislative efforts to repeal the limits on rent increases. 
As shown in table 3.2, the legislatively mandated limits substantially lower the estimate of long-term amendment needs. HUD’s policies and procedures for identifying and deobligating funds that are no longer needed do not ensure that all Section 8 project-based balances are evaluated each year and that balances that are no longer needed for specific Section 8 project-based contracts are identified and deobligated in a timely manner. The current review process does not provide HUD with adequate assurance that the reviews are being conducted properly and that identified funds are being deobligated. Assurance is inadequate because HUD does not adequately oversee the review process conducted by program offices and because the program officials who are responsible for reviewing the balances are not required to certify that the unexpended balances associated with the Section 8 project-based contracts continue to be needed. As we discussed in chapter 2, we identified about $517 million in funding that was still reflected in HUD’s accounting system as of September 30, 1997, and that was no longer needed because the contracts expired, were terminated, or were never executed. If such funding had been identified by HUD, it could have been used to help offset the Department’s need for Section 8 amendment funding. In addition, the Department has requested more funding for Section 8 contract amendments than needed because it does not have effective processes in place to take unexpended balances into account when determining funding needs as part of its budget process. While HUD uses a model to perform such analysis, we found a number of errors in the analysis it used for formulating its fiscal year 1999 budget request. These errors included active contracts being excluded, all available funding not being fully reflected, and weaknesses in the methodology used to estimate expenditure rates. 
These errors stemmed from the Department’s not ensuring that the data used in the model were complete, accurate, and current and that sufficient quality checks were performed either by HUD or contractor staff to ensure that the analyses were reliable. While HUD and contractor staff took actions to correct most of the problems that we identified during our review, it is important that HUD have effective controls in place to ensure that these problems do not recur in future analyses. In addition, HUD has yet to correct problems with the methodology used by the BFS model to estimate future expenditure rates. To improve the Department’s oversight of Section 8 project-based balances, we recommend that the Secretary of Housing and Urban Development require the Chief Financial Officer to revise the procedures used in the Department’s annual review of unexpended balances to ensure that reviews are completed and that balances that are not needed are identified and deobligated in a timely manner. This process should include a requirement that those officials responsible for reviewing the balances actually certify the continued need for the unexpended balances associated with Section 8 project-based contracts and that the Office of the Chief Financial Officer provide sufficient oversight to determine the adequacy of the reviews conducted. We also recommend that the Secretary require the Chief Financial Officer and the Office of Housing to ensure that HUD’s future funding requests for the Section 8 project-based program fully take into account the availability of unexpended balances that may be used to offset funding needs. To accomplish this goal, the Department would need to establish controls to ensure that the data used in any supporting analyses are complete, accurate, and current; that available funding is fully reflected; and that sufficient checks are performed to ensure that the analyses produced are reliable. 
In addition, the Department should improve the methodology used to estimate future expenditure rates for Section 8 project-based contracts. We provided a draft copy of this report to HUD for its review and comment. In commenting on the draft, HUD agreed with the data presented in the report and with the recommendations. However, HUD disagreed with the way in which we presented the results of three analyses of Section 8 project-based funding needs. HUD believed that the report’s presentation would have been strengthened if instead of emphasizing one of the analyses, we presented the results of the three analyses dated April 1998 in a consolidated table and did more to explain the risks associated with each analysis. In this regard, HUD stated that our report highlights a HUD-prepared analysis that uses the low inflation assumptions for the 1999 budget and essentially “freezes” much of the expenditures on Section 8 contracts at current rates for long periods. HUD stated that a more realistic assumption is that rents and incomes will increase in the future (notwithstanding current law limiting certain rent increases) and these increases will result in a growing drain on the obligated balances on those contracts. HUD also emphasized that it believes that estimates of Section 8 project-based amendment needs are very sensitive to inflation rates and that estimates of amendment shortfalls and recaptures should be expressed as a range, based on alternative inflation assumptions. We agree with HUD that estimates of long-term amendment needs are sensitive to assumptions regarding inflation. In fact, the report clearly states that HUD’s long-term amendment needs could increase substantially if inflation rates prove to be higher than currently estimated. However, we believe that our presentation of the three analyses of Section 8 project-based funding needs is appropriate. 
The report gives more emphasis to one analysis because that analysis is based on legislatively mandated limits on rent increases for certain properties and OMB’s economic assumptions for the fiscal year 1999 budget. In contrast, the other two analyses of long-term amendment needs that HUD prepared do not reflect the legislatively mandated limits on rent increases and thus tend to overstate the increases in Section 8 assistance that many properties would receive under current law. Accordingly, we do not agree with HUD’s assertion that these analyses reflect more realistic assumptions concerning Section 8 project-based amendment needs. Our report does recognize, however, that HUD views the three analyses as a potential range of amendment funding needs. HUD also stated that the report leads to a conclusion that remaining balances can be diverted out of the project-based inventory with no long-range consequences. Instead, HUD states that each dollar taken from the inventory will have to be replaced with budget authority at some point in the future. Our report does not conclude that remaining balances can be diverted from the program. However, we do not agree that HUD is in a position to conclude that each dollar taken from Section 8 project-based amendment funding would necessarily have to be replaced at some point in the future. Before a reliable conclusion on the long-term funding needs of the Section 8 project-based program can be made, HUD needs to implement our recommendation to improve the methodology used to estimate future expenditure rates for the Section 8 project-based contracts because the methodology currently used may substantially overstate expenditure rates. In addition, HUD needs to establish controls to ensure that the data used in its analyses are complete, current, and accurate. Once these actions are completed, we believe the Department will be in a better position to reach reliable conclusions concerning its short- and long-term funding needs. 
(The complete text of HUD’s comments is provided in app. II.) | Pursuant to a legislative requirement, GAO reviewed the Department of Housing and Urban Development's (HUD) systems for budgeting and accounting for Section 8 rental assistance funds, focusing on whether the systems ensure that unexpended Section 8 project-based funds do not reach unreasonable levels and that obligations are spent in a timely manner. GAO noted that: (1) as of September 30, 1997, HUD's Section 8 project-based rental assistance program had about $59.1 billion in unexpended balances in three major categories: (a) undisbursed obligations--funds obligated to Section 8 contracts but not yet disbursed; (b) unobligated but reserved funds--balances reserved for specific rental assistance contracts but not yet obligated; and (c) unobligated and unreserved funds--funds that are neither obligated nor reserved for any specific contracts; (2) most of the unexpended balances--$55.4 billion--represent undisbursed obligations associated with approximately 31,000 rental assistance contracts; (3) in addition, at the end of fiscal year (FY) 1997, HUD had about $3 billion in unobligated funds that were reserved for but not yet obligated to specific contracts and about $0.7 billion in unobligated and unreserved funds that were carried over for use in 1998; (4) while most of the unexpended balances are needed for HUD to fulfill its commitments to the Section 8 contracts for which the funds have been obligated or reserved, GAO found at least $517 million in unexpended balances that are no longer needed for such purposes and thus could be recaptured by HUD and used to help fund other Section 8 contracts; (5) HUD's procedures for identifying and deobligating funds that are no longer needed to meet its Section 8 contractual obligations are not effective; (6) specifically, the procedures do not ensure that all Section 8 project-based balances are evaluated each year and that any excess balances are identified and 
deobligated in a timely manner; (7) while HUD's program offices are responsible for reviewing unexpended balances each year to determine whether they are still needed or can be deobligated, GAO found that some offices did not perform annual reviews in 1997 and that some funds identified as being available for deobligation in earlier reviews were not deobligated; (8) in addition, GAO found errors in the process HUD used to identify and take into account unexpended balances when formulating its budget request for FY 1999; (9) as a result, HUD's FY 1999 request for $1.3 billion in amendment funding to cover shortfalls in existing Section 8 contracts was significantly overstated; and (10) more recent analyses that correct most of these errors and update the economic assumptions used indicate that HUD already has sufficient funding available to meet its amendment needs for FY 1999. |
Over the last 50 years, the composition of the American household has changed dramatically. During this period, the proportion of unmarried individuals in the population increased steadily as couples chose to marry at later ages and cohabit prior to marriage—and as divorce rates rose (see fig. 1). From 1960 to 2010, the percentage of single-parent families also rose. In fact, from 1970 through 2012, the estimated proportion of single-parent families more than doubled, increasing from 13 to 32 percent of all families. The decline in marriage and rise in single parenthood over this period were more pronounced among low-income and less-educated individuals and among some minorities. For example, from 1960 to 2010, the proportion of married, 45- to 54-year-old men in the highest income quintile declined modestly while the proportion of married men in the lowest income quintile declined from an estimated 71 to 27 percent (see fig. 2). Similarly, the percentage of single parents among 45- to 54-year-old men and women in the highest income quintile remained flat, while there was a steep rise in the percentage of single parents in the lowest income quintile, according to our estimates. In terms of education, among individuals age 18 years and older, the rise in single parenthood was steeper for those without a high school diploma in comparison to their counterparts with 4 or more years of college. Over the same period, the labor force participation rate of married women increased (see fig. 3). In 1960, labor force participation rates among married men, single men, married women, and single women ranged from 89 percent for married men to 32 percent for married women, according to our estimates. Since then, the differences in labor force participation rates for these four groups have narrowed, with labor force participation among married and single women within 3 percentage points in 2010. 
As a result of married women’s increasing labor force participation, the proportion of married couples with two earners has risen—along with the wives’ contributions to household income. According to the Bureau of Labor Statistics, from 1970 through 2010, women’s median contribution to household income rose from 27 to 38 percent. Further, from 1987 through 2010, the percentage of households in which the wives’ earnings exceeded their husbands’ rose from 24 to 38 percent. As marriage and workforce patterns have changed, the U.S. retirement system has undergone its own transition. Specifically, over the last two decades employers have increasingly shifted from offering their employees traditional defined benefit (DB) plans to defined contribution (DC) plans, and roughly half of U.S. workers do not participate in any employer-sponsored pension plan. DB plans typically offer retirement benefits to a retiree in the form of an annuity that provides a monthly payment for life, including a lifetime annuity to the surviving spouse, unless the couple chooses otherwise. In contrast, under a DC plan, workers and employers may make contributions to individual accounts. Depending on the options available under the plan, at retirement DC participants may take a lump sum, roll their plan savings into an IRA, leave some or all of their money in the plan, or purchase an annuity offered through the plan. Further, many of the remaining DB plans now offer lump sums as one of the form-of-payment options under the plan. Participants who elect a lump sum forgo a lifetime annuity. Some DB plan sponsors have also begun offering special, one-time lump sum elections to participants who are already retired and receiving monthly pension benefits. Taken together, the trends in marriage and workforce participation have implications for the receipt of Social Security retirement benefits, especially for women. 
Specifically, the proportion of women who are not eligible to receive Social Security spousal benefits because they were either never married or divorced after less than 10 years of marriage—the length of time required for eligibility for Social Security divorced spouse benefits—has increased over the last two decades. The decline in the proportion of women with marriages that qualify them for spousal benefits—coupled with the rise in the percentage of women receiving benefits based on their own work record—has resulted in fewer women today receiving Social Security spousal and survivor benefits than in the past. For blacks, the rise in ineligibility for spousal or widow benefits has been more dramatic. In general, the trend away from women receiving spousal benefits is projected to continue, with the largest shift occurring among black women, according to SSA analyses. For many elderly, this shift is likely to be positive, reflecting their higher earnings and greater capacity to save for retirement. However, elderly women with low levels of lifetime earnings, who have no spouse or do not receive a spousal benefit—a group that is disproportionately represented by black women—are expected to have correspondingly lower Social Security retirement benefits relative to those with higher incomes. These trends have also affected household savings behavior and the financial risks households face in retirement. Households with DC plans have greater responsibility to save and manage their retirement savings so that they have sufficient income throughout retirement. However, our analysis of SCF data shows that many households approaching retirement still have no or very limited retirement savings (see fig. 4). Married households—in which many women now make significant contributions to retirement savings—are more likely to have retirement savings, but their median savings are low. The majority of single-headed households have no retirement savings. 
Single parents, in particular, tend to have fewer resources available to save for retirement during their working years and are less likely to participate in DC plans. In addition to challenges with accumulating sufficient savings for retirement, individuals may also find it difficult to determine how to invest their savings during their working years and spend down their savings when they reach retirement. During their working years, DC plan participants typically must determine the size of their contributions and choose among various investment options offered by the plan. At retirement or separation from their employer, plan participants must decide what to do with their plan savings. Participants in DB plans also face similar decisions if the plan offers a lump sum option, including whether to take the annuity or lump sum, and if a lump sum is elected, how to manage those benefits. GAO has found that these decisions are difficult to navigate because the appropriate investment strategy depends on many different aspects of an individual’s circumstances, such as anticipated expenses, income level, health, and each household’s tolerance for risk. In addition, individuals with DC plans face challenges comparing their distribution options, in part due to a host of complicated factors that must be considered in choosing among such options. They may also lack objective information to inform these complicated decisions. In fact, while financial experts GAO has interviewed typically recommended that retirees convert a portion of their savings into an income annuity, or opt for the annuity provided by an employer-sponsored DB pension instead of a lump sum withdrawal, we found that most retirees pass up opportunities for additional lifetime retirement income. These choices coupled with increasing life expectancy may result in more retirees outliving their assets. 
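The annuity-versus-lump-sum comparison discussed above can be illustrated with a simplified present-value calculation. This is a sketch under strong assumptions (a constant annual survival probability rather than a full mortality table), and every parameter value below is hypothetical, not drawn from GAO's analysis.

```python
# Simplified sketch of comparing a life annuity to a lump sum. Assumes
# a constant per-year survival probability; real comparisons use full
# mortality tables. All parameter values here are hypothetical.

def annuity_present_value(annual_payment, discount_rate, survival_prob,
                          max_years=40):
    """Expected present value of a life annuity that pays
    annual_payment at the end of each year the retiree survives."""
    return sum(
        annual_payment * survival_prob**t / (1.0 + discount_rate) ** t
        for t in range(1, max_years + 1)
    )

# Sanity check: with no discounting and certain survival, 10 years of
# $1 payments are worth exactly $10.
sanity = annuity_present_value(1.0, 0.0, 1.0, max_years=10)  # 10.0

# Hypothetical decision: a $20,000-a-year pension annuity versus a
# $200,000 lump sum, at a 4 percent discount rate and a 96 percent
# annual survival probability.
annuity_value = annuity_present_value(20_000, 0.04, 0.96)
prefer_annuity = annuity_value > 200_000  # True for these assumptions
```

Even this toy version shows why the choice is hard to navigate: the answer can flip with plausible changes in the discount rate, expected longevity, or the lump sum offered.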
Lastly, the transition from DB to DC plans has increased the vulnerability of some spouses due to differences in the federal requirements for spousal protections between these two types of retirement plans. For DB plans, spousal consent is required if the participant wishes to waive the survivor annuity for his or her spouse. In contrast, for DC plans, spousal consent is not required for the participant to withdraw funds from the account—either before or at retirement—and DC plans do not generally offer annuities at all, including those with a survivor benefit. While this may not be a concern among many couples, it is a concern for some, especially those who depend on their spouse for income. While the trends described above have the potential to affect many Americans, it is likely that they will impact the nation’s most vulnerable more severely. Despite the role Social Security has played in reducing poverty among seniors, poverty remains high among certain groups (see fig. 5). These groups include older women, especially those who are unmarried or over age 80, and nonwhites. Moreover, individuals nearing retirement who experience economic shocks, such as losing a job or spouse, are also vulnerable to economic insecurity. During the 2007-2009 recession, unemployment rates doubled for workers aged 55 and older. When older workers lose a job they are less likely to find other employment. In fact, the median duration of unemployment for older workers rose sharply from 2007 to 2010, more than tripling for workers 65 and older and increasing to 31 weeks from 11 weeks for workers age 55 to 64. 
Prior GAO work has shown that long-term unemployment can reduce an older worker’s future retirement income in numerous ways, including reducing the number of years the worker can accumulate savings, prompting workers to claim Social Security retirement benefits before they reach their full retirement age, and leading workers to draw down their retirement assets. Similarly, our past work has shown that divorce and widowhood in the years leading up to and during retirement have detrimental effects on an individual’s assets and income, and that these effects were more pronounced for women. As a result of the trends described above, these vulnerable populations may face increasing income insecurity in old age and be in greater need of assistance. For example, during the 2007-2009 recession, the demand for food assistance rose sharply among older adults. Specifically, from fiscal year 2006 to 2009, the average number of households with a member age 60 or older participating in the Supplemental Nutrition Assistance Program rose 25 percent, while the population in that age group rose by 9 percent. (Pub. L. No. 89-73, 79 Stat. 218, codified as amended at 42 U.S.C. §§ 3001-3058ff.) In past work, we noted that the national funding formula used to allocate funding to states does not include factors to target older adults in greatest need, such as low-income older adults, although states are required to consider such factors when developing the intrastate formulas they use to allocate funds among their local agencies. We found that certain formula changes to better target states with older adults in greatest need would have disparate effects on states, depending on their characteristics. We have also found that a lack of federal guidance and data makes it difficult to know whether those with the greatest need are being served. Our findings underscore how retirement security can be affected by changing circumstances in the American household and the economy. 
As the composition of the American family continues to evolve and as our retirement system transitions to one that is primarily account-based, vulnerable populations in this country will face an increasing risk of not saving sufficiently and of outliving their assets. For those with little or no pension or other financial assets, ensuring income in retirement may involve difficult choices, including how long to wait before claiming Social Security benefits, how long to work, and how to adjust consumption and lifestyle to lower levels of income in retirement. Poor or imprudent decisions may mean the difference between a secure retirement and poverty. Planning for these needs will be crucial if we wish to avoid turning back the clock on the gains Social Security has achieved over the past 50 years in reducing poverty among seniors. Chairman Nelson, Ranking Member Collins, and Members of the Committee, this completes my statement. I would be happy to answer any questions you might have. In addition to the above, Charlie Jeszeck, Director; Michael Collins, Assistant Director; Jennifer Cook, Erin M. Godtland, Rhiannon Patterson, and Ryan Siegel made significant contributions to this testimony and the related report. In addition, James Bennett, David Chrisinger, Sarah Cornetto, Courtney LaFountain, Kathy Leslie, Amy Moran Lowe, Sheila McCoy, Susan Offutt, Marylynn Sergent, Frank Todisco, and Shana Wallace made valuable contributions. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
| Over the past 50 years, poverty rates among older Americans have declined dramatically, in large part due to the availability and expansion of Social Security benefits. Social Security is now the most common type of income for retirees. Social Security retirement benefits are available not only to those who qualify based on their own work history, but also to spouses, widows/widowers, and in some cases former spouses of workers who qualify. However, in recent decades, marriage has become less common, women have entered the workforce in greater numbers, and many employers have shifted from offering DB to DC plans. In light of these trends, GAO is reporting on: (1) the trends in marriage and labor force participation in the American household and in the U.S. retirement system, (2) the effect of those trends on the receipt of retirement benefits and savings, and (3) the implications for vulnerable elderly populations and current challenges in assisting them. This statement draws from previously issued GAO work and a recently issued report, which was based on an analysis of nationally representative survey data including the Survey of Consumer Finances, the Survey of Income and Program Participation, and the Current Population Survey (CPS); and a broad literature review. GAO also interviewed agency officials and a range of experts in the area of retirement security. GAO is making no recommendations. The decline in marriage, rise in women's labor force participation, and transition away from defined benefit (DB) plans to defined contribution (DC) plans have resulted in changes in the types of retirement benefits households receive and increased vulnerabilities for some. Since the 1960s, the percentage of unmarried and single-parent families has risen dramatically, especially among low-income, less-educated individuals, and some minorities. At the same time, the percentage of married women entering the labor force has increased. 
The decline in marriage and rise in women's labor force participation have affected the types of Social Security benefits households receive, with fewer women receiving spousal benefits today than in the past. In addition, the shift away from DB to DC plans has increased financial vulnerabilities for some due to the fact that DC plans typically offer fewer spousal protections. DC plans also place greater responsibility on households to make decisions and manage their pension and financial assets so they have income throughout retirement. As shown in the figure below, despite Social Security's role in reducing poverty among seniors, poverty remains high among certain groups of seniors, such as minorities and unmarried women. These vulnerable populations are more likely to be adversely affected by these trends and may need assistance in old age. Note: The category “White” refers to people who are white only, non-Hispanic. “Black” refers to people who are black only, non-Hispanic. “Asian” refers to people who are either Asian only, Pacific Islander only or Asian and Pacific Islander, and are non-Hispanic. Hispanic people may be any race. Percentage estimates for poverty rates have margins of error ranging from 0.6 to 8.6 percentage points. See the hearing statement for more information on confidence levels and the data. |
U.S. relations with Micronesia and the Marshall Islands began during World War II, when the United States ended Japanese occupation of the region. The United States administered the region under a United Nations trusteeship beginning in 1947. The four states of the FSM voted in a 1978 referendum to become an independent nation, while the Marshall Islands established its constitutional government and declared itself a republic in 1979. Both locations remained subject to the authority of the United States under the trusteeship agreement until entry into force of the compact in 1986. The FSM is a loose federation of four states, and has a population of approximately 108,500, scattered over many small islands and atolls. The FSM states maintain considerable power, relative to the national government, to allocate U.S. assistance and implement budgetary policies. Chuuk, the largest state, has 50 percent of the FSM’s population, followed by Pohnpei (32 percent), Yap (11 percent), and Kosrae (7 percent). The RMI has a constitutional government, and its 29 constituent atolls have local government authority. About two-thirds of its approximately 56,000 residents are in Majuro Atoll, the nation’s capital, and Kwajalein Atoll. The two countries are located just north of the equator in the Pacific Ocean. (See fig. 1.) The United States, the FSM, and the RMI entered into the original Compact of Free Association in 1986 after lengthy negotiations. The compact provided a framework for the United States and the two countries to work toward achieving the following three main goals: (1) secure self-government for the FSM and the RMI, (2) ensure certain national security rights for all of the parties, and (3) assist the FSM and the RMI in their efforts to advance economic development and self-sufficiency. The first and second goals were met; the FSM and the RMI are independent nations, and the three countries established key defense rights, including securing U.S. 
access to military facilities on Kwajalein Atoll in the RMI through 2016. The compact’s third goal was to be accomplished primarily through U.S. direct financial assistance to the FSM and the RMI. For the 15-year period covering 1987 to 2001, funding was provided at levels that decreased every 5 years, with an extension for 2002 and 2003 during negotiations to renew expiring compact provisions. For 1987 through 2003, the FSM and the RMI are estimated to have received about $2.1 billion in compact financial assistance. As we previously reported, economic self-sufficiency was not achieved under the first compact. Under the original compact, the FSM and the RMI used funds for general government operations; capital projects, such as building roads and investing in businesses; debt payments; and targeted sectors, such as energy and communications. The FSM concentrated much of its spending on government activities, while the RMI emphasized capital spending. Compact funds to the FSM were divided among the FSM’s national government and four states, according to a distribution agreement first agreed to by the five governments in 1984. In 2000, we reported that compact funds spent on general government operations maintained high government wages and a high level of public sector employment, discouraging private sector growth, and that compact funds used to create and improve infrastructure likewise did not contribute to significant economic growth. Furthermore, many of the projects undertaken by the FSM and the RMI experienced problems because of poor planning and management, inadequate construction and maintenance, or misuse of funds. While the compact set out specific obligations for reporting and consultations regarding the use of compact funds, the FSM, RMI, and U.S. governments provided little accountability over compact expenditures and did not ensure that funds were spent effectively or efficiently. 
The “full faith and credit” provision made withholding funds impracticable. In addition, under the original compact, both nations also benefited from numerous U.S. federal programs, while citizens of both nations exercised their right under the compact to live and work in the United States as “nonimmigrants” and to stay for long periods of time. In 2003, the United States approved separate amended compacts with the FSM and the RMI that went into effect on June 25, 2004, and May 1, 2004, respectively. The amended compacts provide for direct financial assistance to the FSM and the RMI from 2004 to 2023, decreasing in most years, with the amount of the decrements to be deposited in the trust funds for the two nations established under the amended compacts (see table 1). Moreover, the amended compacts require the FSM and the RMI to make one-time contributions of $30 million each to the trust funds, which both countries have done. In addition, the RMI amended compact includes an agreement that allows the U.S. military access to certain sites in Kwajalein Atoll until 2086 and provides $15 million annually starting in 2004, rising to $18 million in 2014, to compensate for any impacts of the U.S. military on the atoll. The amended compacts and fiscal procedures agreements require that grant funding be targeted to support the countries in six defined sectors, with the following general objectives:
Education: Advance the quality of the basic education system.
Health: Support and improve the delivery of preventative, curative, and environmental care.
Environment: Increase environmental protection and engage in environmental infrastructure planning.
Public sector capacity building: Build effective, accountable, and transparent national, state (in the FSM), and local government and other public sector institutions and systems.
Private sector development: Attract foreign investment and increase indigenous business activity. 
Infrastructure: Provide adequate public infrastructure, prioritizing primary and secondary education capital projects and projects that directly affect health and safety, with 5 percent dedicated to maintenance.
The RMI must also target grant funding to Ebeye and other Marshallese communities within Kwajalein Atoll: $3.1 million annually for 2004 through 2013 and $5.1 million annually for 2014 through 2023. In addition, $1.9 million is provided from annual grant funds to address special needs within Kwajalein Atoll, with emphasis on the Kwajalein landowners. Other funds are provided to the RMI government related to U.S. use of the atoll for military purposes. (See app. III for Kwajalein-related compact funding provisions.) Under the amended compacts and according to the fiscal procedures agreements, annual assistance for the six sectors in the FSM and the RMI is to be made available in accordance with an implementation framework with several components. Prior to the annual awarding of compact funds, the countries must submit development plans that identify goals and performance objectives for each sector. In addition, the countries must submit a budget for each sector that aligns with its development plan. The joint management and accountability committees for each country are to approve annual sector grants and, subsequent to the awards, evaluate sector management and progress. Finally, for each sector, the FSM and the RMI are to prepare quarterly financial and performance reports to serve as a mechanism for tracking progress against goals and objectives and monitoring performance and accountability. Figure 2 shows the amended compact implementation framework. Both countries are to develop multiyear development plans that are strategic in nature and continuously reviewed and updated through the annual budget process and that address the assistance for the defined sectors. 
The plans are to identify how the countries will use compact funds to promote broad compact development goals such as economic advancement and budgetary self-reliance. The plans are also to identify goals and objectives for each sector. In addition, through the annual budget process, the FSM and the RMI are to prepare annual sector grant budget proposals that are based on the development plans, including performance goals and indicators. U.S. officials are to evaluate the sector budget proposals each year to ensure that they are consistent with compact requirements and have the appropriate objectives and indicators and that the expenditures are adequate to achieve their stated purposes. Budget consultations between the governments are to take place regarding the sector proposals.
Joint Management and Accountability Committees
JEMCO and JEMFAC—jointly established by the United States and, respectively, the FSM and the RMI—are to strengthen management and accountability and promote the effective use of compact funding. Each five-member committee comprises three representatives from the United States and two representatives from the country. JEMCO’s and JEMFAC’s designated roles and responsibilities include reviewing the budgeting and development plans from each of the governments; approving grant allocations and performance objectives; attaching terms and conditions to any or all annual grant awards to improve program performance and fiscal accountability; evaluating progress, management problems, and any shifts in priorities in each sector; and reviewing audits called for in the compacts. 
The FSM, the RMI, and the United States are required to provide the necessary staff support to their representatives on the committee to enable the parties “to monitor closely the use of assistance under the Compacts.”

FSM and RMI Grant Management

The FSM and the RMI are responsible for grant management, including managing and monitoring the day-to-day operations and financial administration of each sector. Program monitoring. The FSM and RMI governments are to manage the sector and supplemental education grants and monitor day-to-day operations to ensure compliance with grant terms and conditions. Monitoring also is required to ensure the achievement of performance goals. The governments are to report quarterly to the United States, using a uniform format that includes a comparison of actual accomplishments to the objectives and indicators established for the period; any positive events that accelerate performance outcomes; any problems or issues encountered, reasons, and impact on grant activities and performance measures; and additional pertinent information, including, when appropriate, an analysis and explanation of cost overruns. In addition, the FSM and the RMI must annually report to the U.S. President on the use of U.S. grant assistance and other U.S. assistance provided during the prior fiscal year, and must also report on their progress in meeting program and economic goals. Financial administration. The FSM and the RMI must adhere to specific fiscal control and accounting procedures. The fiscal procedures agreements state that the countries’ financial management systems must meet several standards addressing financial reporting, accounting records, internal and budget controls, allowable cost, cash management, and source documentation. The systems must also specify applicable procedures regarding real property, equipment, and procurement.
Quarterly financial reports are to be provided to the United States and used to monitor the (1) general budget and fiscal performance of the FSM and the RMI and (2) disbursement or outlay information for each sector grant. In addition, the FSM and the RMI are required to submit annual audit reports, within the meaning of the Single Audit Act as amended. According to the act, single audit reports are due within 9 months after the end of the audited period. Single audits are focused on recipients’ internal controls over financial reporting and compliance with laws and regulations governing U.S. federal awardees. Single audits also provide key information about the federal grantee’s financial management and reporting. A single audit report includes the auditor’s opinion (or disclaimer of opinion, as appropriate) regarding whether the financial statements are presented fairly in all material respects in conformity with generally accepted accounting principles, and findings about the internal controls related to financial statements; the entity’s audited financial reporting; the schedule of expenditures of federal awards and the auditor’s report on the schedule; the auditor’s opinion (or disclaimer of opinion) regarding whether the auditee complied with the laws, regulations, and provisions of contracts and grant agreements (such as the compact), which could have a direct and material effect on each major federal program, as well as findings on internal controls related to federal programs; a summary of findings and questioned costs for the federal program; corrective action plans for findings identified for the current year as well as unresolved findings from prior fiscal years. The United States is responsible under the fiscal procedures agreements for using the performance and financial reports to monitor, respectively, the countries’ sector grant performance and their budget and fiscal performance. Also, U.S. 
officials are responsible for monitoring compliance with grant terms and conditions, including any special grant conditions. If problems are found in areas such as the monitoring of sector grants or a lack of compliance with grant terms, the United States may impose special conditions or restrictions, including requiring the acquisition of technical or management assistance, requiring additional reporting and monitoring, or withholding funds. Under the implementing legislation, the U.S. President is required to report annually to Congress on the use and effectiveness of U.S. assistance. The President’s report also is to include an assessment of U.S. program and technical assistance provided to the countries and an evaluation of their economic conditions. According to federal policy implementing the Single Audit Act, U.S. agencies may take actions regarding late audits to ensure that award recipients address audit findings contained in single audit reports. According to the grants management common rule, awarding agencies may issue a high-risk designation to grant recipients if single audits reveal substantial and pervasive problems. In addition to establishing the joint management and accountability committees, each of the three countries has designated units that are responsible for compact administration. United States. OIA has responsibility for U.S. management and oversight of the FSM and RMI sector and supplemental education grants. 
OIA’s Honolulu field office has four professional staff—specialists in health, education, infrastructure, and financial management—who perform various activities, such as analyzing FSM and RMI budgets and required reports; reviewing expenditures and performance with FSM and RMI government officials and conducting site visits; providing briefings and advice to OIA, HHS, and State officials regarding progress and problems; providing support for JEMCO and JEMFAC meetings; and monitoring the countries’ compliance with grant terms and withholding funds from the countries for noncompliance with requirements such as those expressed in the fiscal procedures agreements or in grant conditions (such remedies did not exist in the previous compact). FSM. In 2005, the FSM established its Compact Management Board and OCM. The board consists of seven members: two FSM national government appointees, a member appointed by each state, and the head of OCM. The board is responsible for actions such as formulating guidelines for FSM JEMCO members and providing oversight of compact implementation, including conducting investigations to ensure compliance with all terms of the compact. OCM, which has five staff members, is principally responsible for daily communications with JEMCO and the United States regarding JEMCO and compact matters. OCM is expected to undertake various actions, such as visiting the FSM states, to monitor compliance with compact terms. RMI. The RMI government identified the Office of the Chief Secretary as the official point of contact for all communication and correspondence with the U.S. government concerning compact sector grant assistance. Among the Chief Secretary’s responsibilities are oversight, management, and monitoring of sector grants and activities, as well as coordination.
Its role is supported by the Economic Policy, Planning, and Statistics Office, which works with the ministries receiving grants to prepare the annual budget proposals; quarterly reports, including developing performance indicators; and annual monitoring and evaluation reports. The ministries conduct day-to-day oversight. In addition to receiving compact sector grants, the FSM and the RMI are eligible for a Supplemental Education Grant (SEG). The amended compacts’ implementing legislation authorized appropriations beginning in 2005 to the Secretary of Education to supplement the education grants under the amended compacts. The SEG is awarded in place of grant assistance formerly awarded to the countries under several U.S. education, health, and labor programs. Under the fiscal procedures agreements, SEG funds are to be used to support “direct educational services at the local school level focused on school readiness, early childhood education, primary and secondary education, vocational training, adult and family literacy, and the smooth transition of students from high school to postsecondary educational pursuits or rewarding career endeavors.” Funding for the SEG is appropriated to a Department of Education account and transferred to an Interior account for disbursement, with Interior responsible for ensuring that the use, administration, and monitoring of SEG funds are in accordance with a memorandum of agreement among the Departments of Education, HHS, Labor, and the Interior as well as with the fiscal procedures agreements. The U.S. appointees to JEMCO and JEMFAC are required by the compacts’ implementing legislation to “consult with the Secretary of Education regarding the objectives, use, and monitoring of United States financial, program, and technical assistance made available for educational purposes.” JEMCO and JEMFAC are responsible for approving the SEG grants annually. 
JEMCO and JEMFAC approved allocations of compact grants primarily to the infrastructure, education, and health sectors. The FSM and the RMI also both received a new SEG, meant to support the goals and objectives in the education sector development plans. However, the countries’ use of compact funds has been limited by several factors, including delays in implementing infrastructure projects in the FSM and ongoing land use disputes with RMI landowners on both Majuro and Kwajalein. In addition, neither country has planned for the scheduled annual decrements in compact funding, and the FSM has not undertaken local needs assessments to target funds. The three largest FSM sectors—education, infrastructure, and health—accounted for almost 85 percent of the compact sector grant allocations in 2006. Of this total, education funding represented 33 percent; infrastructure represented 31 percent, up from 23 percent in 2004; and health represented 21 percent. The other three sectors—public sector capacity building, private sector development, and the environment—together accounted for less than 20 percent of the FSM’s compact funding in 2006. Figure 3 shows the FSM sector grant allocations for 2004 through 2006. (See app. IV for a breakout of compact funding, by FSM state.) In general, the funds allocated for each sector were used as follows: Education. JEMCO approved allocations for the education sector amounting to $79 million, or 34 percent of compact funds in 2004 through 2006. U.S. assistance is the main source of revenue for the FSM education system. At the FSM national government level, compact funding supports, among other things, the College of Micronesia, the development of national education standards, the national standardized testing program, and the college admissions test. At the state level, the funding is principally targeted to primary and secondary education. 
Compact funding levels vary among the FSM states, with Chuuk receiving the least funding per student (approximately $500) and Yap receiving the most (approximately $1,300). The difference in the funding levels for these two states is directly reflected in student-to-teacher ratios, with Chuuk having a higher student-to-teacher ratio (19:1) than Yap (8:1). Overall, we found the condition of school facilities and the adequacy of their supplies and equipment to be poorer in Chuuk than in the other FSM states. The FSM is making efforts to improve teacher qualifications through a grant from Education. Despite some progress, FSM educational outcomes remain poor. For example, according to an official from the FSM’s Department of Health, Education, and Social Affairs, graduates of FSM high schools often are not qualified to take college-level courses. Health. JEMCO approved allocations amounting to $49 million, or 21 percent of compact funds in 2004 through 2006, for health care activities such as medical and nursing services, dispensary services, and public health services. According to health officials in Chuuk and Pohnpei, funding under the amended compact provided for increased budgets for pharmaceuticals and supplies. However, a 2005 FSM Department of Health, Education, and Social Affairs assessment of primary care reported that most facilities lacked an appropriate range and quantity of medicine and supplies in each of the four FSM states. We found that each of the states’ hospitals and primary care facilities lacked some or all of the following: maintenance, adequately trained staff, functional equipment, and medical and pharmaceutical supplies. In addition, health sector allocations varied considerably across the four FSM state governments. For example, in 2006 Yap received more than twice as much health sector funding per person as Chuuk. 
During our site visits, we observed that Chuuk’s hospital and primary care facilities were in the poorest condition of the four states’ facilities. Infrastructure. JEMCO approved allocations amounting to $58.7 million, or 25 percent of compact funds in 2004 through 2006, to infrastructure. However, the FSM’s allocation of funds for 2004 and 2005 did not meet the recommendation in the compact’s implementing legislation, which stated that it was the sense of Congress that not less than 30 percent of annual compact sector grant assistance should be invested in infrastructure. In addition, the FSM has not completed any infrastructure projects. As of November 2006, OIA had approved 14 of the FSM’s priority projects, including several schools, a wastewater treatment facility, power and water distribution systems, and road and airport improvements. However, construction on these projects had not begun. Furthermore, according to an OIA official, the FSM had not met a compact requirement to establish and fund an infrastructure maintenance fund. Public sector capacity building. JEMCO approved allocations for public sector capacity building amounting to $25.6 million, or 11 percent of compact funding in 2004 through 2006. About 12 percent of these funds supported the operations of the public auditors’ offices in three of the four states and the FSM national government. OIA found that this use of the funds met the grant’s purpose. However, according to OIA, most of the remaining funds were to be used to support basic government operations, rather than for the grant’s intended purpose of developing the internal expertise needed to build an effective, accountable, and transparent government. In 2004, JEMCO required that the FSM develop a plan to eliminate funding for such nonconforming purposes by 2009. 
The FSM submitted a plan to OIA that illustrates an annual reduction of such funding, but the plan does not detail how the nonconforming activities, such as those supporting public safety and the judiciary, will otherwise be funded. FSM officials told us that they plan to replace capacity-building funds in part with local monies. However, recent tax revenues have largely stagnated despite some improvements. Private sector development. JEMCO approved private sector allocations amounting to $10.2 million, or 5 percent of compact funding in 2004 through 2006. These funds supported more than 38 different offices throughout the FSM—including visitor bureaus, land management offices, and marine and agriculture departments—and economic development and foreign investment activities. Environment. JEMCO approved allocations for the environment amounting to $6.6 million, or 3 percent of compact funding in 2004 through 2006. These funds supported 21 offices throughout the four states and the FSM national government, including offices responsible for environmental protection, marine conservation, forestry, historic preservation, public works, and solid waste management. In addition to receiving compact sector funding, the FSM education sector also received $24 million in SEG funds in 2005 and 2006. However, SEG funding was “off cycle” in both years. As a result, according to Interior, the FSM did not receive its 2005 SEG funding until October 2005 and did not receive its 2006 SEG funding until September 2006, near the end of each fiscal year. In Chuuk and Pohnpei, SEG funding mainly supported early childhood education, while in Yap and Kosrae, the largest portion of SEG funding went to school improvement projects that provided supplemental instructional services, such as after-school tutoring and professional development programs. The SEG funding also supported vocational training, skills training, and staff development. 
In addition, the FSM national government received some SEG funding for monitoring, coordination, technical assistance, and research. The College of Micronesia received SEG funds for financial aid for students and for training students to be teachers through the teacher corps. The three largest RMI sectors—infrastructure, education, and health—accounted for 92 percent of the compact sector grant allocations in 2006. Infrastructure received approximately 40 percent of the funding between 2004 and 2006, while education received approximately 33 percent and health received approximately 20 percent. Funding was also allocated for Ebeye special needs; however, only a small portion had been expended as of August 2006. As in the FSM, public sector capacity building, private sector development, and the environment received the least compact funding, totaling less than 4 percent between 2004 and 2006. Figure 4 shows the sector grant allocations for the RMI for 2004 through 2006. (See app. IV for a breakout of compact funding, by RMI sector grants.) Education. JEMFAC approved allocations for the education sector amounting to $34.2 million, or 33 percent of compact funds in 2004 through 2006. These funds have primarily supported the operations of the primary and secondary schools, providing approximately $800 per student annually. In addition, compact education funding has supported the National Scholarship Board and the College of the Marshall Islands. Furthermore, some 2004 through 2006 funding was designated specifically for Ebeye’s schools. The quality of school facilities varies widely in the RMI. Although new classrooms were built with infrastructure funds, we found that many existing classrooms remained in poor condition. For example, in several Marshall Islands High School classrooms, ceilings had fallen in, making the classrooms too dangerous to use. The RMI is making efforts to improve teacher qualifications through a grant from Education.
However, although improving educational outcomes is a compact priority, standardized test scores show that RMI educational outcomes remain poor. Moreover, according to the College of the Marshall Islands, graduates of RMI high schools often are not qualified to take college-level courses. Health. JEMFAC approved allocations amounting to $20.6 million, or 20 percent of compact funds in 2004 through 2006, for health care activities such as medical and nursing services, dispensary services, and public health services. A large portion of this funding was allocated to hospital service improvements, such as hiring additional staff, providing specialized training for doctors and nurses, and purchasing equipment in both Majuro and Ebeye. Infrastructure. JEMFAC approved allocations amounting to $41.7 million, or 40 percent of compact funds in 2004 through 2006, for infrastructure—thereby meeting the RMI compact requirement to allocate at least 30 percent, and not more than 50 percent, of annual compact sector grant assistance funds to this sector. Furthermore, the RMI undertook and completed several infrastructure projects and established and funded an infrastructure maintenance fund, as required. From October 2003 to July 2006, 9 new construction projects and 17 maintenance projects in the RMI either were completed or were under way. All of the new projects were schools where there was a clear title or an existing long-term lease for the land. Environment, private sector development, and public sector capacity building. JEMFAC approved allocations of $2.6 million, or 3 percent of compact funds in 2004 through 2006, for these three sectors. This funding supported four entities: the Environmental Protection Authority; the Land Registration Authority; the Office of the Auditor General; and the Ministry of Resources and Development, which comprises the Small Business Development Council and the Marshall Islands Visitors’ Authority.
The RMI’s Chief Secretary indicated during our meeting in March that the RMI would no longer seek compact funds for activities in these sectors and would instead focus all compact resources on education, health, and infrastructure. Ebeye. JEMFAC approved allocations amounting to $5.8 million, or almost 6 percent of all compact funds in 2004 through 2006, for Ebeye special needs. However, because OIA obligated none of these funds for Ebeye during 2004 and 2005, JEMFAC approved the reallocation of the entire amount in 2006. According to OIA, approximately $500,000 has been used to pay for utility costs for certain Ebeye residents, while another $500,000 has been used to support utility operations. In addition to receiving compact sector funding, the RMI also received $12 million in SEG funding for 2005 and 2006. However, because SEG funding was off cycle in both 2005 and 2006, according to OIA, the RMI did not receive its 2005 SEG until August 2005 and its 2006 SEG until September 2006, near the end of each fiscal year. The SEG mainly supported early childhood education but also supported activities at other education levels, including the purchasing of textbooks and supplies; supporting foreign volunteer teachers and substitute teachers; and funding the National Vocational Training Institute, which is an alternative to the mainstream high schools. Political factors and land use issues have hindered compact implementation in the FSM and the RMI. Political factors. In the FSM, although $58.7 million had been allocated for infrastructure as of September 2006, no infrastructure projects were built because of, among other issues, a lack of internal agreement among the five FSM governments regarding project implementation and the governments’ inability to demonstrate how the funding will be managed in a unified and comprehensive method. 
For example, one FSM state governor told us that he had refused to meet with the FSM national government’s project management unit because he so strongly disagreed with the unit’s management process. Such disagreements led to delays in the national government’s implementation of its project management unit, and, according to OIA officials, significant challenges remain with respect to implementing the unit. In the RMI, the government and landowners on Kwajalein Atoll disagreed about the management of the entity designated to use the compact funds set aside for Ebeye special needs, with an emphasis on the needs of Kwajalein landowners. This entity, the Kwajalein Atoll Development Authority (KADA), had had problems accounting for and effectively and efficiently using funds; moreover, according to the RMI’s Chief Secretary, the RMI government developed a restructuring plan for the authority but the plan was not fully implemented. Moreover, Kwajalein landowners disputed the composition of the KADA board and its role in distributing these funds. As a result, as of September 2006, only approximately $1.0 million of the $5.8 million allocated for Ebeye special needs had been released for the community’s benefit. Land use issues. In the FSM, project implementation in Chuuk was hindered by the state’s inability to secure leases due to the lack of clear title, established fair market values, and local revenues to pay for land leases. Because of a lack of established fair market values, using compact funding for land lease or purchase under the original compact may have led to unreasonably high payment. A recent study of land valuation practices in Chuuk found sales of comparable land in Weno, the state’s capital, ranging from $5 per square meter to $1,704 per square meter, with the higher payment associated with lease agreements paid for by the compact funding. 
In the RMI, land disputes prevented construction of the Uliga Elementary School on Majuro, the country’s main atoll, while another project site on Majuro was abandoned because a lease agreement could not be concluded with the landowner. On Kwajalein Atoll, construction of Kwajalein Atoll High School was delayed because of the inability of the RMI government to secure a long-term lease from Kwajalein landowners for a site large enough to accommodate new facilities for up to 600 students. Similar problems delayed construction of Ebeye Elementary School. RMI projects were built where the land titles were clear and long-term leases were available. However, future RMI infrastructure projects may be delayed because of uncertainty regarding the land titles for remaining projects. The FSM and the RMI lack concrete plans for addressing the annual decrement in compact funding and, as a result of revenue shortfalls, will likely be unable to sustain current levels of government services as compact funding diminishes. In both countries, compact funding represents a significant portion of the government revenue—approximately 38 percent in the FSM and 27 percent in the RMI, according to the 2005 single audits. Personnel expenses account for a substantial share of compact funding expenditures. For example, 57 percent of the education sector grant in the FSM and 75 percent of the grant in the RMI paid for personnel in 2006. Over the past 5 years, government employment has grown in both countries: in the FSM, the public sector employment level has varied since 2000 but peaked for this period in 2005, while in the RMI, the government wage bill rose from $17 million in 2000 to $30 million in 2005. Given the countries’ current levels of spending on government services, the decrement—$800,000 per year for the FSM, beginning in 2007, and $500,000 per year for the RMI since 2005—will result in revenue shortfalls in both countries, absent additional sources of revenue. 
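The arithmetic of the decrement can be illustrated with a simple projection. In the sketch below, only the decrement sizes ($800,000 per year for the FSM and $500,000 per year for the RMI) come from the report; the base grant amounts and the ten-year horizon are hypothetical placeholders chosen purely for illustration.

```python
# Illustrative sketch only. The base_amount values below are hypothetical
# placeholders; only the decrement sizes ($800,000/year for the FSM and
# $500,000/year for the RMI) are taken from the amended compacts as
# described in the text.

def project_grants(base_amount, decrement, years):
    """Return nominal grant levels for each year, with a fixed annual
    decrement applied cumulatively from the starting amount."""
    return [base_amount - decrement * year for year in range(years)]

# Hypothetical starting amounts, in dollars, for illustration only.
fsm = project_grants(base_amount=76_000_000, decrement=800_000, years=10)
rmi = project_grants(base_amount=35_000_000, decrement=500_000, years=10)

# After nine annual decrements, the nominal gap relative to the first year:
fsm_shortfall = fsm[0] - fsm[-1]   # 9 * 800,000 = 7,200,000
rmi_shortfall = rmi[0] - rmi[-1]   # 9 * 500,000 = 4,500,000
print(fsm_shortfall, rmi_shortfall)
```

Note that this sketch shows only the nominal decrement; it omits the partial inflation adjustment provided for in the amended compacts, which affects the grants' real value as well.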
In addition, in the FSM, cessation of nonconforming uses of the public sector capacity building grant will require government operations currently supported by compact funds to rely on a different revenue source. Officials in the FSM and the RMI told us that they can compensate for the decrement in various ways, such as through the yearly partial adjustment for inflation, provided for in the amended compacts, or through improved tax collection. However, the partial nature of the adjustment causes the value of the grant to fall in real terms, independent of the decrement, thereby reducing the government’s ability to pay over time for imports, such as energy, pharmaceutical products, and medical equipment. Moreover, as we recently reported, although tax reform may provide opportunities for increasing annual government revenue in the FSM and the RMI, the International Monetary Fund, the Asian Development Bank, and other economic experts consider both nations’ business tax schemes to be inefficient because of a poor incentive structure and weak tax collection. In the FSM’s and the RMI’s responses to our draft report, both countries raised the possibility that the decrement’s negative effect might be addressed during the periodic bilateral review, which is called for every 5 years under the compact. The FSM distributed compact funding among its four states according to a formula that did not fully account for states’ differing population sizes or funding needs. The formula, established in an FSM law enacted in January 2005 and in force through 2006, allotted a set percentage to each state as well as 8.65 percent to the national government. Use of the distribution formula resulted in varying per capita compact funding among the states (see table 2). For example, we calculated that in 2006, Yap received more than twice as much education funding per student and health care funding per person as Chuuk. Both the FSM government and U.S.
officials acknowledged that the funding inequality resulted in different levels of government services across states, with particularly low levels of services in Chuuk. For example, an FSM health official told us that Chuuk’s low immunization rate is a result of low per-capita health funding, and, according to a U.S. health official, HHS immunization staff see Chuuk as vulnerable. However, as of October 2006, neither the FSM nor JEMCO had assessed the impact of such differences on the country’s ability to meet national goals or deliver services. Although the FSM and the RMI established performance measurement mechanisms, several factors limited the countries’ ability to assess progress toward compact goals. The FSM and the RMI development plans contain sector goals and objectives, and the countries are collecting performance indicators for health and education. However, neither country can assess progress using these indicators because of incomplete and poor quality data. Moreover, problems in the countries’ quarterly performance reports—disorganized structure in the FSM reports as well as incomplete and inaccurate information in both the FSM and the RMI reports—limit their usefulness for tracking performance. A lack of technical capacity also challenges the countries’ ability to collect performance data and measure progress. Both countries established development plans that include strategic goals and objectives for the sectors receiving compact funds. These strategic goals are broad—for example, both countries list improving primary health care as a strategic goal. In addition, the development plans list various objectives related to each strategic goal. 
For example, in the FSM, the objectives related to improving primary health care include (1) increasing by 20 percent the use of basic primary health care services provided at dispensaries and health centers and (2) decreasing by 50 percent the use of primary health care services provided at hospital outpatient clinics. According to OIA, outcome measures for some sectors in the FSM were inappropriate, absent, or poorly defined. The RMI health sector’s complex performance hierarchy and lack of readily available baselines for many measures initially made it difficult for the Ministry of Health to collect data. In 2004, JEMCO and JEMFAC required the countries to submit a streamlined and refined statement of performance measures, baseline data, and annual targets to enable the tracking of goals and objectives for education, the environment, health, private sector development, and public sector capacity building. The countries have developed some performance indicators that are intended to help demonstrate progress in education and health, as required by JEMCO and JEMFAC, but have not done so for the other sectors. In 2006, JEMFAC also required the RMI to include in its reports six performance indicators for the environmental sector and two performance indicators for private sector development. The FSM and the RMI ministries have begun to collect performance indicators for the education and health sectors, as required by JEMCO and JEMFAC. However, the ministries are not yet able to assess progress with the indicators, because baseline data for some indicators were incomplete and the quality of some data was poor. Education sector. As required by JEMCO, in 2005, the FSM began submitting data for 20 indicators to gauge progress in the education sector. In 2005, the FSM submitted some data for 11 of the 20 required education performance indicators.
In 2006, it submitted some data for all of the 20 indicators, with data for 5 indicators being incomplete because some states did not submit them. For example, none of the states submitted data for the number and percentage of high school graduates going to college. Chuuk and Yap did not provide the required average daily student attendance rate, and Kosrae, Pohnpei, and Yap did not provide data to establish a baseline for dropout rates. Furthermore, we found some of the data submitted to be of questionable quality. For example, Chuuk’s 2006 submission of data for the 20 indicators indicated a dropout rate of less than 1 percent. However, according to an expert familiar with the Chuuk education system, the actual dropout rate was much higher. Moreover, when comparing the 2005 and 2006 submissions, we identified possible problems with some of the most basic data, such as the number of teachers, students, and schools, due to inconsistent definitions of the indicators. For example, the student enrollment figure reported in 2006 was for public schools only, but the figure submitted in 2005 included both public and private schools, according to an FSM education official. Likewise, reporting on the number of teachers in the school system differed among states. For example, Chuuk reported only the number of teachers, while the other states also included nonteaching staff. Health sector. FSM state and national health directors agreed on 14 health indicators in April 2006 as a means to gauge progress. The FSM national government and all four states are collecting data for 9 of the 14 indicators, while data for the other 5 indicators have yet to be collected. According to the FSM national government, delays in collecting data for some indicators resulted from the time required to establish a common methodology—that is, definitions and processes—among all of the states and governments. 
Furthermore, we found that some of the health data collected were ambiguous and therefore difficult to use. For example, it was unclear whether reports on data from Yap’s outer islands relating to 1 of the 14 health indicators, the number of dispensary encounters, covered 1 or 2 months; according to a Yap health official, data for this indicator may be incomplete. Likewise, OIA’s health grant manager indicated that there are weaknesses in the FSM’s health data. Education sector. As required by JEMFAC in 2005, the RMI started tracking some of the 20 indicators as a way to gauge progress in the education sector. The RMI submitted data for 15 of the 20 required education performance indicators in 2005, repeating the submission in 2006 without updating the data, according to an OIA official. JEMFAC required the RMI to submit data for the 5 indicators omitted in 2005—including staff education levels and parent involvement—but did not receive them. In addition, some of the information reported was outdated. For example, the 2005 submission of data for an indicator on student proficiency was based on a test given in the RMI in 2002. Health sector. The RMI’s Ministry of Health began identifying performance indicators when the amended compact entered into force in 2004. Initially, the ministry developed numerous indicators, which, according to an OIA official, threatened to overwhelm the ministry’s capacity for data collection and management. The ministry has since made refinements and reduced the number of indicators to a more manageable size. However, according to an RMI government report for 2005, it is difficult to compare the ministry’s 2004 and 2005 performances because of gaps in the data reported. For example, limited data were available in 2004 for the outer island health care system and Kwajalein Atoll Health Services. According to the RMI government report, data collection improved and most needed data were available, but some data were still missing. 
Although the FSM and the RMI began compiling quarterly performance reports in 2004, as required by the fiscal procedures agreements, the usefulness of the reports for assessing progress toward sector goals is limited by several factors. First, the FSM’s reports had format problems, such as a lack of uniform structure, and some FSM reports were missing. Second, both countries’ reports contained incomplete activity-level information. Third, in both countries’ reports, some activity-level information, such as budget and expenditure data, was inaccurate. Problematic format. The usefulness of the FSM quarterly performance reports is diminished by a lack of uniform structure, excessive length, and disorganization. In addition, some FSM reports were missing. The five FSM governments’ quarterly 2005 performance reports that we reviewed lacked the uniform structure required by the fiscal procedures agreement. For example, while Kosrae combined sector and activities into one report, Pohnpei reported on each activity separately. Moreover, the volume of reporting was excessive. For example, the 2005 fourth-quarter reports for the FSM education sector totaled more than 600 pages for all five governments’ quarterly submissions and more than 1,500 pages for the entire year. The reports were also disorganized. For example, we found misfiled reports in the FSM’s submission to OIA. We also found that 19 sector reports were missing in 2005. Noting shortcomings similar to those we observed, officials from OIA and the FSM stated that the performance reports could not be used as an effective management tool. In contrast, the RMI reports were uniformly formatted, as specified by the fiscal procedures agreement, and all required reports were submitted to OIA. Incomplete information. Both countries’ quarterly reports lacked complete information on program activities.
For example, for 2005, the FSM national government’s second-quarter health sector report lacked information on the environmental health and food safety programs (although its other quarterly reports included such information), and Pohnpei’s first-quarter health sector report lacked information on 28 of 31 activities. In the fourth quarter of 2005, Kosrae did not provide budgetary and expenditure information regarding the provision of education and support services to individuals with disabilities. The RMI’s statistics office gathered information from the RMI’s 2005 quarterly performance reports, which contained primarily activity-level information, and attempted to assess progress in the various sectors. However, because of weaknesses in information collected in 2004, including missing information for some activities for entire quarters, the RMI had difficulty in making comparisons and determining whether progress was being made in many of its sectors. Inaccurate information. Both the FSM’s and the RMI’s quarterly performance reports contained inaccurate information on program activities. We found that the performance reports for the five FSM governments did not accurately track or report annual activity budgets or expenditures. For example, a 2005 Pohnpei education performance report stated that more than $100,000 per quarter was allocated to pay the salaries of two cultural studies teachers. The state’s Department of Education could not explain the high salary figure but indicated that the number was incorrect. According to FSM officials in the departments we visited, the departments were not given an opportunity to review the budget and expenditure data before the performance reports were sent to OCM and OIA and were therefore unaware of the errors. Some of the RMI’s quarterly performance reports also contained inaccuracies. 
For example, although the RMI’s private sector development performance report for the fourth quarter of 2005 stated that eight new businesses were created in 2005, officials from the Ministry of Resources and Development indicated that only four businesses were started that year. In addition, the RMI Ministry of Health’s 2005 fourth-quarter report contained incorrect outpatient numbers for the first three quarters, according to a hospital administrator in Majuro. In the RMI quarterly reports for the education sector, we found several errors in basic statistics, such as the number of students attending school. In addition, RMI Ministry of Education officials and officials in the other sectors told us that they had not been given the opportunity to review final performance reports compiled by the statistics office before the reports’ submission to OIA, and that they were unaware of the errors until we pointed them out. The FSM’s ability to measure progress is limited by its lack of capacity to collect, assemble, and analyze performance data. According to OIA, the education sector currently lacks a reliable system for the regular and systematic collection and dissemination of information and data. An OCM official also stated that the lack of performance baseline data for the private sector development and environment sectors could be attributable to “weak capacity in performance budgeting and reporting” and that staff lack expertise in one or both areas. The RMI statistics office, which is the main entity tasked to collect data, indicated that it is not currently able to assess progress toward compact and development plan goals because of the government’s lack of capacity to collect, assemble, and analyze data in all sectors. Likewise, the office’s own capacity is limited. Officials from the office emphasized the importance of building capacity in the ministries to evaluate their activities. 
In particular, they said that improvements in data collection would enable ministries to respond quickly to requests for information from both national and international sources. For example, the officials noted that the Ministry of Education needs to develop measures to report on the quality of education. The officials also noted that other offices in the ministry should hire more trained professionals, such as the recently hired Assistant Secretary of Administration with a graduate degree in public administration. The FSM’s and the RMI’s required monitoring of sector grant performance was limited by capacity constraints, among other challenges. In addition, the countries’ single audit reports for 2004 and 2005, particularly the FSM’s reports, indicated weaknesses in the countries’ financial statements and compliance with the requirements of major federal programs, calling into question their accountability for the use of compact funds. However, the FSM’s timeliness in submitting its single audit reports improved from 2004 to 2005, and the RMI submitted its single audit reports for these 2 years on time. The FSM’s monitoring of sector grant performance, required by the fiscal procedures agreement, was limited at the national and state levels by lack of capacity in the FSM’s OCM and in the state governments, among other factors. In addition, the FSM’s single audit reports for 2004 and 2005 showed weaknesses in its financial statements and a lack of compliance with requirements of major federal programs, suggesting that the FSM has limited ability to account for the use of compact funds. However, the government’s timeliness in submitting its audit reports improved. The FSM national government provided limited monitoring of the day-to-day operations of sector grants in 2004 through 2006. 
In addition to facilitating coordination and communication between the national government and the states and between the FSM and OIA, OCM is intended to have some responsibility for overseeing compact-funded programs. However, according to the office’s director, OCM has neither the staff nor the budget to undertake such activities. As of November 2006, OCM had five of its own professional staff, including the director. Prior to 2007, staff from other FSM national departments were assigned to the office, but only the economic affairs and finance departments provided detailees. One staff member was converted to a permanent hire in OCM, and it is unclear whether the other detailee will remain at OCM or return to the Office of Economic Affairs. The FSM Office of the National Public Auditor had not conducted any performance or financial audits of compact sector grants. The FSM states, as subgrantees of compact funds, are required to submit performance reports to the FSM national government. However, the Director of OCM indicated that he did not know how or whether each state, other than Chuuk, was set up to perform day-to-day monitoring of sector grants. In Chuuk, a financial control commission was established in July 2005 to address financial management and accountability requirements. However, while the commission had exercised a financial control function, it had not monitored the performance of the sector grants. In addition, the FSM’s Secretary of Foreign Affairs and JEMCO representative told us that all of the states were weak on monitoring. Although the states’ public auditors could conduct audits of compact performance, their efforts had been limited to financial audits. For example, in both Yap and Pohnpei, the public auditor’s office issued four audits in 2005, two of which were for compact-funded activities.
Furthermore, in Chuuk, the public auditor position required by the state constitution was not filled, prompting JEMCO to deny the Chuuk auditor’s office state-budgeted funds. The FSM’s single audit reports for 2004 and 2005 showed that the FSM’s ability to account for the use of compact funds was limited, as shown by weaknesses in its financial statements and lack of compliance with requirements of major federal programs. However, the FSM’s timeliness in submitting its audit reports improved during this period. FSM financial statements. In general, the FSM single audit reports call into question the reliability of the country’s financial statements. Of the single audit reports that the FSM national and state governments submitted for 2004 and 2005, only one report—Pohnpei state’s report for 2005—contained an unqualified opinion on the financial statements, while the other reports contained qualified, adverse, or disclaimed opinions. (See app. V for the FSM’s single audit financial statement opinions.) For example, for the FSM 2005 reports, the auditors’ inability to obtain audited financial statements for several subgrantees led them in part to render qualified opinions. Chuuk reports for 2004 and 2005 contained disclaimers of opinion related to seven and eight major issues, respectively, including the inability of auditors to determine the propriety of government expenses, fixed assets, cash, and receivables; the capital assets of one of its subunits; and the accounts payable and expenses of the Chuuk State Health Care Plan. In addition, the single audit reports include specific findings related to the financial statements. 
For example, the national and state governments’ 2005 single audit reports contained 57 reportable findings of material weaknesses and reportable conditions in the governments’ financial statements, such as the lack of sufficient documentation for (1) the disposal of fixed assets, including a two-story building, and (2) purchases of vehicles and copiers. Fourteen of the FSM 2005 findings had been cited as reportable findings in previous audits. FSM compliance with requirements of major federal programs. Each of the FSM national and state governments’ single audit reports for 2004 and 2005 contained qualified opinions on the governments’ compliance with requirements of major federal programs, and the 2004 and 2005 reports noted a total of 47 and 45 reported weaknesses, respectively, regarding compliance. (App. V shows the FSM single audit reports’ total numbers of material weaknesses and reportable conditions regarding compliance with requirements of major federal programs.) Four of the 2005 reports’ 45 findings recurred from the 2004 reports. In 2006, the FSM developed corrective action plans that addressed 60 percent of the 2005 audit findings of noncompliance. Timeliness of audits. The timeliness of the FSM national and state governments’ submission of single audit reports improved from 2004 to 2005. The national government submitted its 2004 and 2005 single audits in August and September 2006, 14 and 2 months, respectively, after the due dates. While the four FSM states submitted their 2004 single audits from 7 to 13 months after the due dates, three of the four states submitted their 2005 audits within the 9-month period allowed by OIA. The RMI government provided limited monitoring of sector grants, in part because of the lack of capacity in the Chief Secretary’s office and in most ministries that receive compact funds.
The RMI’s single audit reports for 2004 and 2005 indicated weaknesses in its financial statements and compliance with requirements of major federal programs. However, the government developed corrective action plans to address the 2005 findings related to such compliance. The RMI government submitted its single audits for 2004 and 2005 on time. The RMI’s Chief Secretary, who is responsible for compact implementation and oversight, monitored sector grant operations on a limited basis. Day-to-day monitoring and oversight responsibilities were delegated to the ministries that receive compact funds. According to the RMI’s statistical office, it lacked the time and resources to devote to oversight and focused instead on helping the ministries to develop the annual budgets and sector portfolios and the quarterly and annual monitoring and performance reports. The office noted the ministries’ lack of personnel and skills needed to collect, assemble, and analyze data and emphasized the importance of building the ministries’ capacity to monitor and evaluate their own compact-funded activities. (However, according to an OIA official, the Ministry of Health made important strides in measuring performance and using performance management to improve the delivery of services.) The RMI Auditor General’s office conducted financial audits, but no performance audits, of compact sector grants. The RMI, like the FSM, failed to submit its required annual reports in a timely manner. The RMI’s single audit reports for 2004 and 2005 contained opinions and findings that indicated weaknesses in its financial statements and compliance with requirements of major federal programs. However, the government developed a corrective action plan that addressed all of the findings on compliance in its 2005 single audit report. The RMI submitted both of the single audit reports on time. RMI financial statements. 
The RMI’s single audit reports for 2004 and 2005 contained qualified opinions on the government’s financial statements. (See app. V for a list of the opinions on financial statements in the RMI’s audit reports for 2001 through 2005.) For example, several of the RMI’s subgrantees, such as the Ministry of Education’s Head Start program and the Kwajalein Atoll Joint Utilities Resources, Inc., were unable to produce audited financial statements. In addition, the 2005 single audit found two reportable findings in the RMI’s financial statements. The report cited the lack of audited financial statements and the lack of a complete asset inventory listing in the RMI as material weaknesses. Both of these findings had been cited in previous audits. RMI compliance with requirements of major federal programs. Both of the RMI’s single audit reports for 2004 and 2005 contained qualified opinions on the government’s compliance with requirements of major federal programs. In addition, the 2005 report noted 11 reported weaknesses in the country’s compliance with requirements of major federal programs. The RMI developed corrective action plans to address all of these findings, 2 of which had recurred from 2004. (App. V shows the total number of material weaknesses and reportable conditions findings for the RMI for 2001 through 2005 single audit reports.) Timeliness of audits. The RMI submitted its 2004 and 2005 single audit reports within the 9-month period required by the Single Audit Act. As administrator of the amended compact grants, OIA monitored the FSM’s and RMI’s sector grant and fiscal performance, assessed their compliance with compact conditions, and took action to correct persistent shortcomings. However, although OIA provided technical assistance to help the FSM improve its single audit timeliness, the office did not address recurrent findings and adverse opinions in the FSM and the RMI audits. 
OIA’s oversight efforts were hindered by the need to address problems in the FSM and by internal staffing challenges. In addition, Interior’s Office of Inspector General actively engaged in reviewing the countries’ implementation of the compact, although the office did not release its products to the public, and, as of October 2006, several reports remained in draft form. OIA undertook several administrative oversight efforts, including monitoring the countries’ sector grant performance, monitoring the countries’ fiscal performance and sector grant outlays, and assessing the countries’ compliance with sector grant conditions. OIA’s efforts also included actions such as suspending or withholding grant payments in response to persistent shortcomings that it identified. Monitoring sector grant performance. OIA grant managers monitored the countries’ sector grant performance, using site visits and analysis of the quarterly sector performance reports. For example, in 2006, OIA’s visits and analyses led it to determine that 14 of the 61 offices in the FSM that receive private sector and environment sector grants were underperforming or nonperforming. As a remedy, OIA recommended, and JEMCO agreed, that future sector funding for these entities should be on a project basis. Also, in response to the shortcomings of the FSM’s and RMI’s performance evaluations for 2004 and 2005, JEMCO and JEMFAC, under OIA’s chairmanship, called for improved performance measurement and monitoring. In the FSM, JEMCO reprogrammed unused compact funds to improve capacity in this area. In addition, in response to recurrent lack of uniformity in the FSM’s performance reports, OIA rejected the first-quarter reports for 2006 (although it accepted nonuniform FSM reports later in the year). Although OIA had used the performance reports to monitor sector performance, it was unaware, until we notified the office, that almost 20 percent of the FSM’s 2005 performance reports were missing.
Monitoring sector grant outlays and fiscal performance. OIA monitored the countries’ fiscal performance and sector grant outlays through analyses of the countries’ quarterly financial reports and, as Chair of JEMCO and JEMFAC, through reviews of the countries’ single audit reports. In August 2004, OIA analyses of both countries’ third-quarter cash transactions reports showed that some sector grant funding had not been spent. In response, OIA delayed payments to the FSM and the RMI for those sectors. Reviewing single audit reports. As Chair of JEMCO and JEMFAC, OIA led the committees’ reviews of, and responses to, the FSM’s and the RMI’s single audit reports. At a March 2006 JEMCO meeting, noting that single audits were the most important indicator of financial stability provided by a grantee to a grantor, OIA’s Director of Budget and Grants Management said that OIA intended to “apply a remedy” for single audit noncompliance beginning October 1, 2006, if the FSM failed to complete all of its audit reports by July 1, 2006, or within 3 months of the due date. The Director stated that OIA’s response would include withholding cash payments for various grants not related to the provision of medical care, emergency public health, or essential public safety. The Director also stated that OIA would notify and seek the concurrence of other U.S. agencies providing financial and technical assistance in designating the FSM a “high-risk grantee.” Three FSM states met OIA’s July 1 deadline, while the national government and Chuuk missed the deadline by 2 and 1 months, respectively. OIA ensured that the FSM received technical assistance to help address its single audit reports’ lack of timeliness, placing advisors through a third party in the state governments to facilitate their completion of overdue reports. 
In 2004, we recommended that OIA initiate appropriate actions to correct compact-related single audit findings and respond to violations of grant conditions or misuse of funds identified by single audits. Since then, OIA has provided technical advice and assistance to help the FSM and the RMI improve the quality of their financial statements and develop controls to resolve audit findings and prevent recurrences. Assessing compliance with grant conditions. OIA assessed the FSM’s and the RMI’s compliance with sector grant conditions through site visits to the countries and reviews of the countries’ submitted paperwork. In certain instances of the FSM’s or the RMI’s noncompliance with grant conditions, OIA monitored progress toward meeting the requirements and allowed the countries more time, while in other instances, OIA did not specifically address FSM or RMI noncompliance. (See apps. VI and VII for a list of sector grant special terms and conditions and their status.) However, OIA took corrective actions in several instances. Suspended grant funding. In December 2004, OIA staff conducting a site visit were unable to verify that food purchased under the Chuuk education grant’s meal service program had been received by the Chuuk Education Department or served to students. In response, OIA suspended the program’s 2005 funding of almost $1 million. OIA contacted Interior’s Office of Inspector General for a follow-up investigation to determine whether Chuuk was misusing compact funds. Withheld grant funding. OIA withheld the FSM’s May and June 2004 public sector capacity building and private sector development grant funding—approximately $2.4 million—when the FSM national government missed a March 2004 deadline to provide a transition plan for ending nonconforming use of the grant. In addition, OIA withheld awarded funds for the FSM infrastructure grant and the RMI Kwajalein special needs grant until the countries met grant terms.
After our July 2005 report, which recommended that OIA determine the amount of staff travel to the FSM and the RMI needed to promote compliance with compact and grant requirements, OIA travel to the countries increased. Whereas travel to the two countries accounted for 15 percent of overall staff time in 2004, it rose to 20 percent in 2005 and 25 percent for the first three quarters of 2006. However, according to an OIA assessment, OIA’s current budget does not support extended, detailed reviews of U.S. funds in the various remote islands. OIA’s oversight was hampered by the need to respond to problems in the FSM as well as by the office’s difficulty in filling staff positions. FSM challenges. The need to respond to various challenges facing the FSM reduced OIA’s administrative oversight of assistance provided under the compact. According to the Director of OIA, the FSM’s budgets for 2005 through 2007 were poorly prepared, and, as a result, OIA grant managers were forced to spend an inordinate amount of time readying the budgets for the JEMCO meetings. In addition, according to OIA’s Director of Budget and Grants Management, the constant need to respond to emergent issues, such as education issues in Chuuk and land issues in the FSM, limited OIA’s ability to conduct oversight. Staffing challenges. Although OIA increased the 2006 budget for the Honolulu field office so that it could increase the number of staff positions, those new positions remained vacant. In December 2005, an advertised position to be based in Guam went unfilled, while an education grant specialist position in Honolulu was advertised twice after April 2006 but remained vacant for the entire fiscal year. In addition, the OIA private sector development and environment specialist position became vacant in September 2006. Interior’s Office of Inspector General undertook compact oversight activities, finding deficiencies in the FSM’s and the RMI’s compact implementation and accountability. 
In 2005 and 2006, the Inspector General conducted six reviews (three remained in draft form as of October 2006) addressing the following issues: environmental and public health concerns in Chuuk (draft dated June 2005); student meal programs in Chuuk (draft dated June 2005); the RMI’s progress in implementing the amended compact (final report issued August 2005); the FSM’s progress in implementing the amended compact (draft dated January 2006); the FSM’s infrastructure grant implementation (final report issued July 2006); and the FSM’s compact trust fund status (final report issued July 2006). Although the Inspector General distributed the three final reports to OIA and the FSM and the RMI governments, the final reports were not released to the public or disseminated widely in the FSM and the RMI. However, one of the draft reports circulated unofficially and was cited by the media. According to the Inspector General, the reports are considered advisory in nature and, as such, no specific response is required from OIA regarding the recommendations. Nonetheless, OIA officials stated that the office has found the recommendations useful and has made an effort to address them. Since enactment of the amended U.S. compacts with the FSM and the RMI, the two countries have made significant efforts to meet new requirements for implementation, performance measurement, and oversight. However, in attempting to meet these requirements, both countries face significant challenges that, unless addressed, will hamper the countries’ progress toward their goals of economic advancement and budgetary self-reliance before the annual grant assistance ends in 2023. In 2004 through 2006, compact grants were, for the most part, allocated among the countries’ six sectors as required, with emphasis on health, education, and infrastructure, and the countries have made progress in implementing the grants in most sectors.
However, despite the revenue shortfalls they will face with the scheduled grant decrements, neither nation has concrete plans to raise the funds needed to maintain government services in the coming years. Furthermore, although the FSM’s allocation of funds among the states and among sectors caused significant inequalities in per-student support for education and per-capita funding for health care, neither the FSM nor JEMCO evaluated the impact of these differences on the country’s ability to meet national goals or deliver services. Furthermore, although the countries worked to develop the sector grant performance indicators required by JEMCO and JEMFAC, a lack of complete and reliable baseline data limited the countries’ use of the indicators to measure performance and evaluate progress. Moreover, weaknesses in the countries’ required quarterly performance reports—including missing and, in some cases, inaccurate activity data—limited the reports’ usefulness. Unless the FSM and the RMI take steps to correct these weaknesses in performance measurement, their ability to use the sector grants to optimal effect will continue to be curtailed. Given the FSM’s and the RMI’s need to maximize the benefits of compact assistance before the 2023 expiration of annual grants and to make steady progress toward the amended compact goals, we are providing the following seven recommendations to the Secretary of the Interior. To improve FSM grant administration, planning, and measurement of progress toward compact goals, and to ensure oversight, monitoring, and accountability for FSM compact expenditures, we recommend that the Secretary of the Interior direct the Deputy Assistant Secretary for Insular Affairs, as Chairman of JEMCO, to coordinate with other U.S. 
agencies on the committee in working with the FSM national government to take the following actions: establish plans for sector spending and investment by the FSM national and state governments to minimize any adverse consequence of reduced funding resulting from the annual decrement or partial inflation adjustment; evaluate the impact of the current FSM distribution between states and sectors on the ability of the nation to meet national goals or deliver services; fully develop the mechanism for measuring sector grant performance and collect complete baseline data to track progress toward development goals; and ensure that the quarterly performance reports contain reliable and verified program and financial information for use as a monitoring tool by both the FSM and the U.S. governments. To improve RMI grant administration, planning, and measurement of progress toward compact goals, and to ensure oversight, monitoring, and accountability for RMI compact expenditures, we recommend that the Secretary of the Interior direct the Deputy Assistant Secretary for Insular Affairs, as Chairman of JEMFAC, to coordinate with other U.S. agencies on the committee in working with the RMI government to take the following actions: establish plans for sector spending and investment that minimize any adverse consequence of reduced funding resulting from the annual decrement or partial inflation adjustment; fully develop the mechanism for measuring sector grant performance and collect complete baseline data to track progress toward development goals; and ensure that the quarterly performance reports contain reliable and verified program and financial information for use as a monitoring tool by the RMI and the U.S. governments. We received comments from the Department of the Interior as well as from the FSM and the RMI (see apps. VIII through X for detailed presentations of, and our responses to, these comments).
We also received technical comments from the Departments of Education, Health and Human Services, and State, which we incorporated in our report as appropriate. Interior concurred with our recommendations and stated that the report was accurate and well balanced. The FSM also viewed the report as a balanced and fair assessment of its progress in planning for sustainability, measuring progress, and ensuring accountability and agreed with our overall conclusion that it faces significant challenges in meeting the various amended compact requirements. The FSM, however, defended its distribution formula for allocating compact funds to the national and state governments. The RMI acknowledged that its lack of capacity has slowed its implementation of the compact’s monitoring and reporting requirements. The RMI also stated that it has refrained from expanding ministry staffs, given the need for budgetary restraint. In addition to providing copies of this report to your offices, we will send copies to interested congressional committees. We will also provide copies of this report to the Secretaries of Education, Health and Human Services, the Interior, and State as well as the President of the Federated States of Micronesia and the President of the Republic of the Marshall Islands. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XI. 
This report examines, for 2004 through 2006, (1) the Federated States of Micronesia’s (FSM) and the Republic of the Marshall Islands’ (RMI) use of compact funds; (2) FSM and RMI efforts to assess progress toward their stated development and sector goals; (3) FSM and RMI monitoring of sector grants and accountability for the use of compact funds; and (4) the Department of the Interior’s (Interior) administrative oversight of the compacts. Appendix II contains information about activities funded by key U.S. programs. To report on the FSM’s and the RMI’s use of amended compact funds, we reviewed the U.S., FSM, and RMI annual compact reports for 2004 and 2005; FSM and RMI strategic planning documents and budgets; briefing documents prepared by Interior’s Office of Insular Affairs (OIA) in preparation for the annual bilateral meetings with the two countries; and FSM and RMI single audits for 2001 through 2005. We reviewed all 2004, 2005, and 2006 grant agreements with both countries obtained from OIA, including special terms and conditions included in these agreements. We compared and analyzed fund uses against the purposes specified in the amended compacts, the implementing legislation, subsidiary fiscal procedures agreements, and sector grant special terms and conditions. To identify issues affecting the use of compact funds, we discussed planning efforts with U.S., FSM, and RMI government officials and, through our own analysis, identified issues that affected planning, such as the FSM’s use of its distribution formula. We reviewed relevant documents such as FSM and RMI legislation, and we also reviewed documentation provided to the U.S. government, such as the FSM’s transition plan to eliminate the nonconforming spending under the public sector capacity building grant. To compute education spending per student, we used FSM and RMI grant data and student and population statistics.
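The per-student computation described above reduces to dividing each state’s education sector grant by its enrollment. A minimal sketch follows, using entirely hypothetical grant and enrollment figures; the actual inputs were FSM and RMI grant data and official student statistics.

```python
# Hypothetical education sector grants (U.S. dollars) and enrollment by
# FSM state; illustrative figures only, not actual grant data.
education_grants = {"Chuuk": 9_000_000, "Kosrae": 2_000_000,
                    "Pohnpei": 6_000_000, "Yap": 2_500_000}
enrollment = {"Chuuk": 16_000, "Kosrae": 2_400,
              "Pohnpei": 9_500, "Yap": 3_000}

# Per-student support: grant amount divided by students enrolled.
per_student = {state: education_grants[state] / enrollment[state]
               for state in education_grants}

for state, amount in sorted(per_student.items(), key=lambda kv: kv[1]):
    print(f"{state}: ${amount:,.2f} per student")
```

Comparing the resulting figures across states is what surfaces the per-student inequalities discussed in the report; the same division, with grant and population data, yields per-capita health spending.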
To calculate the variability in health spending per capita across the four FSM states, we used FSM grant data and population statistics. We did not calculate health spending per capita for the RMI. We determined that these data were sufficiently reliable for the purposes of our report. Although we were asked to evaluate the effectiveness of the compact funds, we determined it was too soon after the amended compacts’ implementation to do so; therefore, we report on whether the countries are able to measure progress. To identify FSM and RMI efforts to assess progress toward their stated goals, we reviewed FSM and RMI strategic planning documents. We evaluated the framework in place for the FSM and the RMI to measure the achievement of stated goals in strategic planning documents and compared it with the countries’ budget and quarterly performance documents. To determine whether the quarterly performance reports were being used as a tool to measure progress, we analyzed the 2005 quarterly performance reports for consistency across five sectors and for the accuracy of their budget information. We then verified the results of our analyses with each office or department we interviewed in the FSM and the RMI in March and April 2006. We asked whether they used these reports to measure progress and discussed discrepancies we found in the reports. To identify obstacles to measurement and achievement of goals, we reviewed the U.S. annual compact reports for 2004 and 2005, FSM and RMI annual compact reports for 2004 and 2005, FSM and RMI strategic planning documents and budgets, U.S. government briefing documents, and the RMI’s 2005 Performance Monitoring Report. We verified this information with FSM, RMI, and OIA officials. To identify the extent to which the FSM and RMI governments conducted monitoring and accountability activities, we reviewed the amended compacts and fiscal procedures agreements to identify specific monitoring responsibilities. We also reviewed the U.S.
government briefing documents, as well as the minutes and resolutions, when available, that were related to the Joint Economic Management Committee (JEMCO) and Joint Economic Management and Financial Accountability Committee (JEMFAC) meetings. We further reviewed FSM and RMI documents—such as budget justifications and portfolios, quarterly performance reports, and annual financial reports for 2004 through 2006, as available—submitted by the FSM and RMI governments to the U.S. government to confirm compliance with accountability reporting requirements. We discussed the sufficiency of quarterly performance reports with OIA officials. We obtained the single audit reports for 2001 through 2005 from the FSM National Auditor’s Web site and the RMI’s Office of the Auditor General. These reports included audits for the FSM national government; the state governments of Chuuk, Kosrae, Pohnpei, and Yap; and the RMI national government. In total, the 30 single audit reports covered 5 years, a period that we considered sufficient for identifying common or persistent compliance and financial management problems involving U.S. funds. We determined the timeliness of submission of the single audit reports by the governments using the Federal Audit Clearinghouse’s (FAC) “Form Date,” which is the most recent date that the required SF-SAC data collection form was received by the FAC. We noted that the Form Date is updated if revised SF-SACs for that same fiscal year are subsequently filed. Our review of the contents of the single audit reports identified the auditors’ opinions on the financial statements, matters cited by the auditors in their qualified opinions, the numbers of material weaknesses and reportable conditions reported by the auditors, and the status of corrective actions. We did not independently assess the quality of the audits or the reliability of the audit finding information.
We analyzed the audit findings to determine whether they had recurred in successive single audits and were still occurring in their most recent audit, and we categorized the auditors’ opinions on the financial statements and the Schedules of Expenditures of Federal Awards. To determine oversight activities conducted by the OIA Honolulu office, we reviewed senior management statements regarding the purpose and function of this office and job descriptions for all staff. To identify the extent to which the Honolulu office staff traveled to the FSM and the RMI, we obtained the travel records for all program specialists and discussed this information with OIA officials to ensure that these data were sufficiently reliable for our use. We calculated the percentage of time spent conducting on-site reviews in the two countries between 2004 and the third quarter of 2006 and compared these data with the total available work time for the program specialists. In addition, to report on the FSM’s and the RMI’s use of noncompact federal funds, we updated a prior review of U.S. programs and services that GAO issued in 2002. The prior review selected 13 programs and services, including those with the largest expenditures and loans over a 15-year period, as well as each of the services that the U.S. government agreed to provide under the compact. Funding for 3 of these programs was consolidated into the Supplemental Education Grant under the amended compacts and was excluded from this update. Moreover, to report on OIA-awarded technical assistance and operations and maintenance improvement program grants, we selected several projects that assisted compact implementation or complemented sector grant priorities, such as education and health, from among grants awarded to the FSM and the RMI for 2004 and 2005. We also requested applications and grant evaluation information for these projects from OIA.
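The on-site review calculation described above is a simple ratio of travel days to available work time. A minimal sketch follows, using hypothetical per-trip day counts and an assumed number of available workdays per specialist; the actual inputs were OIA travel records for 2004 through the third quarter of 2006.

```python
# Hypothetical days spent on-site in the FSM and the RMI, per program
# specialist and per trip; illustrative figures only.
trip_days = {
    "specialist_a": [10, 7, 12],
    "specialist_b": [8, 9],
    "specialist_c": [5],
}

# Assumed available workdays per specialist over the review period.
WORKDAYS_PER_SPECIALIST = 715

total_on_site = sum(sum(days) for days in trip_days.values())
total_available = WORKDAYS_PER_SPECIALIST * len(trip_days)
pct_on_site = 100 * total_on_site / total_available

print(f"{total_on_site} of {total_available} available workdays on-site "
      f"({pct_on_site:.1f}%)")
```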
To determine the total amount of noncompact federal funding that the FSM received from the United States, we used the schedules of expenditures of federal awards from the 2004 and 2005 single audit reports of the FSM national government, the four FSM states, and the College of Micronesia to calculate total FSM expenditures. For the FSM national government expenditure total, we included only direct expenditures and did not include funds that were passed from the national government to the states. We compiled the expenditure amounts passed directly to the four states from each state’s respective single audit report and combined these state totals and the national government totals to obtain the total FSM expenditure amount. We excluded compact and amended compact expenditures from our calculation. For the RMI, we used the federal awards sections of the RMI’s and the College of the Marshall Islands’ 2004 and 2005 single audit reports to calculate total RMI expenditures. We compared the amount of compact funding for the FSM and the RMI with the total amount of federal expenditures for 2004 and 2005 to determine the percentage of noncompact U.S. federal funding. To address all of our objectives, we held interviews with officials from Interior (Washington, D.C.; Honolulu, Hawaii; the FSM; and the RMI) and the Department of State (Washington, the FSM, and the RMI). We also interviewed officials from the Departments of Health and Human Services (Washington and Honolulu); Education (Washington; San Francisco, California; and Seattle, Washington); and Agriculture (Washington, Honolulu, and Guam); the Federal Aviation Administration (Honolulu); the National Weather Service (Honolulu); the Federal Emergency Management Agency (FEMA) (San Francisco and Honolulu); and the U.S. Postal Service (Honolulu). We traveled to the FSM (Chuuk, Kosrae, Pohnpei, and Yap) and the RMI (Arno, Kwajalein, and Majuro Atolls).
In addition, in Chuuk, we visited the islands of Fanapangas, Fefan, Polle, Toll, Tonoas, Udot, Uman, Ut, and Weno. In both countries, we visited primary and secondary schools, colleges, hospitals, dispensaries and community health centers, farms, fisheries, post offices, weather stations, telecommunication offices, and airport facilities. We discussed compact implementation with FSM officials (from the national, Chuuk, Kosrae, Pohnpei, and Yap governments) and with RMI officials from foreign affairs, finance, budget, health, education, public works, and audit agencies. Furthermore, we met with the RMI’s Chief Secretary and the FSM’s Office of Compact Management. In Kwajalein Atoll, we also met with officials from the U.S. Army Kwajalein Atoll and with Ebeye’s mayor, Ministry of Finance staff, and public utility, health, and education officials to discuss compact implementation issues. We met with a representative from the FSM’s Micronesian Seminar, a nonprofit organization in Pohnpei that provides public education on current FSM events, to obtain views on compact implementation and development issues. We also observed 2005 and 2006 JEMCO and JEMFAC meetings. We met with officials from Interior’s Office of Inspector General (Guam, Honolulu, and Washington) to discuss ongoing investigations in the FSM and the RMI. We conducted our review from October 2005 through December 2006 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Departments of the Interior, State, and Health and Human Services as well as the governments of the FSM and the RMI. All of these entities’ comments are discussed in the report and are reprinted in appendixes VIII through X. In addition, we considered all technical comments and made changes to the report, as appropriate. In addition to compact funding, both the FSM and the RMI received approximately 30 percent of their total U.S.
expenditures during 2004 and 2005 from other federal agencies, including the Departments of Agriculture, Education, Health and Human Services, and Transportation. As part of the amended compacts’ subsidiary agreements with the RMI and the FSM, the United States agreed to extend and subsidize essential federal services, such as weather, aviation, and postal services that were provided under the original compact. The amended compacts also extend the programs and services of FEMA to the FSM and the RMI, but only until December 2008. At that time, responsibility for disaster assistance in the countries will be transferred from FEMA to the United States Agency for International Development. U.S. program assistance is authorized by various sources, including the amended compacts and their implementing legislation as well as other U.S. legislation. Table 3 shows the amount of noncompact U.S. program funds expended on the FSM and the RMI for 2004 and 2005. Details of several key U.S. programs follow in tables 4 through 14. Table 16 lists the compact sector grant allocation to the five FSM governments in 2004 through 2006. Table 17 lists the compact sector grant allocation of the RMI, including the Kwajalein funding, in 2004 through 2006. The FSM national government and the individual states in most cases did not submit their required single audit reports on time for 2001 through 2005, while the RMI has generally improved the timeliness of its single audits, with its last three reports submitted by the established deadlines. In nearly all cases, auditors rendered qualified audit opinions on both the financial reporting and compliance with requirements of major federal programs for those single audit reports that were submitted. Furthermore, internal control weaknesses have persisted in both countries since we last reported on single audits in October 2003.
In March 2006, JEMCO threatened to take action, such as withholding funds, designating the FSM as a high-risk grantee, or conditionally approving sector grants for 2007, if the FSM and its states did not submit their 2005 single audits by July 1, 2006. The FSM and the RMI are required to submit audit reports each year to comply with compact and fiscal procedures agreement requirements. The submitted audits are to be conducted within the meaning of the Single Audit Act, as amended. Single audits are a key control for the oversight and monitoring of the FSM and RMI governments’ use of U.S. awards, and are due to the Federal Audit Clearinghouse 9 months after the end of the audited period. All single audit reports include the auditor’s opinion on the audited financial statements and a report on the internal controls related to financial reporting. The single audit reports also include the auditor’s opinion on compliance with requirements of major federal programs and a report on internal controls related to compliance with laws, regulations, and the provisions of contracts or grant agreements. The FSM national government and the individual states in most cases did not submit their single audit reports on time for 2001 through 2005, while the RMI has generally improved the timeliness of its single audits, with its last three reports submitted by the established deadlines. Table 18 shows the timeliness of reports for the FSM and the RMI. The lack of timeliness of the single audit reports for 2001 through 2005, especially for the FSM and its four states, has meant that U.S. agencies have limited knowledge of these governments’ accountability over U.S. funds received.
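The timeliness test applied to each report follows directly from the 9-month deadline described above: a report is timely if its Federal Audit Clearinghouse Form Date falls on or before the due date. A minimal sketch follows, assuming a September 30 fiscal year end and illustrative submission dates.

```python
from datetime import date

def single_audit_due_date(fiscal_year_end: date) -> date:
    """Due date: nine calendar months after the fiscal year end.

    The day of month is carried over unchanged, which is valid for the
    September 30 fiscal year end assumed here.
    """
    month = fiscal_year_end.month + 9
    year = fiscal_year_end.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    return fiscal_year_end.replace(year=year, month=month)

def is_timely(form_date: date, fiscal_year_end: date) -> bool:
    """Timely if the FAC Form Date is on or before the due date."""
    return form_date <= single_audit_due_date(fiscal_year_end)

# Fiscal year 2005 ended September 30, 2005, so the report was due June 30, 2006.
fy2005_end = date(2005, 9, 30)
print(single_audit_due_date(fy2005_end))         # 2006-06-30
print(is_timely(date(2006, 5, 15), fy2005_end))  # submitted early: True
print(is_timely(date(2006, 8, 1), fy2005_end))   # submitted late: False
```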
In addition, the governments’ inability to prepare financial statements and have them audited within 9 months of the fiscal year-end suggests weaknesses in the underlying financial systems and processes needed to produce financial information to efficiently and effectively manage the day-to-day operations of government. Among the 30 audit reports on financial reporting submitted by the FSM national and its state governments and the RMI for 2001 through 2005, 26 reports received qualified opinions. In 2005, Pohnpei received an unqualified (“clean”) audit opinion on its financial statements. In 2004 and 2005, Chuuk received a disclaimed opinion on its financial statement, and Yap received a qualified/adverse opinion on its 2004 financial statement. Table 19 shows the type of financial statement audit opinions for the FSM and the RMI from 2001 through 2005. All of the audit opinions of the FSM national government’s financial statements from 2001 through 2005 were qualified. The opinions were qualified because of the lack of supporting evidence and restrictions on the scope of the audit. For example, the auditors qualified their opinion on the financial statements in the 2005 FSM report due to the following matters: Their inability to determine (1) the propriety of cash and cash equivalents, receivables, advances, and amounts due to the FSM state governments for the governmental activities and the general fund; (2) receivables and amounts due to the FSM state governments for the U.S. Federal Grants Fund and the aggregate remaining fund information; and (3) cash and cash equivalents and receivables for the Asian Development Bank Loan Fund, and their effect on the determination of revenues and expenditures/expenses for government activities and the aggregate remaining fund information.
The lack of audited financial statements of the National Fisheries Corporation; Micronesia Longline Fishing Company; Yap Fishing Corporation; Yap Fresh Tuna Inc.; Chuuk Fresh Tuna, Inc.; and Kosrae Sea Venture, Inc. In addition, all of the audit opinions of the RMI’s financial statements during the 2001 through 2005 period were qualified. For example, as of 2005, the auditors still could not determine the following: the propriety of governmental activities’ capital assets; net assets invested in capital assets, net of related debt; and depreciation expenses. The auditors also were unable to obtain audited financial statements for the following RMI component units: Ministry of Education Head Start Program; Air Marshall Islands, Inc.; Kwajalein Atoll Joint Utilities Resources, Inc.; and Marshall Islands Development Bank. The single audits also identified material weaknesses and reportable conditions related to the 2005 financial statement reports, totaling 57 for the FSM and 2 for the RMI (see table 20). These findings indicated a lack of sound internal control over financial reporting, which is needed to (1) adequately safeguard assets; (2) ensure that transactions are properly recorded; and (3) prevent or detect fraud, waste, and abuse. For example, in the 2005 FSM single audit report, material weaknesses included (1) the lack of documentation to support the amounts and disposition of fixed assets, (2) the lack of reconciliation of U.S. program receivables, (3) the lack of monitoring of receivable billing and collecting, and (4) unreimbursed U.S. expenditures. In the RMI 2005 single audit, the auditors found material weaknesses that included the use of unaudited financial statements from several component units and the lack of fixed asset inventory.
We found that 14 of the 57 findings previously mentioned from the 2005 FSM single audit report on financial reporting were recurring problems from the previous year or had been reported for several consecutive years. Likewise, the 2 findings from the 2005 RMI single audit report were recurring problems reported for several consecutive years. The FSM has developed corrective action plans to address about 91 percent of the financial statement findings in the 2005 single audits, and the RMI has developed plans for both of its financial statement reportable findings. For example, the FSM said that it would make efforts to reconcile intergovernmental balances and discuss this issue with all four states in 2006, and the RMI said that it would hire a consultant qualified to conduct the valuation of fixed assets. In addition to the auditor’s report on financial statement findings, the auditors also provide a report on the countries’ compliance with requirements of major federal programs. All 30 of the audit reports on such compliance submitted by the FSM national and its state governments and the RMI for 2001 through 2005 received qualified opinions. Moreover, in the 2005 single audit reports of compliance with requirements of major federal programs, auditors reported 45 material weaknesses and reportable conditions findings for the FSM and 11 for the RMI (see table 21). For example: In the FSM, findings that were material weaknesses included (1) the lack of internal controls over cash management requirements and (2) no reconciliation of U.S. grants receivable per Catalog of Federal Domestic Assistance number or by program number. In the RMI, findings that were material weaknesses included (1) a lack of inventory of fixed assets and (2) the lack of audit reports from subrecipient component units. 
We found that only 4 of the 45 findings from the 2005 FSM single audit report, and only 2 of the 11 findings from the 2005 RMI single audit report, were recurring problems from the previous year or had recurred for several consecutive years. For the RMI, this was a significant shift from 2002, when 8 of the 11 findings were recurring problems from the previous year or had recurred for several consecutive years. The FSM has developed corrective action plans to address about 60 percent of the 2005 single audit’s reportable findings on compliance with requirements of major federal programs, and the RMI has developed plans for all its reportable findings on such compliance. For example, the FSM said that on October 1, 2005, a new procedure was implemented to properly monitor the drawdown of U.S. funds and to properly reimburse the states on time, and the RMI said that it would hire a consultant to assist component units in rectifying their accounting books and records. According to OMB Circular A-133, if a grantee fails to complete its single audit reports, U.S. agencies may impose sanctions such as, but not limited to, (1) withholding a percentage of U.S. federal awards until single audits are completed satisfactorily, (2) withholding or disallowing overhead costs, (3) suspending U.S. federal awards until the single audit is conducted, or (4) terminating the U.S. federal award. At the special March 2006 JEMCO meeting, the OIA Budget Director noted that single audits were the most important indicator of financial stability provided by a grantee to a grantor. He emphasized that OIA was particularly concerned about the lack of FSM single audits and notified FSM JEMCO participants that OIA intended to “apply a remedy” for single audit noncompliance beginning October 1, 2006, that would include the possibility of withholding cash payments. OIA also may take necessary steps to have the FSM designated as a “high-risk” grantee.
Finally, OIA recommended to JEMCO in the March 2006 meeting that, if audits were not completed by July 1, 2006, it conditionally approve sector grants for 2007 so that funds could be released only to entities in compliance with single audit requirements. This warning appeared to have an impact on most of the FSM states, because Kosrae, Pohnpei, and Yap completed their 2005 reports on time. Other U.S. agencies have not designated the FSM as high risk in the past, even though they can designate a grantee as high risk if the grantee has a history of unsatisfactory performance, is not financially stable, has an inadequate management system, has not conformed to the terms and conditions of previous awards, or is otherwise irresponsible. Federal agencies that designate a grantee as high risk may impose special terms and conditions. Currently, none of the U.S. agencies providing funds to the FSM and the RMI have designated either country as a high-risk grantee, although this may be a possibility if the single audits are not completed within the deadlines requested by Interior. Officials from the Department of Education told us that, because most of the direct grant funding to the FSM has been subsumed by the Supplemental Education Grant, which is administered by Interior, Education now has an even smaller share of the U.S. funds in the FSM, and therefore Interior would be in the best position to invoke a high-risk designation if warranted for a particular grant. Nevertheless, Education officials did take into account the lack of single audit performance when administering program funds and, in the case of funds for special education, had imposed additional reporting requirements. Tables 22 and 23 show the total numbers of material weaknesses and reportable conditions identified in single audit reports for the FSM and the RMI in 2001 through 2005.
Special terms and conditions in sector grants The FSM shall have 60 days from the date of grant award to realign this sector budget so that activities and related costs are clearly defined for each funding input under the grant. In doing so, the FSM should use a common or unified format wherever possible. Between October and December 2003, OIA lacked education staff needed to conduct the follow-up. The FSM shall have 60 days from the date of grant award to identify the amounts, sources, and the specific strategic focus and activities of all noncompact funding and direct technical assistance that relates to this sector. Between October and December 2003, OIA lacked education staff needed to conduct the follow-up. The FSM shall submit within 180 days from the date of grant award a streamlined and refined statement of outcome measures, baseline data, and annual targets to enable the tracking of outputs and outcomes. In doing so, the FSM should use a common or unified format wherever possible. These materials shall form the basis for setting measurable annual targets for the sector grant budget and performance plan that the FSM submits for 2005 funding. Between January and March 2004, OIA staff had discussions with all state directors of education and other sectors, expressing concern regarding performance-based budgeting and the lack of a unified format. OIA did not receive formal communication regarding these concerns from the FSM in 2004. The new education grant manager placed a similar condition on the FSM in 2005. As a condition precedent to the drawdown of funding for this specific activity, Pohnpei shall provide written materials to justify the request for $52,463 for the funding of the public library from the education sector grant. Written justification was not received from the Pohnpei Department of Education. However, OIA held discussions with the Pohnpei Director of Education during OIA’s first site visit in February 2004. 
The OIA education grant manager approved the use of education funds to support the library’s purchase of children’s books. The FSM shall submit within 90 days from the date of the grant award a streamlined and refined statement of national strategic goals, outcome measures, baseline data, and annual targets to enable the tracking of uniform and consistent national and state outputs and outcomes. In doing so, the FSM should use a common or unified format. The FSM did not meet the deadline. OIA reminded the FSM National Division of Education (NDOE) of the requirement several times and finally indicated it would cut off funds in March 2005 if the submission was not received. The FSM provided the required submission in late February, but the quality of the information was deemed “questionable” by OIA. The FSM shall conduct four evaluation studies and performance assessments. (1) Within 60 days from the date of the grant award, an analysis of school year 2004-2005 staffing patterns will be submitted and include, but not be limited to, the number of students enrolled as of October 1, 2004; the number of staff by category (principals, vice-principals, teachers, teacher assistants, specialists, support staff, etc.) as of October 1, who are full-time and part-time employees; changes in staffing from school year 2003-2004; the number of staff in each category in each school; and the October 1 student-to-teacher ratio. (1) The staffing patterns report was submitted in a summarized form. The summary document did not include data on all of the staffing categories cited in the grant condition—for example, no data on vice-principals were received. (2) Within 60 days from the date of the grant award, an inventory of textbooks and related resource materials for each grade in the core subjects of language arts, math, social studies, and science will be conducted and submitted.
(2) Each state completed its textbook inventory and submitted it to NDOE. NDOE transmitted the document “as is,” without a summary or any analysis. Yap’s report file could not be opened; a revision was received a few days later. (3) Within 180 days from the date of the grant award, a national inventory of educational facilities will be conducted and progress to date submitted. The inventory will include, but not be limited to, the number of educational buildings, age of each, condition of each, list of repair needs by school, and date when last renovated. (3) OIA was asked by NDOE to provide a sample format for the states to follow. NDOE was late in sending out the proposed format to the states. Thus, the four state submissions came in different formats, with no summary or analysis provided. (4) Within 180 days from the date of the grant award, an evaluation of the effectiveness of the national student testing (NST) systems will be conducted and progress to date submitted. The NST and state testing instruments will be evaluated for validity and alignment to state standards and curricula. (4) The report was completed for the FSM by a consultant. OIA learned later that the FSM hired the same consultant who had created the NST to evaluate it. The FSM shall provide data of educational progress no less than annually, in time for submittal to JEMCO. At a minimum, data on the 20 indicators of educational progress discussed at the August 11 JEMCO meetings will be gathered and submitted by state, along with a national summary, no later than July 30, 2005. The FSM submitted a summary document, but it contained little narrative. According to OIA, it was difficult to decipher the meaning of some of the charts. The Office of Compact Management questioned the quality of the report, but it was submitted unchanged to OIA. A later submission contained a narrative analysis.
The FSM shall ensure that within 90 days of the grant award, the FSM Department of Health, Education, and Social Affairs, in consultation with the four state departments of education and OIA, shall develop a national process and procedure for the procurement of textbooks on a 5-year purchasing cycle. The FSM submitted the final national process and procedure document to OIA on March 16, 2006. According to OIA, the document was well thought out and included significant state input, but did not include the proposed purchasing cycle for each state. This omission will be a grant condition in 2007. Special terms and conditions in sector grants The FSM shall ensure that in 2006 through 2008, no less than $2.5 million of compact education sector funding allocated to state governments shall be used to purchase textbooks for the primary and secondary education systems and related instructional materials. The states provided revised line item budgets, indicating their contribution to the $2.5 million requirement in November 2005. OIA withheld a portion of the education sector grant funding in October and November 2005 until this requirement was met. The FSM shall provide data of educational progress no less than annually, in time for submittal to JEMCO. At a minimum, data on the 20 indicators of educational progress discussed at the August 11 JEMCO meetings will be gathered and submitted by state, along with a national summary, no later than July 30, 2006. The FSM missed the original July 30, 2006, deadline. However, OIA granted the FSM’s requested extension until August 14, 2006. The report on the 20 indicators was received on that date. SEG: The FSM shall submit, for approval by OIA, a written description and annual plan for the use of the grant funds. No funds may be disbursed until the plan is approved. OIA approved the plan submitted by the FSM in September 2005. SEG: Timelines for all major objectives and activities must match the annual funding period.
Timelines for the 2005 funding period are due to OIA by October 31, 2005. Revised timelines were received directly from each state, with no attempt to submit them as an FSM-wide deliverable. SEG: The FSM shall submit to OIA by December 31, 2005, a framework for each sub-grantee that illustrates how the programs and goals funded by the Special Education Grant correlate to the programs and goals funded by the compact education sector grant, and how all correlate to the FSM Strategic Development Plan’s education goals. The national submission was received on January 30, 2006. According to OIA, it was obvious the national submission was written by one author who used little of what the states submitted. The FSM shall have 60 days from the date of grant award to realign this sector budget so that activities and related costs are clearly defined for each funding input under the grant. In doing so, the FSM should use a common or unified format wherever possible. Never submitted. The FSM shall have 60 days from the date of grant award to identify the amounts, sources, and the specific strategic focus and activities of all noncompact funding and direct technical assistance that relates to this sector. Met. The FSM shall submit within 180 days from the date of grant award a streamlined and refined statement of outcome measures, baseline data, and annual targets to enable the tracking of outputs and outcomes. In doing so, the FSM should use a common and unified format wherever possible. These materials shall form the basis for setting measurable annual targets for the sector grant budget and performance plan that the FSM submits for 2005 funding. Never submitted. Special terms and conditions in sector grants As a condition precedent to the drawdown of funding for this specific activity, Chuuk shall provide written justification to OIA for the funding of $100,990 for Marine Resources. Justification was provided, and the funding was released. 
Justification was submitted, and the fiscal procedures agreement language was broad enough to encompass all of the agencies’ core missions. Funding was released. Chuuk: Marine Resources, Agricultural Operations, Yap: Roadside Maintenance and YAPCAP The FSM has 30 days from the date of grant award to submit the appropriate performance measures and baseline data to OIA for all approved activities. The measures and data are to be specific to each funded activity, not for the sector as a whole. Funding was held until performance measures and baseline data were submitted. When the measures and data were eventually submitted, the information was of extremely poor quality. However, no guidance was given by OIA or requested by the FSM for the development of the information. The FSM shall not incur obligations against this grant until OIA has approved all proposed budget line items for the national government and its subgrantees. OIA approved the budgets and released funds. The FSM shall have 60 days from the date of grant award to realign this sector budget so that activities and related costs are clearly identified for each funding input under the grant. In doing so, the FSM should use a common or unified format wherever possible. Partially met. According to OIA, while the numbers added up, the connection between activities and costs, and the relationship of costs to expected outputs—or how outputs linked back to the FSM’s strategic goals and stated performance outcomes—remained unclear. The FSM shall have 60 days from the date of grant award to identify the amounts, sources, and specific strategic focus and activities of all noncompact funding and direct technical assistance that relates to this sector. Met. The FSM shall submit within 180 days from the date of grant award a streamlined and refined statement of outcome measures, baseline data, and annual targets to enable the tracking of outputs and outcomes. In so doing, the FSM should use a common or unified format wherever possible.
These materials shall form the basis for setting measurable annual targets for the sector grant budget and performance plan that the FSM submits for 2005 funding. Partially met. According to OIA, statements of outcome measures were revised and submitted but work was still required to make the FSM’s intent and targets clear. There were also problems related to having a common baseline year and using and providing information in a unified and common format. The FSM shall have until September 30, 2006, to obligate the carryover funds from 2004. According to OIA, obligations are in process. Special terms and conditions in sector grants Consistent with the resolution adopted by JEMCO in August 2004, funds made available through this award may only be used for health-related infrastructure expenditures and are subject to conditions applicable to the public infrastructure grant. Such allowable uses include facility upgrades, renovation and repair, and fixed equipment and other capital assets. The list of projects and purchases received by OIA complied with the resolution. The FSM Office of Compact Management shall compile a list of proposed related infrastructure expenditures identified by the FSM National Department of Health, Education, and Social Affairs and by Chuuk, Pohnpei, and Yap to be funded under this grant. The list shall be submitted to OIA’s Honolulu Field Office for review and concurrence by November 30, 2005. No expenditures shall be allowed prior to that review, unless specifically approved by OIA. OIA communicated directly with Chuuk, Pohnpei, and Yap and notified the Office of Compact Management that the lists were acceptable. The deadline was extended because the grant award was not signed by the FSM until December 19, 2005, due to a technical (nonsubstantive) error. This error was not brought to OIA’s attention for correction until after the deliverable’s due date.
The FSM shall have 30 days from the date of grant award to provide information on the three health insurance programs in existence for national and state government employees. At a minimum, this information should include (1) a breakdown of costs associated with the programs in Chuuk and Pohnpei; (2) the numbers served by each of the three programs; (3) eligibility requirements; (4) the basis for calculating premiums and/or government subsidies; and (5) capitation payments to private providers, state hospitals, and, as applicable, off-island tertiary care facilities. The required information was provided, but its emphasis was on the FSM national government’s program. OIA asked for and received clarification on Chuuk and Pohnpei’s programs as well. The FSM shall have until April 1, 2005, to complete a comprehensive evaluation of the effectiveness of existing primary care systems and expansion plans in all four states. The study shall place specific emphasis on dispensaries, community health centers, and rural health and cover the following areas: (1) dispensary staffing, (2) communications, (3) referrals, (4) infrastructure, (5) transportation, (6) the procurement and distribution of medicines and other essential supplies, and (7) new and in-service training. The responsible agency for the evaluation shall be the FSM National Department of Health, Education, and Social Affairs (HESA) in consultation with the four state departments of health. The FSM submitted an acceptable evaluation of its existing primary care systems and expansion plans for all four states on time, and provided an oral report to JEMCO at the August 2005 meeting in Pohnpei. HESA shall have 30 days from the date of grant award to submit an implementation plan and scope of work to OIA before going forward with the study. The deadline was extended by OIA and met by the FSM.
Special terms and conditions in sector grants The FSM has 30 days from the date of grant award to reprogram $4,391 earmarked for Chuuk’s Department of Education to a purpose specifically linked to the state’s Department of Health Services. Met. The FSM has 30 days from the date of grant award to reprogram $11,500 earmarked for agricultural programs of Yap’s Department of Resources and Development, to either nutrition education or another program activity directly managed by the state’s Department of Health Services. Met. No money shall be used by the FSM National Department of Health, Education, and Social Affairs for either building new facilities or renovating existing buildings. The findings of any physical assessment of health facilities funded under this grant shall be submitted to OIA no later than 90 days before the end of the grant period and also to the infrastructure development planning committees in all four FSM states. Met. By April 15, 2005, the FSM national government and Chuuk, in consultation with OIA, shall develop an outline of a plan that shall promptly address the deficiencies found in the Chuuk health dispensary program. The completed plan shall be transmitted to OIA by May 15, 2005. An acceptable plan was developed in consultation with OIA and submitted on time. A verbal report on the plan’s implementation was accepted by JEMCO at its August 2005 meeting in Pohnpei. The FSM shall not incur obligations against this grant until OIA has approved all proposed budget line items from the national government and its subgrantees. OIA gave its approval at the start of the fiscal year. 
The FSM shall have 180 days from the date of grant award to submit information to OIA on (1) the common year selected by the National Department of Health, Education, and Social Affairs and all four state health departments that shall serve as the base for evaluations of sector grant performance and (2) data collected from that baseline year for all appropriate outcome measures described in the health care chapter of the FSM Strategic Development Plan. The submission shall also include 2004 data linked to these performance measures. The FSM health directors met in September 2005 and agreed to use 2004 as the baseline year. At that time, they established a process to review the strategic goals and outcome measures in the FSM’s development plan. In January 2006, they met again and reaffirmed their previous selection of 10 outcome measures and added 4 more measures. The FSM national government also began collecting baseline data. The FSM shall have 180 days from the date of grant award to submit the appropriate actual performance targets for 2006 and prospectively for 2007 and 2010. The health directors established medium-term targets for 2010 but did not meet the condition to submit actual performance targets for 2006 or prospectively for 2007. According to OIA, the FSM health directors were confused about the requirement. No grant funds may be expended or obligated before an infrastructure development plan (IDP) is developed by the FSM and submitted to OIA for review. Not met in 2004. Special terms and conditions in sector grants To the extent that the infrastructure priorities in the IDP differ materially from those set forth in the FSM Infrastructure Development Plan prepared by Nathan Associates, Inc., written justification should be provided to OIA for concurrence. Not met in 2004.
An amount equal to 5 percent of the total grant must be placed in a separate bank account (the Infrastructure Maintenance Fund (IMF)), which upon deposit by the FSM will be matched by OIA. Funds in this account may be used for operations and maintenance needs once an IMF plan has been developed and submitted by the FSM and approved by OIA. Not met in 2004. JEMCO resolves that infrastructure investment for 2005 should move toward being funded at no less than 30 percent of annual compact grant funding, consistent with the sense of Congress, and shall achieve that level for 2006. Met. JEMCO resolves that OIA shall approve no projects until JEMCO has granted its concurrence in compact-funded portions of the FSM’s Infrastructure Development Plan. Met. JEMCO resolves that OIA shall deem no projects approved until the FSM national government has provided OIA with, and OIA has approved, a consolidated list of projects in order of national priority consistent with the IDP concurred by JEMCO. Not met in 2005. JEMCO resolves that as part of the justification of each infrastructure project, the FSM national government shall demonstrate that the project implementation shall be professionally managed. Met. JEMCO allocates from the infrastructure sector the amount of $1 million for the initial establishment of a project management unit. Met. JEMCO resolves that by August 31, 2005, the FSM national government shall conduct detailed planning studies to determine the infrastructure requirements of the health and education sectors. Not met in 2005; the deadline was extended to January 31, 2006, by JEMCO resolution. The FSM also missed the extended deadline. An amount equal to 5 percent of the total grant must be placed in a separate bank account, the IMF, which upon deposit by the FSM, will be matched by OIA. Funds in this account may be used for maintenance needs once an IMF plan has been developed and submitted by the FSM and approved by OIA. Not met as of August 2006.
Special terms and conditions in sector grants The FSM shall have 60 days from the date of grant award to realign this sector budget so that activities and related costs are clearly defined for each funding input under the grant. In doing so, the FSM should use a common or unified format wherever possible. Never submitted. The FSM shall have 60 days from the date of grant award to identify the amounts, sources, and the specific strategic focus and activities of all noncompact funding and direct technical assistance that relates to this sector. Met. The FSM shall submit within 180 days from the date of grant award a streamlined and refined statement of outcome measures, baseline data, and annual targets to enable the tracking of outputs and outcomes. In doing so, the FSM should use a common and unified format wherever possible. These materials shall form the basis for setting measurable annual targets for the sector grant budget and performance plan that FSM submits for 2005 funding. Never submitted. Funding under this grant shall not be used by Yap for the Visitor’s Bureau unless OIA approves a reprogramming request. Yap submitted a revised budget and received approval for funding the Visitor’s Bureau. Included within this grant is $888 for Yap to use for Resources and Development. In accordance with the condition, Yap budgeted the funding for Resources and Development. As a condition precedent to the drawdown of funding for this specific activity, Kosrae shall provide written materials to justify the request for $152,000 for the funding of Livestock Research/Tissue Culture. Justification was provided, and the funding was released. As a condition precedent to the drawdown of funding for this specific activity, Kosrae shall provide written materials to justify the request for $205,000 for the funding of the Mangrove Crab Project. Funding was released. 
The FSM contested the notion of a phase-out plan for the private sector development grant and planned to discuss the issue further at the next JEMCO meeting. OIA sent a letter agreeing to release the funds, and the issue was dropped. Justifications were submitted. The fiscal procedures agreement language is broad enough to encompass all agencies’ core missions. Funding was released. Pohnpei: Economic Development Authority Yap: Resources and Development The FSM has 30 days from the date of the grant award to submit the appropriate performance measures and baseline data to OIA for all approved activities. The measures and data are to be specific to each funded activity, not for the sector as a whole. Funding was held until performance measures and baseline data were submitted. When performance measures and baseline data were eventually submitted, the information was of extremely poor quality. However, no guidance was given by OIA or requested from the FSM for the development of the information. None. The FSM shall have 60 days from the date of the grant award to realign its budget so that activities and related costs are clearly defined for each funding input. In doing so, the FSM should use a common or unified format wherever possible. Partially met. According to OIA, while the numbers added up, the connection between activities and costs, and the relationship of costs to expected outputs—or how outputs linked back to the FSM’s strategic goals and stated performance outcomes—remained unclear. The FSM shall have 60 days from the date of the grant award to identify amounts, sources, and the specific strategic focus and activities of all noncompact funding and technical assistance that relates to this sector. Met. The FSM shall submit within 180 days from the date of the grant award a streamlined and refined statement of outcome measures, baseline data, and annual targets to enable the tracking of outputs and outcomes.
In doing so, the FSM should use a common or unified format wherever possible. These materials shall form the basis for setting measurable annual targets for the sector grant budget and performance plan that the FSM submits for 2005 funding. Not met. The public sector capacity building grant does not contain any conforming, unified outcome measures; baseline data; or annual targets. As a condition precedent to the drawdown of funding of $122,698, Chuuk shall hire a qualified public auditor. Chuuk has not yet hired a qualified public auditor. The plan was late, and funds were temporarily withheld. The FSM had 30 days from the date of the grant award to submit the appropriate performance measures and baseline data to OIA for all approved activities. The measures and data are to be specific to each funded activity, not for the sector as a whole. No submittal from the FSM. The FSM shall submit a plan to migrate basic operations funding from the public sector capacity building grant to local revenues. This plan will provide for this migration to happen over a period not to exceed 5 years. A schedule was submitted that showed the reduction of public sector capacity building revenues going to basic operations funding, but not how it would be replaced by local revenue. None. Special terms and conditions in sector grants The RMI shall have 60 days from the date of grant award to realign its budget so that activities and related costs are clearly defined for each funding input. The category, “U.S. and Other Grants,” shall list components and allowable uses. Met. The RMI shall submit within 180 days from the date of grant award a streamlined and refined statement of performance measures, baseline data, and annual targets to enable the tracking of goals and objectives. These materials shall form the basis for setting measurable annual targets for the sector grant budget and performance plan that the RMI submits for 2005 funding. Met.
The RMI shall conduct three evaluation studies and performance assessments. (1) Within 60 days from the date of the grant award, an inventory of textbooks and related resource materials for each grade in the core subjects of language arts, math, social studies, and science will be conducted and submitted. (1) An extension was requested. The deliverable was extended to 2006. An inventory of 71 of 80 schools was received on July 21, 2006. The remaining 9 schools’ inventory is required in the first quarter of 2007. (2) Within 180 days from the date of the grant award, an analysis of school year 2004-2005 staffing patterns will be submitted and include, but not be limited to, the number of students enrolled as of October 1, 2004; the number of staff by category (principals, vice-principals, teachers, teacher assistants, specialists, support staff, etc.) as of October 1, who are full-time and part-time employees; changes in staffing from school year 2003- 2004; the number of staff in each category in each school; and the October 1 student-to-teacher ratio. (2) A summary document was received. A revision was requested to meet the requirement for select data by school, not in summary format. The revision was received in the Fall of 2005. (3) Within 180 days from the date of the grant award, an evaluation of the effectiveness of the national student testing systems will be conducted and progress to date submitted. The third and sixth grade national testing instruments will be evaluated for validity and alignment to national standards and curricula. An eighth grade testing instrument will be designed. (3) An extension was requested, and was granted. The deliverable was extended to 2006. Special terms and conditions in sector grants The RMI shall provide data of educational progress no less than annually, in time for submittal to JEMFAC. 
At a minimum, data on the 20 indicators of educational progress discussed at the August 13th JEMFAC meetings will be gathered and submitted no later than July 1, 2005. Quarterly performance reports must include completed data charts, effective immediately, and incorporate the 20 indicators of educational progress no later than July 1, 2005. The majority of the 20 indicators were submitted on time. The outstanding 5 indicators were requested but not received. The RMI shall routinely submit to the OIA Honolulu Field Office one copy of all educational studies, surveys, and performance evaluations completed with education sector or Supplemental Education Grant funds. Some locally developed reports are routinely submitted to OIA. Other reports are not routinely submitted but are identified in quarterly reports, which OIA then requests. Quarterly financial and performance reports shall include completed data charts, data on Ebeye Special Needs expenditures and activities, and copies of all reports completed with education sector or Supplemental Education Grant funds. The quarterly reports were received on time and included information specific to Ebeye. However, data charts embedded in the RMI format were often incomplete. Other reports completed with compact or Supplemental Education Grant funds are occasionally but not routinely transmitted to OIA. All 20 indicators of educational progress shall be reported by July 1st annually. Received July 28, 2006. The Government of the Republic of the Marshall Islands shall complete the textbook and staffing inventories by October 31, 2005. (1) An extension was requested for the textbook inventory. (2) The staffing inventory was received by the deadline. 
The Government of the Republic of the Marshall Islands shall spend those monies required, up to $100,000, to conduct the mandated national evaluation of the effectiveness of the national student testing systems by a reputable testing and evaluation expert within 180 days of the grant award, and to conduct other evaluations and assessments as needed. These monies shall come from the education sector grant award, unless available from other sources. According to OIA, the national student testing system is in near final form. The RMI brought in a consultant to review and validate its new testing system. The consultant provided a minimal assessment of the testing system to the RMI, which the RMI shared with OIA. OIA requested a more thorough analysis, but the RMI did not provide this by the end of school year 2005-2006. This grant condition will continue into 2007. The RMI shall have 60 days from the date of grant award to realign its budget so that activities and related costs are clearly defined for each funding input. The category, “U.S. and Other grants,” shall list components and allowable uses. Never submitted. Special terms and conditions in sector grants The RMI shall submit within 180 days from the date of the grant award a streamlined and refined statement of performance measures, baseline data, and annual targets to enable the tracking of goals and objectives. Never submitted. The RMI shall deliver to OIA the appropriate performance measures and baseline data for all approved activities by November 30, 2004. The RMI submitted revised portfolios. The grantee shall submit a written explanation of each budgeted activity no later than 30 days after the date of grant award. The RMI submitted revised portfolios. The deadline is the end of 2006. According to OIA’s environmental grant manager, the RMI is expected to submit all 6 indicators by the deadline.
Among the indicators: percentage of safe outer island water supply; percentage of dead and endangered reef areas; and total number of solid waste violations per quarter. The RMI shall have 60 days from the date of the grant award to realign its budget so that activities and related costs are clearly defined for each funding input. The category, “U.S. and Other Grants,” shall list components and allowable uses. Partially met. Soon after the grant was awarded, OIA worked closely with the RMI’s consultant on performance budgeting, and with the RMI’s Economic, Policy Planning Statistics Office and Ministry of Health, on addressing the grant’s budget realignment requirements. The results were evident in improvements to the first and subsequent quarterly reports in 2004 and the 2005 budget submitted to OIA. Although the requirement was directed to the Ministry of Health, the condition had a beneficial spillover effect in improving reporting and performance budgeting for all compact grant sectors. In retrospect, the deadline imposed in the grant may have been premature since the realignment process required time and effort beyond the 60-day framework and is still continuing. Special terms and conditions in sector grants The RMI shall submit within 180 days from the date of grant award a streamlined and refined statement of performance measures, baseline data, and annual targets to enable the tracking of goals and objectives. These materials shall form the basis for setting measurable annual targets for the sector grant budget and performance plan that the RMI submits for its 2005 funding. Partially met. The RMI reduced the number of performance measures it tracks to those that primarily relate to effectiveness and efficiency, and ensured its annual targets were output oriented. In retrospect, the deadline imposed in the grant may have been premature since the realignment process required time and effort beyond the 180-day framework and is still continuing.
Insofar as possible, performance measures should apply equally to both Majuro and Ebeye health subsystems, and baselines should reflect differences in health status and service levels in the two urban centers. Measures of disease incidence or prevalence should also be developed to gauge the impact of environmental and infrastructure improvements on health status. Soon after the grant was awarded, OIA worked with the RMI’s consultant on performance budgeting and with RMI’s statistics office to improve the consistency of performance budgeting between Ebeye and Majuro. According to OIA, the reporting has improved and is reflected in the 2005 and 2006 budgets submitted to OIA. Measures of disease incidence and prevalence, however, still do not adequately track environmental conditions. The RMI, however, is working to improve its health status statistics. Education and health infrastructure projects were the RMI’s priority in 2004 and 2005. The RMI shall have 90 days before the end of the grant period to complete a comprehensive evaluation of the effectiveness of its existing primary health care system and expansion plans. No less than 1 percent of the total grant award shall be set aside for this purpose. Met. The RMI shall have 30 days to submit an implementation plan and scope of work to OIA before implementing the study. Met. Up to a maximum of $100,000 in carryover funding from the 2004 health sector grant shall be used to continue the provision of technical assistance in performance budgeting and measurement. The scope of work shall focus on refining outcome statements, measures, and baselines that demonstrate the effectiveness or efficiency of the Ministry of Health’s interim outputs. The 2004 carryover grant awarded to the RMI provided funds to continue the provision of technical assistance to build performance budgeting and monitoring capacity. 
No grant funds may be used by agencies outside the health sector or for general government administrative costs, unless specifically justified and preapproved by JEMFAC. This condition was meant to prohibit any further levying of a percentage cost for the Office of the Auditor General as was done (and not disclosed) by the RMI during 2004. Special terms and conditions in sector grants The Ministry of Health shall have 30 days from the date of grant award to submit a list to OIA’s Honolulu Field Office that describes the specific uses of funding provided under CSG-RMI-2006-C. Funds may not be used for recurring salaries and may not be used for other operating costs, except as approved by OIA. Partially met. The Ministry of Health notified OIA of its intent to use most of its carryover funds to support the continuation of performance budgeting technical assistance. This notification was within the 30-day time frame. The remaining funds were to go to Majuro Hospital, but specific uses were not identified until August 2006. None. The RMI shall submit a formal infrastructure development and maintenance plan to OIA prior to the expenditure of sector grant funds for construction activities. Met. Funds designated for Infrastructure Maintenance Funds will be deposited after the RMI has transmitted its 2004 infrastructure maintenance plan to OIA for its concurrence in writing. Met. The RMI government shall formulate a project development plan, consistent with the Infrastructure Development Maintenance Plan format for the project entitled “Ebeye Hospital Repair.” No plan formulated as of September 13, 2006. None. The RMI shall have 60 days from the date of grant award to realign its budget so that activities and related costs are clearly defined for each funding input. The category, “U.S. and Other grants,” shall list components and allowable uses. Never submitted. 
- The RMI shall submit within 180 days from the date of grant award a streamlined and refined statement of performance measures, baseline data, and annual targets to enable the tracking of goals and objectives. These materials shall form the basis for setting measurable annual targets for the sector grant budget and performance plan that the RMI submits for 2005 funding. Never submitted.
- The RMI shall deliver to OIA the appropriate performance measures and baseline data for all approved activities by November 30, 2004. The RMI submitted revised portfolios. Never submitted. (1) Dollar amount of export revenues from local products. (Baseline will be established in 2006, and this measure will be used in future years to determine program development.) (2) Number of international tourist arrivals. None.
- The RMI shall deliver to OIA the appropriate performance measures and baseline data for all approved activities by November 30, 2004. Never submitted.
- The RMI shall deliver to OIA an audit work plan and audit schedule for 2006 by October 31, 2005. Submitted late.

The following are GAO’s comments on the Federated States of Micronesia’s letter dated December 4, 2006.

1. As we noted in both our June 2006 report and this report, the FSM’s efforts to address the decrement to date have not yielded the financial changes, including significant tax reforms, required to address the decrement. Therefore, we reiterate our position that the FSM needs to develop a plan to address the decrement. If the FSM fails to address the decrement, the federal and states’ budgets will likely be reduced, making it difficult to maintain current personnel levels.

2. We recognize that the FSM established its 70:30 formula according to its stated goal of providing for certain needs common to each state, regardless of population size, such as the need for airports and seaports.
However, the differences in per-capita funding resulting from use of the formula may have contributed to disparate conditions among the FSM states, especially in health and education, that cannot be ignored. These differences have also been identified by a Department of Health and Human Services official and in the FSM’s own development plans as well as in a study by the Asian Development Bank. We believe that the formula’s impact on each state’s performance and development should be continuously evaluated and the allocation of funds revised as necessary. As we observe in this report, such an assessment requires the full development of the mechanism for measuring sector grant performance and collecting complete baseline data. 3. We testified three times in 2003, before the House and the Senate, regarding our assessment of the new arrangements and requirements of the amended compacts. The following are GAO’s comments on the Republic of the Marshall Islands’ letter dated December 4, 2006. 1. Throughout the report, we differentiate between the FSM and the RMI when discussing findings specific to each country. For example, when addressing land issues that have delayed projects in the countries, we discuss the issues and projects in each country separately. However, when findings were the same for both countries, we discussed the findings jointly. For example, we discuss planning for the decrement jointly because both the FSM and the RMI face the same issue. 2. The RMI projects that the annual inflation adjustment will allow the nominal value of annual grants to increase. However, using the Congressional Budget Office’s projections on the GDP Implicit Price Deflator, we found that for most years, the nominal value of the grants for the RMI declines each year from the previous years. We believe that the RMI response does not capture the true impact of the decrement and the urgent need for sector grant planning to take it into account. 
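The dynamic described here, a fixed annual decrement combined with only partial inflation adjustment, can be sketched numerically. All figures below are hypothetical, chosen only to illustrate the mechanics, not the actual compact grant schedule:

```python
# Sketch of a grant subject to a fixed annual decrement and only
# partial inflation adjustment. All figures are hypothetical.

def project_grants(base_grant, decrement, partial_adjust, inflation, years):
    """Return a list of (nominal, real) grant values, one pair per year.

    partial_adjust is the fraction of inflation passed through to the
    grant (1.0 would be full adjustment).
    """
    nominal = base_grant
    price_level = 1.0
    series = []
    for _ in range(years):
        series.append((round(nominal, 2), round(nominal / price_level, 2)))
        nominal = (nominal - decrement) * (1 + partial_adjust * inflation)
        price_level *= 1 + inflation
    return series

# $35 million grant, $0.8 million annual decrement, 2% inflation of
# which only two-thirds is passed through to the grant.
for year, (nominal, real) in enumerate(project_grants(35.0, 0.8, 2 / 3, 0.02, 5)):
    print(f"year {year}: nominal {nominal}, real {real}")
```

With these inputs both the nominal and the real value of the grant fall each year, and the real value falls faster, which is the pattern the comment describes.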
The combined impact of the decrement and partial inflation creates difficult challenges. First, absent full adjustment of the grants for inflation, the grants’ real value declines, leading to reduced sector resources and creating challenges in recruiting and retaining agency staff. RMI government agencies will not be able to maintain the current levels of imported resources when the real value of grants declines. Imported items needed for the education and health sectors, such as textbooks and pharmaceuticals, are subject to rising external prices. Likewise, increasing costs of imported building supplies may reduce the purchasing power of the infrastructure grant. In the RMI, personnel expenses are the largest area of government expenditures. Recruiting and retaining staff will be difficult if salaries are not fully adjusted for inflation. Furthermore, because RMI citizens can move to the United States to work, and many have done so, finding qualified personnel may become more difficult. A recent assessment of Marshallese emigration concluded that about one quarter of Marshallese now live abroad. Second, although the RMI states in its letter that it expects import duties to increase with external inflation, the inflation increase will not fully compensate for the decrements without aggressive growth in import duties.

In addition to the individual named above, Emil Friberg, Assistant Director; Julie Hirshen; Ming Chen; Tracy Guerrero; Emmy Rhine; and Eddie Uyekawa made key contributions to this report. Joe Carney, Etana Finkler, Mary Moutsos, and Reid Lowe provided technical assistance.

Compacts of Free Association: Development Prospects Remain Limited for Micronesia and the Marshall Islands. GAO-06-590. Washington, D.C.: June 27, 2006.

Compacts of Free Association: Implementation of New Funding and Accountability Requirements Is Well Underway, but Planning Challenges Remain. GAO-05-633. Washington, D.C.: July 11, 2005.
Compact of Free Association: Single Audits Demonstrate Accountability Problems over Compact Funds. GAO-04-7. Washington, D.C.: October 7, 2003.

Compact of Free Association: An Assessment of Amended Compacts and Related Agreements. GAO-03-890T. Washington, D.C.: June 18, 2003.

Foreign Assistance: Effectiveness and Accountability Problems Common in U.S. Programs to Assist Two Micronesian Nations. GAO-02-70. Washington, D.C.: January 22, 2002.

Foreign Relations: Kwajalein Atoll Is the Key U.S. Defense Interest in Two Micronesian Nations. GAO-02-119. Washington, D.C.: January 22, 2002.

Foreign Relations: Migration From Micronesian Nations Has Had Significant Impact on Guam, Hawaii, and the Commonwealth of the Northern Mariana Islands. GAO-02-04. Washington, D.C.: October 5, 2001.

Foreign Assistance: Lessons Learned From Donors’ Experiences in the Pacific Region. GAO-01-808. Washington, D.C.: August 17, 2001.

Foreign Assistance: U.S. Funds to Two Micronesian Nations Had Little Impact on Economic Development. GAO/NSIAD-00-216. Washington, D.C.: September 22, 2000.

Foreign Relations: Better Accountability Needed Over U.S. Assistance to Micronesia and the Marshall Islands. GAO/RCED-00-67. Washington, D.C.: May 31, 2000.

In 2003, the United States signed Compacts of Free Association with the Federated States of Micronesia (FSM) and the Republic of the Marshall Islands (RMI), amending a 1986 compact with the countries. The amended compacts provide the countries with a combined total of $3.6 billion from 2004 to 2023, with the annual grants declining gradually. The assistance, targeting six sectors, is aimed at assisting the countries’ efforts to promote economic advancement and budgetary self-reliance. The Department of the Interior (Interior) administers and oversees the assistance.
Complying with a legislative requirement, GAO examined, for fiscal years 2004 through 2006, (1) the FSM's and the RMI's use of compact funds, (2) their efforts to assess progress toward development goals, (3) their monitoring of sector grants and accountability for compact funds, and (4) Interior's administrative oversight of the assistance. GAO visited the FSM and the RMI; reviewed reports; and interviewed officials from the FSM, RMI, and U.S. governments. For 2004 through 2006, compact assistance to the FSM and the RMI was allocated largely to the education, infrastructure, and health sectors, but various factors limited the countries' use of compact funds. Deterrents to the FSM's use of infrastructure funds included constraints on land use and disagreement on project implementation processes. Land use issues also hindered the RMI's use of infrastructure funds. In addition, the FSM's distribution of the grants among its four states resulted in significant differences in per-student education and per-capita health funding. Neither country has planned for long-term sustainability of the grant programs, taking into account the annual decreases in grant funding. To assess progress toward development goals, the FSM and the RMI established goals and objectives for each sector and are collecting performance data for education and health. However, a lack of complete and reliable baseline data prevents the countries from gauging progress in these sectors. Also, both countries' required quarterly performance reports contained incomplete and unreliable information, limiting the reports' utility for tracking progress. The countries' ability to measure progress is further challenged by a lack of technical capacity to collect, assemble, and analyze baseline and performance data. Although the FSM and the RMI are required to monitor day-to-day sector grant operations, their ability to meet this requirement for 2004 through 2006 was limited. 
According to officials in the respective governments, the responsible offices have insufficient staff, budgets, and time to monitor grant operations. In addition, both countries’ single audit reports for 2004 and 2005 indicated weaknesses in their ability to account for the use of compact funds. For instance, the FSM’s audit report for 2005 contained 57 findings of material weaknesses and reportable conditions in the national and state governments’ financial statements for sector grants, and the RMI’s report contained 2 such findings. Furthermore, both countries’ single audit reports indicated noncompliance with requirements of major federal programs. For example, the FSM’s audit report for 2005 contained 45 findings of noncompliance, while the RMI’s audit report contained 11 findings. Interior’s Office of Insular Affairs (OIA) has conducted administrative oversight of the sector grants by monitoring the countries’ sector grant performance and spending, assessing their compliance with sector grant conditions, and monitoring the audit process. In response to shortcomings that it identified, OIA took several actions, such as withholding or suspending grant funding and ensuring the provision of technical assistance. However, OIA’s oversight has been limited by the need to deal with challenges facing the FSM, such as its difficulty in preparing budgets, as well as by its own staffing challenges.
Of VA’s $48.8 billion budget in fiscal year 2001, $20.9 billion was for carrying out its four health care missions. Its first, most visible health care mission is to provide medical care for veterans. VA operates a national health system of hospitals, clinics, nursing homes and other facilities that provide a broad spectrum of medical, surgical, and rehabilitative care. More than 3.8 million people received care in VA health care facilities last year. Under its second mission—to provide education and training for health care personnel—VA manages the largest medical education and health professions training program in the United States, training about 85,000 health professionals annually in its medical facilities that are affiliated with almost 1,400 medical and other schools. Under its third mission—to conduct medical research—VA funding was about $1.2 billion in 2000 for over 15,000 medical research projects and related medical science endeavors. VA’s fourth mission—to serve as backup to the Department of Defense (DOD) health system in war or other emergencies and as support to communities following domestic terrorist incidents and other major disasters—has attracted greater congressional interest since the September 11 terrorist attacks in the United States. This role, however, is not new. Since the early 1980s, when a national system was put in place to provide for local medical responses when a disaster occurs, VA has been providing medical support. In fiscal year 2001, less than one-half of 1 percent of VA’s total health care budget, $7.9 million, was allocated to this mission. VA was first formally assigned a federal disaster management role in 1982, when legislation tasked VA with ensuring the availability of health care for eligible veterans, military personnel, and the public during military conflicts and domestic emergencies. 
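The budget shares cited above are easy to verify from the figures in the testimony:

```python
# Verify the fiscal year 2001 budget shares cited in the testimony.
total_budget = 48.8e9    # VA's total budget
health_budget = 20.9e9   # amount for the four health care missions
fourth_mission = 7.9e6   # allocation to the emergency-support mission

share = fourth_mission / health_budget
print(f"fourth-mission share of the health budget: {share:.4%}")
assert share < 0.005  # less than one-half of 1 percent, as stated
```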
In the immediate aftermath of the September 11 attacks, VA medical facilities in New York, Washington, D.C., Baltimore, and Altoona, Pennsylvania, were readied to handle casualties. In prior emergencies, such as Hurricanes Andrew and Floyd and the 1995 bombing of the federal building in Oklahoma City, VA deployed more than 1,000 medical personnel and provided substantial amounts of medical supplies and equipment as well as the use of VA facilities. VA’s role as part of the federal government’s response for disasters has grown with the reduction of medical capacity in the Public Health Service and military medical facilities. VA established an Emergency Management Strategic Healthcare Group with responsibility for the following six emergency response functions:

- Ensuring the continuity of VA medical facility operations. Prior to emergency conditions, VA emergency management staff are responsible for minimizing disruption in the treatment of veterans by developing, managing, and reviewing plans for disasters and evacuations and coordinating mutual aid agreements for patient transfers among VA facilities. During emergency conditions these staff are responsible for ensuring that these plans are carried out as intended.
- Backing up DOD’s medical resources following an outbreak of war or other emergencies involving military personnel. As of 2001, VA has plans for the allocation of up to 5,500 of its staffed operating beds for DOD casualties within 72 hours of notification. In total, 66 VA medical centers are designated as primary receiving centers for treating DOD patients. In turn, these centers must execute plans for early release or movement of VA patients to 65 other VA medical centers designated as secondary support centers.
- Jointly administering the National Disaster Medical System (NDMS). In 1984, VA, DOD, the Federal Emergency Management Agency (FEMA), and the Department of Health and Human Services (HHS) created a federal partnership to administer and oversee NDMS, which is a joint effort between the federal and private sectors to provide backup to civilian health care in the event of disasters producing mass casualties. The system divides the country into 72 areas selected for their concentration of hospitals and proximity to airports. Nationwide, more than 2,000 civilian and federal hospitals participate in the system. One of VA’s roles in NDMS is to help coordinate VA hospital capacity with the nonfederal hospitals participating in the system.
- Carrying out Federal Response Plan efforts to assist state and local governments in coping with disasters. Under FEMA’s leadership, VA and other agencies are responsible for carrying out the Federal Response Plan, which is a general disaster contingency plan. As a support agency, VA is one of several federal agencies sharing responsibility for providing public works and engineering services, mass care and sheltering, resource support, and health and medical services. VA is also involved with other agencies in positioning medical resources at high-visibility public events requiring enhanced security, such as national political conventions. VA also maintains a database of deployable VA medical personnel that is intended to help the agency to quickly locate medical personnel (such as nurses, physicians, and pharmacists) for deployment to a disaster site.
- Carrying out Federal Radiological Emergency Response Plan efforts to respond to nuclear hazards. Depending on the type of emergency involved, VA is responsible for supporting the designated lead federal agency in responding to accidents at nuclear power stations or terrorist acts to spread radioactivity in the environment. VA also has its own medical emergency radiological response team of physicians and other health specialists. When requested by the lead agency, VA’s response team is expected to be ready to deploy to an incident site within 12 to 24 hours to provide technical advice, radiological monitoring, decontamination expertise, and medical care as a supplement to local authorities’ efforts.
- Supporting efforts to ensure the continuity of government during national emergencies. VA maintains the agency’s relocation site and necessary communication facilities to continue functioning during a major national emergency.

In addition to these functions, VA plays a key support role in the nation’s stockpiling of pharmaceuticals and medical supplies in the event of large-scale disasters caused by weapons of mass destruction (WMD). These stockpiles are critical to the federal assistance provided to state and local governments should they be overwhelmed by terrorist attack. Under a memorandum of agreement between VA and HHS’ Office of Emergency Preparedness (OEP), VA maintains at designated locations medical stockpiles containing antidotes, antibiotics, and medical supplies and smaller stockpiles containing antidotes, which can be loaned to local governments or predeployed for special events, such as the Olympic Games. In fiscal year 2001, OEP reimbursed VA $1.2 million for the purchase, storage, and maintenance of the pharmaceutical stockpiles. VA also maintains stockpiles of pharmaceuticals for another HHS agency, the Centers for Disease Control and Prevention (CDC). Under contract with CDC, VA purchases drugs and other medical items and manages a spectrum of contracts for the storage, rotation, security, and transportation of stockpiled items. VA maintains the inventory of pharmaceutical and medical supplies called “12-hour push packages,” which can be delivered to any location in the nation within 12 hours of a federal decision to deploy them.
It also maintains a larger stock of antibiotics, antidotes, other drugs, medical equipment, and supplies known as vendor-managed inventory that can be deployed within 24 to 36 hours of notification. In fiscal year 2001, CDC contracts included an estimated $60 million to reimburse VA for its purchasing and management activities associated with the stockpiles, including the cost of medical items. Consistent with the agency’s fourth health care mission, VA operates as a support rather than command agency under the umbrella of several federal policies and contingency plans for combating terrorism. Its direct emergency response activities include conducting and evaluating terrorist attack simulations to develop more effective response procedures and maintaining the inventories for stockpiled pharmaceuticals and medical supplies. Our prior work on federal coordination of efforts to combat terrorism found that VA led many disaster response simulation exercises and conducted follow-up evaluations. These exercises are an important part of VA’s efforts to prepare for catastrophic terrorist attacks. The exercises test and evaluate policies and procedures, test the effectiveness of response capabilities, and increase the confidence and skill level of personnel. Those exercises held jointly with other federal, state, and local agencies facilitate the planning and execution of multiagency missions and help identify strengths and weaknesses of interagency coordination. VA has sponsored or participated in a variety of exercises to prepare for combating terrorism, including those involving several federal agencies and WMD scenarios. In addition, VA participates in numerous other disaster-related exercises aimed at improving its consequence management capabilities. The following are examples of terrorism-related exercises in which VA has participated. 
- In March 1997, in conjunction with the state of Minnesota, VA participated in the “Radex North” exercise in Minneapolis, which simulated a terrorist attack on a federal building. The attack involved simulated explosives laced with radioactive material, requiring the subsequent decontamination and treatment of hundreds of casualties. One of the objectives was to test the capabilities of VA’s radiological response team. The exercise had 500 participants and was designed to integrate the federal medical response into the state and local response, including local hospitals.
- In July 1997, VA participated in “Terex ’97” in Nebraska. The exercise’s main objectives were to provide federal and state public health agencies with integrated training in disaster response and to assess coordination among federal, state, and local agencies for responding to a catastrophic, mass-casualty incident. The VA hospital in Lincoln provided bed space for mock casualties wounded by simulated conventional explosives. In addition, VA management staff worked with other federal, state, and local health care officials to coordinate emergency response efforts.
- In May 1998, VA, DOD, and HHS cosponsored “Consequence Management 1998” in Georgia. The 2-day exercise trained and evaluated federal medical response team personnel in emergency procedures for responding to a WMD attack. In organizing the event, VA’s radiological response team worked with the Marine Corps’ special response force to decontaminate mock casualties. The VA medical center in Augusta supplied logistics support, including stockpiled pharmaceuticals.
- In May 1999, VA sponsored “Catex ’99” in Minnesota. Over 80 groups representing federal, state, and local governments, the military, volunteer organizations, and the private sector worked with VA to train for a mass-casualty WMD incident. In a scenario depicting simultaneous chemical weapons attacks throughout the Twin Cities region, VA activated and oversaw an emergency operations center, which coordinated response efforts, including simulated casualty evacuations to hospitals in Detroit, Cleveland, Milwaukee, and Des Moines.
- In May 2000, VA participated in “Consequence Management 2000” in Georgia. Developed jointly by VA, DOD, HHS, and various state and local agencies, the exercise trained federal emergency personnel in procedures and techniques for responding to a WMD attack. The event also served to familiarize federal, state, and local agencies with the U.S. Army Reserves’ role in the event of a catastrophic terrorist incident. Simulating a mass-casualty terrorist attack in Georgia, VA emergency response teams performed triage and decontaminated patients exposed to chemical and radiological agents. Several VA medical centers in Georgia, Alabama, and South Carolina provided care to simulated serious casualties.
- In May 2000, VA participated in “TOPOFF 2000,” a national, “no-notice” exercise designed to assess the ability of federal, state, and local agencies to respond to coordinated terrorist attacks involving WMD. The event was the largest peacetime terrorism exercise ever sponsored by the Department of Justice and FEMA, and incorporated three main crisis simulations: a radiological scenario in Washington, D.C.; a chemical scenario in New Hampshire; and a biological scenario in Colorado. VA provided consequence management support to other federal agencies, identified hospital bed space for potential casualties, and dispatched medical personnel to various locations. VA also placed its radiological response team on alert.

VA also conducts follow-up evaluations of these simulation exercises. Evaluations typically include, among other things, operational limitations, identified strengths and weaknesses, and recommended actions.
Our work shows that VA has a good record of evaluating its participation in these exercises. The evaluations generally discuss interagency issues and are disseminated within VA. Among the favorable findings from VA’s reviews were that emergency personnel were activated quickly and were deployed to incident sites fully equipped and prepared; personnel demonstrated high levels of motivation and technical expertise; and interaction among federal, state, and local personnel and between civilian and military counterparts was positive. The reviews also identified the following concerns:

- On-site medical personnel experienced communications problems due to incompatible equipment.
- Communication between headquarters and field offices was at times hindered by an over-reliance on a single means of communication.
- Unclear standards and inadequate means for reporting available bed space also posed problems.
- Caregivers sometimes had difficulty tracking patients as they progressed through on-site treatment stages.
- Incident-site security was a recurrent concern, especially with respect to decontamination controls.

We have made a number of recommendations to federal lead and support agencies to improve such interagency exercises and follow-up evaluations, including the dissemination of evaluation results across agencies. VA has improved the internal controls and inventory management of several medical supply stockpiles it maintains for OEP and CDC to address previously identified deficiencies. VA is responsible for the purchase, storage, and quality control of thousands of stockpile supply items. It maintains stockpiles at several sites around the country for immediate use by federal agency teams staffed with specially trained doctors, nurses, other health care providers, and emergency personnel whose mission is to decontaminate and treat victims of chemical and biological terrorist attacks.
In 1999, we found that VA lacked the internal controls to ensure that the stockpiled medical supplies and pharmaceuticals were current, accounted for, and available for use. However, our recent work shows that VA has taken significant corrective actions in response to our recommendations, reducing inventory discrepancy rates and improving accountability. At the same time, we have recommended additional steps that VA, in concert with OEP and CDC, should take to further tighten the security of the nation’s stockpiles. These include finalizing and implementing approved operating plans and ensuring compliance with these plans through periodic quality reviews. VA supports these recommendations and is taking action with OEP and CDC to implement them. VA has significant capabilities related to its four health care missions that have potential applicability for the purpose of homeland security. At the same time, it is clear that some of these capabilities would need to be strengthened. How best to employ and enhance this potential will be determined as part of a larger effort currently underway to develop a national homeland security strategy. As the Comptroller General recently noted, this broad strategy will require partnership with the Congress, the executive branch, state and local governments, and the private sector to minimize confusion, duplication of effort, and ineffective alignment of resources with strategic goals. It will also require a systematic approach that includes, among other elements, ensuring the nation’s ability to respond to and mitigate the consequences of an attack. In this regard, VA has a substantial medical infrastructure of 163 hospitals and more than 800 outpatient clinics strategically located throughout the United States, including the largest pharmaceutical and medical supply procurement systems in the world and a nationwide register of skilled VA medical personnel.
In addition, VA operates a network of 140 treatment programs for post-traumatic stress disorder and is recognized as the leading expert on diagnosing and treating this disorder. VA holds other substantial health system assets. For example, the agency has well-established relationships with 85 percent of the nation’s medical schools. According to VA, more than half of the nation’s medical students and a third of all medical residents receive some of their training at VA facilities. In addition, more than 40 other types of health care professionals, including specialists in medical toxicology and occupational and environmental medicine, receive training at VA facilities every year. In recent years, VA expanded physician training slots in disciplines associated with WMD preparedness. In 1998, several government agencies, including VA, contributed to a presidential report to the Congress on federal, state, and local preparations and capability to handle medical emergencies resulting from WMD incidents. The report outlined both strengths and weaknesses in regard to VA’s emergency response capabilities. The report noted the potential for VA to augment the resources of state and local responders because more than 80 percent of VA hospital emergency plans are included in the local community emergency response plan. However, the report also noted that VA hospitals do not have the capability to process and treat mass casualties resulting from WMD incidents. VA hospitals and most private sector medical facilities are better prepared for treating injuries resulting from chemical exposure than those resulting from biological agents or radiological material. VA hospitals, like community hospitals, lack decontamination equipment, routine training to treat mass casualties, and adequate on-hand medical supplies. Currently, VA’s budget authority does not include funds to address these shortcomings. 
Myriad federal efforts are underway to strengthen the nation’s ability to prevent and mitigate the consequences of terrorism. Consideration of what future role VA may assume in coordination with its federal partners in consequence management is an important element. Currently, the agency, in a supporting role, makes a significant contribution to the emergency preparedness response activities carried out by lead federal agencies. Expanding this role in response to stepped up homeland security efforts may be deemed beneficial but would require an analysis of the potential impact on the agency’s health care missions, the resource implications for VA’s budget, and the merits of enhancing VA’s capabilities relative to other federal alternatives. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the committee may have. For more information regarding this testimony, please contact me at (202) 512-7101. Stephen L. Caldwell, Hannah F. Fein, Carolyn R. Kirby, and Paul Rades also made key contributions to this statement.

In the event of a domestic terrorist attack or other major disasters, the Department of Veterans Affairs (VA) is to provide backup medical resources to the military health system and local communities. VA now assists other federal agencies that have lead responsibility for responding to disasters, including terrorism. Its areas of responsibility include disaster simulation exercises and maintaining medical stockpiles. VA’s efforts in these areas have enhanced national emergency preparedness by improving medical response procedures and by strengthening the security of federal pharmaceutical stockpiles to ensure rapid response to local authorities. VA also has resources that could play a role in future federal homeland security efforts.
Its assets include the bricks, mortar, and human capital components of its health care system; graduate medical education programs; and expertise involving emergency backup and support activities. In managing large-scale medical emergencies arising from terrorist attacks, VA’s emergency response capabilities have strengths and weaknesses. Determining how VA can best contribute to homeland security is especially timely given the extraordinary level of federal activity underway to manage large-scale disasters.
In 1968, the Congress added Section 242 to the National Housing Act establishing the Hospital Mortgage Insurance Program to address a serious shortage of hospitals and the need for existing hospitals to expand and renovate. Through this program, FHA insures the loans lenders make for the construction and renovation of hospitals. Since the inception of the program, FHA has insured 341 hospital mortgages for $11.9 billion in 42 states and Puerto Rico. As of the end of calendar year 2005, FHA was insuring 74 hospital mortgages totaling nearly $5 billion. The number of loans insured annually has increased in recent years, from 2 in fiscal year 2001 to 11 in fiscal year 2005 (see table 1). According to the House report accompanying the Hospital Mortgage Insurance Act of 2003, which revised the standards for determining the need and feasibility for hospitals, as well as eligibility requirements for small, rural hospitals, hospitals face significant financial challenges when providing care to patients who are covered by Medicare and Medicaid, as well as to uninsured patients. At the same time, improvements in technology and health care knowledge necessitate capital improvements such as additions and renovations to existing buildings. FHA’s Office of Insured Health Care Facilities and HHS’ Division of Facilities and Loans coordinate to implement the hospital program. HUD has statutory responsibility for the program based on FHA’s experience with promoting housing construction through housing mortgage insurance programs. As such, HUD is fully responsible for management of the program, including developing and proposing legislation, policy development, strategic planning, and approval of applications and loan documents. The House Committee on Banking and Currency, in recommending that HUD be given this responsibility, expected HUD to draw upon HHS’s hospital expertise to devise standards for insuring hospitals’ mortgages.
Through an interagency agreement, HUD formally delegates authority to HHS to assist in the review of applications for mortgage insurance and the monitoring of insured loans. HHS is also given full responsibility for construction monitoring. See appendix II for additional information about FHA and HHS’s loan processing responsibilities. FHA’s Hospital Mortgage Insurance Program generally serves the segment of the market consisting of hospitals that are too risky to obtain private bond insurance but are strong enough to pass FHA’s underwriting tests. Mortgage insurance, like private bond insurance, guarantees that lenders will be paid if the hospital stops making payments on its loan. In addition, both mortgage insurance and private bond insurance are forms of credit enhancement and improve the credit rating of the underlying debt for the insured entity, resulting in a lower interest rate for the loan. Hospitals with FHA-insured mortgages automatically receive investment-grade ratings (AA or AAA) because the reliability of the cash flows from the mortgage note is rated on the basis of the insurer’s, not the hospital’s, ability to repay the debt. FHA and HHS officials and private insurers agree that FHA’s Hospital Mortgage Insurance Program serves a different market than private insurers do. According to FHA and HHS officials, FHA insures loans that are too risky, too small, or too large for private insurers, or are located in a market not served by private insurers. For the Hospital Mortgage Insurance Program, if a hospital fails to make any payment due under the mortgage, the mortgage is in default. If a default continues for 30 days, the lender is entitled to receive benefits from FHA. FHA may pay claims in either cash or debentures.
Federal agencies that provide direct loans or loan guarantees are required by the Federal Credit Reform Act of 1990 to estimate the expected cost of their programs by predicting their future performance and to report these costs to the government in their annual budgets. Such estimates are important in that they more accurately measure the government’s costs of federal loan programs and permit better cost comparisons among different programs. Under credit reform procedures, the cost of loan guarantees, such as mortgage insurance, is the net present value of all expected future cash flows, excluding administrative costs. For guarantees, cash inflows consist primarily of fees and premiums charged to insured borrowers and recoveries on assets, and cash outflows consist mostly of payments to lenders to cover the cost of claims. Agencies discount projected future cash flows to the year in which the guaranteed loan was disbursed. The discounted cash flows are the estimated budgetary cost or gain of the cohort of loans obligated in a given fiscal year. The net present value of each cohort’s estimated cash flows is expressed as a percentage of the volume of guaranteed loans in the cohort—that is, a subsidy rate. Agency managers are responsible for accumulating relevant, sufficient, and reliable data on which to base their credit subsidy estimates. OMB has final responsibility for determining subsidy estimates, in consultation with agencies. FHA requires hospitals to take certain steps, both before they apply for mortgage insurance and as a part of the application process, that private insurers do not mandate. These additional steps are used because FHA insures mortgages that are generally riskier than those covered by private bond insurance. For example, before they apply for mortgage insurance, FHA advises hospitals to compare their financial status with the program’s minimum requirements.
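The subsidy-rate mechanics described above can be sketched numerically. In the sketch below, the cash flows and discount rate are hypothetical values chosen only to show the arithmetic; they are not program data, and actual credit reform calculations use Treasury-based discount rates determined by OMB.

```python
# Illustrative credit-subsidy calculation following the Federal Credit Reform
# Act mechanics described above. The cash flows and the 5 percent discount
# rate are hypothetical assumptions used only to demonstrate the arithmetic.

def subsidy_rate(guaranteed_volume, cash_flows, discount_rate):
    """Net present value of expected cash flows (inflows positive, outflows
    negative), discounted to the year of disbursement (year 0), expressed
    as a percentage of the cohort's guaranteed loan volume."""
    npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))
    # A positive subsidy rate represents a net cost to the government.
    return -npv / guaranteed_volume * 100

# Hypothetical $100 million cohort: premium income in early years,
# claim payments net of recoveries in later years.
flows = [0.8e6, 0.8e6, 0.8e6, -2.0e6, -1.5e6]
rate = subsidy_rate(100e6, flows, 0.05)
```

With these assumed flows the discounted outflows slightly exceed the discounted premium income, so the cohort shows a small positive subsidy rate (a net cost), under 1 percent of the guaranteed volume.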
If they meet these requirements, FHA requires hospitals to submit market and financial information so that FHA can make a preliminary assessment about the project and determine whether to conduct a preapplication meeting with the applicant to discuss the project. None of the private insurers that we met with have similar preapplication processes. After these preapplication steps are met, FHA’s application process includes additional steps compared with those of private bond insurers. FHA requires hospitals to submit a financial feasibility study containing historic and forecasted financial statements and ratios, a financing plan, and information about market demand, among other things. In addition, FHA hires consultants to evaluate the feasibility of each proposed project as an additional, independent check on the viability of the project. While the private bond insurers that we met with review the types of information included in feasibility studies, they do not require hospitals to submit such studies and do not hire consultants to assess the feasibility of proposed projects. FHA’s application process also includes a final level of review that is absent from private bond insurer processes. After an application for mortgage insurance has gone through underwriting and been reviewed by an independent consultant, it is considered by the program management group, a group of senior-level FHA and HHS staff. FHA also refers to this group as its “credit committee.” Similarly, private bond insurers also consider applications within a credit committee structure. However, while private bond insurers make final insurance decisions through their credit committees, FHA has an additional layer of review. Based upon input from the program management group, the Director of FHA’s Office of Insured Health Care Facilities makes a recommendation to the FHA Commissioner, who then makes the final decision. 
It generally takes FHA longer to process applications than it takes private bond insurers. According to program data, it took FHA an average of 265 days to process the 11 applications for hospital mortgage insurance that it endorsed in fiscal year 2005. According to FHA, processing times vary with the complexity of the project and may be affected by issues requiring a hospital to rethink or resubmit its application, including issues that are beyond HUD’s control. In contrast, according to the private bond insurers and investment bankers that we interviewed, it generally takes private insurers up to 60 days to process an insurance application, sometimes less. While FHA’s average processing time is higher than that of private bond insurers, it has decreased from an average of 399 days in fiscal year 1999. According to FHA, processing times have improved as a result of implementing the preliminary review process, which disqualifies hospitals that do not meet the program’s minimum requirements. FHA uses many of the same techniques that private insurers use to monitor insured hospitals. Both FHA and private bond insurers identify the riskiest hospitals in their portfolios for closer monitoring. Since November 1999, FHA has placed on a priority watch list hospitals it determines are at risk of having a claim filed within the next 12 months. FHA considers a hospital for inclusion on the priority watch list if certain financial criteria are not met. For example, if the ratio measuring a hospital’s ability to pay its mortgage payments with cash generated from current operations (the debt service coverage ratio) falls below an acceptable level, the hospital may be placed on the watch list. A hospital can also be placed on the list if FHA becomes aware of other conditions at the hospital, such as management or personnel problems.
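The watch-list screen described above can be expressed as a simple calculation. The ratio definition follows the report (cash generated from current operations over mortgage payments due); the 1.25 "acceptable level" below is an illustrative assumption, since the report does not state FHA's actual cutoff.

```python
# Sketch of the priority-watch-list screen described above. The 1.25
# threshold is an assumed value for illustration only; the report does not
# specify FHA's actual "acceptable level."

def debt_service_coverage_ratio(operating_cash_flow, annual_debt_service):
    """Cash generated from current operations divided by mortgage
    (principal and interest) payments due in the same period."""
    return operating_cash_flow / annual_debt_service

def candidate_for_watch_list(dscr, acceptable_level=1.25):
    # Falling below the acceptable level makes the hospital a candidate;
    # per the report, other conditions (e.g., management problems) can
    # also trigger placement on the list.
    return dscr < acceptable_level

dscr = debt_service_coverage_ratio(2.2e6, 2.0e6)  # 1.1 under these inputs
```

Under this assumed threshold, a hospital with a ratio of 1.1 would be flagged, while the 2.18 median reported later for the portfolio would not.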
As of December 2005, FHA data showed that 11 of the 59 insured hospitals were on this list, representing an unpaid (insured) principal balance of approximately $762 million. Private insurers also assess the risk of the hospitals that they insure in order to identify those that should be monitored more closely. For example, one private bond insurer explained that it monitors compliance with loan agreements by reviewing financial statements, documentation of payer mix (i.e., proportion of reimbursement from Medicare, Medicaid, private insurance, etc.), and notices of litigation, among other things. As a part of their monitoring efforts, both FHA and private bond insurers monitor agreements that exist between themselves and the insured hospitals. These agreements specify the requirements that an insured hospital must comply with in order to maintain the insurance. Agreements may cover issues such as the debt-service coverage ratio; liquidity, or the ability to convert assets to cash; and activities that a hospital cannot undertake without approval by the insurer. Both FHA and private insurers require hospitals to request waivers from agreement requirements if they are not going to meet them. FHA and private insurers monitor hospitals’ compliance with these agreements through various means, such as by evaluating changes in indicators of financial performance, as reported in hospitals’ financial statements. For example, one private bond insurer reported that it monitors hospitals’ cash on hand, and FHA monitors hospitals’ debt-service coverage ratios. FHA and private insurers monitor financial statements and other documentation quarterly and annually, respectively, and more frequently for hospitals that are experiencing financial difficulty. Both FHA and private insurers require hospitals that are not in compliance to correct violations within specific time frames. Both FHA and private insurers can require hospitals experiencing financial difficulties to hire consultants.
In some cases, FHA will pay for consultants to identify and suggest solutions to hospitals’ financial difficulties. According to FHA, since fiscal year 2000, it has paid $1.3 million for consultants’ studies of 27 hospitals. However, FHA can also require hospitals to hire and pay for consulting services on their own. Similarly, private insurers can require hospitals to hire consultants to assist them with identifying and addressing problems. The requirement for a hospital to hire a consultant can be triggered if a hospital is not in compliance with its loan agreements, according to both FHA and private bond insurers. FHA and HHS coordinate key activities, including screening applicants, underwriting loans, and monitoring insured hospitals. While FHA has established performance measures for both coordinated tasks and tasks delegated to HHS through an interagency agreement, it does not collect data with which to assess most of these measures. FHA’s primary guidance for the program has not been updated in over 20 years and, therefore, does not reflect key changes in eligibility criteria. FHA and HHS coordinate to implement the hospital program based upon FHA’s experience with promoting housing construction through its housing mortgage insurance programs and HHS’s hospital and health care expertise. As previously noted, FHA is responsible for management of the program and formally delegates certain responsibilities to HHS. A Memorandum of Agreement (MOA) between FHA and HHS outlines the duties and responsibilities of each agency in carrying out the Hospital Mortgage Insurance Program, including coordinated activities and tasks that FHA delegates to HHS. In accordance with this agreement, both FHA and HHS staff are involved with the screening of applicants during the preapplication meetings.
FHA’s policy is to include senior FHA staff and legal counsel, the account executive and client service team members (both of which can be either FHA or HHS staff), and engineering staff from HHS, among others, in such meetings. This policy helps ensure that preapplication discussions with applicants are coordinated between FHA and HHS. FHA and HHS also coordinate activities during the underwriting review portion of the application process, which is the process used by FHA to assess the risk of a potential loan to the GI/SRI fund. The nature of coordination at this level depends on the staffing of the account executive and client service team positions, since these positions can be filled by either FHA or HHS staff or a combination of both. The account executive and client service team are responsible for underwriting activities, including analysis of the market and financial feasibility of the project. In addition, HHS engineers review all design and construction aspects of the proposed project. Appendix II presents the roles and responsibilities of each agency in more detail. FHA and HHS use regular meetings of the program management group to coordinate additional activities. This group, composed of senior FHA and HHS staff, meets weekly to assist account executives and client service teams as they review applications for mortgage insurance and monitor insured hospitals. Minutes of program management group meetings that we reviewed show joint FHA and HHS discussion of new applications, as well as issues associated with the existing portfolio. According to investment bankers, hospital associations, consulting firms, and selected hospitals we spoke with, coordination between FHA and HHS is generally seamless. The fiscal year 2002-2005 MOA between FHA and HHS provides for FHA to establish performance measures and use them to evaluate tasks.
While the MOA between FHA and HHS contains 22 performance measures, FHA has tracked actual performance for only 2 of these measures: 1 for processing complete applications within 120 days and 1 for processing loan modification requests within 30 days. As a result, it is not possible to evaluate how well the agencies perform in implementing the program. According to FHA officials, the agency never intended to track these measures, or use them as actual measures of performance, but rather to show FHA’s expectations of HHS. Neither HUD’s fiscal year 2005 performance plan nor its performance and accountability report includes other performance measures for this program. Moreover, OMB did not assess this program as a part of its fiscal year 2005 Program Assessment Rating Tool (PART), which is used to assess the performance of federal programs. Appendix III provides more detailed information about the 22 performance measures contained in the MOA between FHA and HHS. Analysis of the two performance measures for which data are collected shows that FHA is not meeting its performance goals for those measures. Based upon analysis of data from the Hospital Mortgage Insurance Management Information System, we determined that FHA did not meet its goal of processing 75 percent of hospital mortgage insurance applications within 120 days. Although FHA received no more than 10 applications each year between fiscal years 2002 and 2005, FHA and HHS never processed more than 2 within 120 days (see fig. 1). In addition, according to FHA, the agency did not meet its goal of processing at least 75 percent of loan modification requests within 30 days. However, analysis of available data shows that FHA and HHS improved from processing 45 percent of loan modification requests received in fiscal year 2002 within 30 days to processing 71 percent in fiscal year 2005.
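The timeliness goal above is a simple share-within-threshold calculation. The sample processing times below are hypothetical; per the report, FHA processed no more than 2 of up to 10 annual applications within 120 days during fiscal years 2002 through 2005.

```python
# Minimal check of the 120-day processing goal discussed above.
# The list of processing times is hypothetical, for illustration only.

def share_within(processing_days, limit_days):
    """Fraction of applications processed within the time limit."""
    return sum(d <= limit_days for d in processing_days) / len(processing_days)

times = [95, 140, 210, 118, 300, 265, 180, 230]  # hypothetical, in days
share = share_within(times, 120)                  # 2 of 8 applications
met_75_percent_goal = share >= 0.75
```

With only 2 of 8 hypothetical applications under 120 days, the computed share is 25 percent, well short of the 75 percent goal, mirroring the pattern the report describes.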
FHA has not tracked other performance measures related to activities that are coordinated, or can be done, by both FHA and HHS staff. For example, according to one performance measure, hospitals with a weakening financial position should be identified early enough to allow time for the account executive to provide technical assistance and undertake default prevention measures. Since such hospitals are identified through FHA’s priority watch list system, these data are readily available for measurement. Similarly, another performance measure is designed to capture the soundness of analysis performed by client service teams, which can include both FHA and HHS staff, in assessing insurance applications. FHA has also not tracked this measure. FHA also does not track performance measures of activities that it delegates to HHS. For example, one measure is designed to capture the number of complaints and compliments about HHS’s timeliness, helpfulness, courtesy, and understanding. According to FHA, the agency has not tracked this or other measures because it has not had enough problems with HHS to warrant establishing a tracking system and because such a system would be both an administrative burden and a poor use of its resources. However, without collecting appropriate information, FHA cannot quantify the input it receives about HHS. In addition, FHA has not tracked performance measures related to construction design and monitoring, for which HHS is responsible. According to FHA, the performance measures exist to indicate FHA’s expectations of HHS’s performance, even though HHS’s performance is not tracked. Several of the performance measures contained in the agreement between FHA and HHS lack the necessary characteristics of performance measures; that is, they are not measurable or objective. As a result, they do not provide useful information about the performance of the hospital program.
For example, the measures related to the number of complaints and compliments about HHS are not measurable in that they do not specify a quantifiable threshold for expected performance. As a result, even if FHA tracked complaints, it would not be possible to tell whether performance is meeting expectations. Other goals lack objectivity in that they require subjective judgment to assess program performance. As an example, one performance measure indicates that “plans and specifications do not need major revisions during the construction process because of significant architectural or engineering errors.” Another indicator states that “preconstruction meetings are thorough and do not precipitate delays in application processing.” In both cases, the performance measures require subjective judgment, because they do not make explicit what constitutes “major,” “significant,” or “thorough.” As we have previously reported, useful performance information is based upon measurable and objective performance measures. If useful performance information were collected, managers could use it to identify problems, determine their causes, and develop corrective actions. (App. III provides a complete list of the performance measures.) While FHA does not track most of the performance measures outlined in the MOA, FHA’s Hospital Mortgage Insurance Management Information System captures a significant amount of quantitative and qualitative data about the performance of the program, which could be incorporated into measurable and objective performance measures. This system captures key loan processing dates, financial performance data over time, and documentation of internal meetings and actions performed by both agencies to assist insured hospitals. Incorporation of this readily available data into meaningful performance measures would enable FHA to better assess its management of the program.
FHA and HHS established a new interagency agreement covering fiscal years 2006 through 2010, which includes many of the same measures as the previous agreement, including those that are not measurable or objective. The new agreement also includes a requirement that HHS provide FHA with an annual report detailing its performance against each of the performance measures in the agreement. However, this interagency agreement does not specify whether and how FHA will track its own performance against the measures. FHA’s primary guidance for its hospital mortgage insurance program has not been updated in over 20 years and does not reflect changes to the program over that time. As a result, this document does not contain current eligibility requirements, which may cause confusion for potential applicants. In 1973, FHA published the Mortgage Insurance for Hospitals Handbook and last updated the handbook in 1984. The purpose of the handbook is to provide complete information about the processing of hospital mortgage insurance, including basic program features and requirements, to hospitals, lenders, sponsors, FHA and HHS personnel, and all other interested parties. According to FHA, the Office of Insured Health Care Facilities has not had adequate staff to revise the handbook and is waiting for a proposed regulation to become final before revising it. Since the handbook has not been updated since 1984, it does not contain current eligibility requirements, policies, and processing procedures. As we have previously reported, internal control standards applicable to federal programs provide that information should be recorded and communicated in a timely manner. The handbook does not reflect key changes that the Hospital Mortgage Insurance Act of 2003 made to the program. 
This act revised the existing requirement that hospitals applying for FHA mortgage insurance have either a Certificate of Need or a state-commissioned study of market need; specifically, it provided that FHA would establish the means for determining market need and feasibility for hospitals. In addition, the 2003 act exempted Critical Access Hospitals (CAH) from the requirement that at least 50 percent of care must be for general acute-care patients. According to one of the mortgage bankers that we met with, the handbook causes confusion because hospitals are uncertain about the requirements applicable to them. As we have previously reported, internal control standards provide that information, such as changes in eligibility requirements and application processing procedures, should be communicated in a timely manner. While FHA publicly communicates program changes through Mortgagee Letters, updating the Applicant’s Guide, distributing copies of its minimum criteria for consideration, and updating its Web page, it has not incorporated all of this updated information into the program’s handbook. All documentation, including the handbook, should be updated in a timely manner. Maintaining current documentation is an internal control that would benefit both those interested in the program and those who administer it. The hospital program is a relatively small program within the broader GI/SRI fund and has a record of recovering claims. Despite its small size, both program and market trends show risks that could affect the hospital portfolio. FHA has mitigation strategies in place to address some risks but does not have a formal strategy to geographically diversify the hospital loan portfolio. The Hospital Mortgage Insurance Program comprises a relatively small part of the GI/SRI fund, representing about 2.9 percent of the GI/SRI fund’s fiscal year 2006 total commitment authority.
Moreover, the approximately $5 billion in loans that FHA currently insures through the program is 6.5 percent of the $77 billion in unpaid principal balance of the fund (see fig. 2). In addition to being a financially small component of the broader GI/SRI fund, the Hospital Mortgage Insurance Program has a record of recovering more than two-thirds of all historical claims, and lenders have not made a claim on an insured loan since 1999. Since the program’s inception in 1968, there have been a total of 22 claims totaling $225 million. Of this amount, FHA recovered 68 percent, or $153 million. In spite of the hospital program’s relatively small size and the relatively good performance history of insured loans, analysis of both program and market trends shows risks that could affect the future performance of the hospital loan portfolio. For example, the average size of loans insured through the program has varied over time but increased from about $26 million in 2002 to over $122 million in 2005. This growth creates financial risk because a claim on one large loan could have a significant impact upon the program. In addition, the majority of the currently insured loans in FHA’s hospital portfolio are less than 10 years old. According to HUD, 70 percent of claims have historically occurred prior to a loan’s tenth year. Currently, the loans that have been insured for less than 10 years have an aggregate unpaid principal balance of $2.8 billion, representing about 57 percent of the portfolio’s aggregate unpaid principal balance (see fig. 3). Comparing FHA data on selected financial indicators with the criteria the agency uses to determine the financial health of program applicants shows some favorable trends but also indicates sources of potential financial risk (see fig. 4). Specifically, our analysis of program data for calendar years 2000 to 2004 shows that some insured hospitals increased their ability to meet their monthly and future mortgage payments.
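Two of the portfolio figures above can be reproduced directly from the report's numbers. The $5 billion total used as the denominator for the loan-age share is the report's approximate aggregate balance, so the computed share comes out near, but not exactly at, the rounded 57 percent the report cites.

```python
# Reproducing two portfolio figures from the report's stated numbers.

claims_paid = 225e6        # 22 claims since the program began in 1968
recovered = 153e6
recovery_rate = recovered / claims_paid          # share of claim amounts recovered

young_loan_balance = 2.8e9  # unpaid principal on loans insured < 10 years
total_balance = 5.0e9       # report's approximate aggregate unpaid principal
young_share = young_loan_balance / total_balance
```

The recovery rate works out to exactly 68 percent, matching the report; the young-loan share comes to 56 percent on these rounded inputs, consistent with the report's "about 57 percent."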
For example, the median debt-service coverage ratio, a measure of a hospital’s ability to pay its mortgage with cash generated from current operations, increased from 1.54 to 2.18. While a value of 2.18 for this ratio indicates a low level of risk, according to FHA criteria, other financial indicators show medium levels of risk. For example, the median number of days of cash on hand and the median current ratio (which compares a hospital’s current assets to its current liabilities) both improved, yet still indicate a medium level of risk to the program. Finally, the median operating margin, which is indicative of a hospital’s ability to control costs and expenses, improved between 2000 and 2004, yet indicates a medium level of risk based on FHA’s criteria. Median financial indicators for the 11 hospitals that FHA has placed on its priority watch list show much greater levels of risk when compared with FHA’s underwriting guidelines (see fig. 5). For these hospitals, performance as measured by all four selected indicators declined from 2000 to 2004. Further, in 2004, three indicators showed a high level of risk, based on FHA’s criteria. For example, according to FHA’s criteria, an applicant with an operating margin of less than zero is considered high risk. The hospitals on FHA’s priority watch list had a median operating margin of -2.65 in 2004. Similarly, according to FHA’s criteria, an applicant with less than 15 days of cash on hand is also high risk, and hospitals on the priority watch list had a median of 3.3 days of cash on hand in 2004. FHA recognizes that the high risk levels of these selected financial indicators are among the reasons that these hospitals are on its priority watch list and are, therefore, subject to closer monitoring to reduce the risk of a claim. Analysis of program data further shows that, while loans are increasingly being insured outside of the Northeast, the program is still concentrated in New York (see fig. 6).
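The criteria above can be sketched as a classification. Only the two high-risk cutoffs (operating margin below zero; fewer than 15 days of cash on hand) come from the report; the medium/low boundaries below are illustrative assumptions, since the report does not state them.

```python
# Risk classification sketch based on the FHA criteria cited above.
# High-risk cutoffs are from the report; medium/low cutoffs are assumed.

def operating_margin_risk(margin_pct):
    if margin_pct < 0:
        return "high"       # per FHA criteria cited in the report
    return "medium" if margin_pct < 3 else "low"   # assumed cutoffs

def days_cash_risk(days):
    if days < 15:
        return "high"       # per FHA criteria cited in the report
    return "medium" if days < 60 else "low"        # assumed cutoffs

# 2004 medians for the 11 priority-watch-list hospitals, from the report.
watch_list_risk = {
    "operating_margin": operating_margin_risk(-2.65),
    "days_cash_on_hand": days_cash_risk(3.3),
}
```

Applied to the 2004 watch-list medians, both indicators land in the high-risk category, which is consistent with the report's finding.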
Though the percentage of the unpaid principal balance concentrated in New York decreased from 89 percent in 2000, 61 percent of the unpaid principal balance in the program remained concentrated in New York in 2005. Of the 30 hospital loans that FHA has insured since 2000, 21 are outside of New York, and 19 are outside of the Northeast region. Since 2003, 5 of the loans insured were for CAHs. Further, 24 out of 25 mortgage insurance applications in development at the time of our study are located outside of the Northeast (see fig. 7). Despite these strides, the high concentration of the program’s unpaid principal balance in New York, as well as concentrations of multiple loans with single borrowers, creates risks. New York hospitals insured through FHA, like hospitals nationwide, rely heavily upon reimbursement through Medicare and Medicaid. Since a portion of Medicaid funding comes from states, any cuts made by the state of New York could have an especially negative impact on the hospital program. Insured hospitals in New York are also vulnerable to other state policies. For example, a task force appointed by the Governor is in the process of identifying New York hospitals for closure or restructuring. The Governor and state legislature have committed state funds to assist in restructuring efforts, and the state has had a history of helping its hospitals avoid defaults. Nevertheless, any recommendations for the closure or restructuring of FHA-insured hospitals may present the risk of an insurance claim. Further, some New York hospitals have multiple loans insured through the program, one with unpaid principal balances totaling approximately $828 million as of December 2005. According to HUD’s comments on the draft of this report, this hospital is a financially sound, well-endowed institution that poses a low risk of default. The hospital program may also face risks from changes in the health care industry at large.
According to industry literature, decreasing revenue streams, increases in the number of uninsured patients, increased competition from specialized facilities, and heightened capital needs are some of the trends that affect all hospitals, including FHA-insured hospitals. We and others have reported that Medicare and Medicaid spending may not be sustainable at current levels. If program cuts occur in Medicaid, for example, states may take cost containment measures to reduce spending. Such measures may include frozen or reduced reimbursement rates for providers and restrictions on eligibility for these programs. In addition, the number of Medicare enrollees is projected to increase as baby boomers age and become Medicare-eligible. These trends will affect all hospitals, including FHA-insured hospitals, which generally have Medicare and Medicaid patients in their payer mix. On average, Medicare discharges for FHA-insured hospitals represented 29 percent of total discharges per hospital, and Medicaid discharges represented 19 percent of total discharges per FHA-insured hospital. (See fig. 8 for Medicare and Medicaid discharges by state.) Stated another way, nearly 50 percent of the reimbursement that program hospitals receive is through Medicare and Medicaid. Hospitals, including FHA-insured hospitals, must also contend with the rising number of underinsured and uninsured patients, which places demands on hospitals to provide care with little to no reimbursement. According to the U.S. Census Bureau, the number of uninsured persons rose from under 40 million in 2000 to approximately 45 million in 2003. This trend may pose a risk to the program. In addition, hospitals in New York, where the hospital mortgage insurance portfolio is concentrated, serve a high proportion of uninsured patients.
Credit rating agencies state that competition is increasing in the health care market as the type of care provided shifts to outpatient and specialty hospitals, which provide profitable services, such as cardiology, surgery, orthopedics, and diagnostic imaging. Specialty facilities providing these services can take patients and revenue from general acute-care hospitals, which supplement revenue shortfalls with profitable services after providing needed, but unprofitable, services to the community. The growth of specialty hospitals, such as ambulatory surgery centers, is strong: the average number of specialty hospital openings has increased from 5 in the 1960s to 27 in the period from 2000 to the present. Hospitals throughout the health care sector face increasing capital demands, yet many have limited access to capital, according to hospital industry literature. For example, hospitals face demand for outpatient services, emergency room upgrades, and technological advancements, which have significant up-front and maintenance costs. A reputable credit rating agency estimates that information technology expenditures now range between 20 and 30 percent of a hospital's capital budget. Financially weaker hospitals have less access to capital, yet often have pent-up capital needs. According to a recent rating agency report, New York hospitals have unmet capital needs as a result of their older infrastructure and because they are generally financially weaker than the average hospital. FHA uses a variety of tools to mitigate risk in the hospital program. For example, during its preliminary assessment of a hospital, FHA reviews the hospital's ability to pay its mortgage by analyzing its debt-service coverage ratio and determining whether this ratio meets FHA's minimum requirement. 
FHA takes other steps when reviewing applications (as discussed previously) designed to keep out excessively risky projects and also imposes requirements on insured hospitals to control risks. These include: assessing the viability of projects at preapplication meetings with key parties; using a comprehensive underwriting process that assesses, among other factors, past and projected financial performance and the demand for the hospital's services; hiring an independent consultant to evaluate the feasibility of the proposed project and its potential risk to FHA; requiring insured hospitals to establish a cash reserve fund sufficient to cover 2 years of mortgage payments; requiring insured hospitals to maintain compliance with key agreements between the hospital and FHA and monitoring these agreements; and considering insured hospitals that fail to meet certain financial criteria for placement on the priority watch list. FHA has also made some efforts to address the risks associated with the geographic concentration of the program in New York. Since 1999, FHA has had goals for geographically diversifying the hospital portfolio. Currently, FHA's goals for diversifying the portfolio include reviewing and processing applications for projects in states other than New York. While the agency does not have a formal strategy for marketing the program outside of New York, it has made some efforts to diversify the hospital portfolio by: simplifying its application process for CAHs and providing rural hospital associations with information about the program; hiring an expert in rural hospitals; visiting hospital association conferences to educate members about the program; and educating HUD field attorneys, mortgage bankers, and consultants about the program. 
HUD has also cooperated with requests for program information from the trade media and assisted other researchers, which resulted in the publication of articles and reports that provided information about the advantages of the hospital program in financing capital projects. A formal strategy, however, would provide the agency with a tool for comprehensively planning for and executing activities that would lead to the geographic diversification of the hospital portfolio. OMB guidance, for example, requires that agencies include a description of the means and strategies that will be used to achieve goals in their strategic plans. Such strategies could include, for example, the processes, skills, technologies, and various resources that will be used to achieve goals. HUD uses a model for estimating annual credit subsidies that does not explicitly consider the impacts of some potentially important factors. HUD’s model incorporates factors and assumptions about how loans will perform, including estimated claim and recovery rates, which are consistent with OMB guidance. HUD has generally calculated a negative subsidy rate for the hospital program, meaning that estimated cash inflows have been greater than estimated cash outflows. However, HUD’s model does not explicitly consider the potential impacts of prepayment penalties or restrictions when estimating prepayments, or the debt-service coverage ratios of hospitals at the time of loan origination. For budgeting purposes, agencies that make loans and provide loan guarantees must estimate the costs to the government over the life of the loans that will be insured, commonly referred to as the subsidy cost. In order to estimate the subsidy cost of the Hospital Mortgage Insurance Program, HUD uses a cash-flow model to project expected net cash flows for all these loans over their entire life. 
HUD’s model is a computer-based spreadsheet that uses assumptions based upon historical and projected data to estimate the amount and timing of claims, subsequent recoveries from these claims, as well as premiums and fees paid by the borrower. In addition, HUD does not consider prepayment penalties and restrictions when it estimates the level and timing of prepayments, which affect estimates of future claims and premiums. HUD inputs its estimated cash flows into the OMB’s credit subsidy calculator, which produces the official credit subsidy rate. A positive credit subsidy rate means that the present value of cash outflows is greater than inflows, and a negative credit subsidy rate means that the cash inflows are estimated to exceed cash outflows. For the hospital program, cash inflows include premiums and fees, servicing and repayment income from notes held in inventory, rental income from properties held in inventory, and sale income from notes and properties sold from inventory. Cash outflows include claim payments and expenses related to properties and notes held in inventory. Since the hospital program’s inception, FHA has paid a total of 22 hospital mortgage insurance claims. The last claim was filed in 1999. Because of the small number of claims, HUD determined that claim rates based solely upon the program’s historical claims experience would not be reliable. As a result, HUD uses a methodology initially developed by OMB to increase its estimated claim rate by assuming that the lenders for some active hospitals would file claims for insurance. HUD refers to this methodology as an artificial default. In determining which loans to artificially default, HUD focuses on hospitals that generally have a higher risk of default, and are therefore on FHA’s priority watch list. 
According to OMB officials, the use of this artificial default accounts for the risks that exist due to the small number of large loans insured, potential changes in Medicare or Medicaid reimbursement rates, and the geographic concentration of the program in New York, which makes the program vulnerable to regional economic conditions. In 10 of the 14 years that HUD has been estimating the cost of the Hospital Mortgage Insurance Program under credit reform, HUD has estimated that the present value of cash inflows from fees, premiums, and recoveries from loans and properties sold would exceed the outflows from claim payments and other expenses related to properties and notes held in inventory. As a result, HUD calculated a negative credit subsidy rate for the hospital program for these 10 years. In the other 4 years, HUD estimated positive or no credit subsidy costs for the program. Figure 9 shows changes in the credit subsidy rate from 1992 to 2005. While HUD's model includes assumptions that are consistent with OMB guidance, such as assumptions on estimated claim and recovery rates and an artificial default methodology to supplement the claims experience, the model does not explicitly consider the potential impact of prepayment penalties or restrictions, even though they can influence the timing of prepayments and claims and the collection of premiums. Including initial debt-service coverage ratios, as a factor predictive of defaults and claim rates, in HUD's cash-flow model for the hospital program could potentially enhance HUD's estimate of the subsidy cost of the program. According to some economic studies, prepayment penalties, or penalties associated with the payment of a loan before its maturity date, can significantly affect borrowers' prepayment patterns. In turn, prepayments affect claims because a loan that is prepaid can no longer go to claim. According to FHA officials, FHA does not place prepayment penalties on FHA-insured hospital loans. 
However, according to the hospital program's regulations, a mortgage loan made by a lender that has obtained the funds for the loan through bonds can carry a prepayment penalty charge and a prepayment restriction on the mortgage's term, amount, and conditions. According to FHA officials and mortgage bankers, prepayment restrictions on hospital loans generally take the form of 10-year restrictions on the prepayment of bonds. While FHA does not maintain data specifically on insured hospitals' bond-financing terms, prepayment restrictions are specified on the mortgage note, which is available to FHA. Moreover, according to the Mortgage Insurance for Hospitals Handbook, FHA has access to bond-financing terms because, upon completion of bond issues, applicants are required to submit bond-related documents to FHA so that FHA can verify that the fees, charges, and other costs conform to those previously approved with respect to the debt restructuring. Incorporating such data into the hospital program's credit subsidy rate model could refine HUD's credit subsidy estimate by enhancing the model's ability to account for estimated changes in cash flows as a result of prepayment restrictions. According to HUD officials responsible for HUD's cash-flow model, prepayment penalties and restrictions are not incorporated into the model because HUD does not collect such data. HUD officials added that, even though the cash-flow model does not explicitly account for prepayment penalties and restrictions, its use of historical data implicitly captures trends that may occur as a result of them. However, by not explicitly incorporating prepayment penalties or restrictions into the cash-flow model, HUD is less able to estimate the impact of changes in the prepayment patterns of current and future cohorts. HUD's cash-flow model also does not consider the initial debt-service coverage ratio of hospital loans at the point of loan origination. 
By initial debt-service coverage ratio, we are referring to the projected debt-service coverage ratio that is considered during loan underwriting. (HUD's cash-flow model does consider the current debt-service coverage ratio of insured hospitals through its artificial default methodology, which, as previously explained, includes hospitals that are on FHA's priority watch list. This list may include insured hospitals if, based upon the last available full year of data, their debt-service coverage ratio is below 1.10.) According to the HUD official responsible for HUD's cash-flow model, the initial debt-service coverage ratio of a hospital at the point of loan origination is not included as part of the cash-flow model for the hospital program because it (1) is not a cash flow, (2) does not vary, and (3) has no predictive value. We agree that a debt-service coverage ratio is not a cash flow. However, initial debt-service coverage ratios potentially affect relevant cash flows, as do other factors that are included in HUD's model but are also not cash flows, such as prepayments. For example, the model considers estimated prepayments because they potentially affect future cash inflows from fees and future cash outflows from claim payments. Initial debt-service coverage ratios are another important factor that may affect cash flows, as loans with lower initial debt-service coverage ratios may be more likely to default and result in a claim payment. They can also be used to assess the financial health of either an applicant or a hospital in the existing portfolio. According to officials from FHA's Office of Insured Health Care Facilities, the projected debt-service coverage ratio is most meaningful for the third or fourth projected year, when construction is most likely to be complete. 
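The debt-service coverage ratio itself is simply net operating income divided by annual debt service. The sketch below applies the 1.10 watch-list threshold cited above to two hypothetical hospitals; the income and debt-service amounts are invented for illustration and are not drawn from program data:

```python
# Debt-service coverage ratio (DSCR): net operating income divided by
# annual debt service. The 1.10 watch-list threshold is the one cited
# in the report; the hospital figures below are hypothetical.

WATCH_LIST_THRESHOLD = 1.10


def debt_service_coverage(net_operating_income, annual_debt_service):
    return net_operating_income / annual_debt_service


def below_watch_list_threshold(dscr):
    """A hospital may be watch-listed if its DSCR falls below 1.10."""
    return dscr < WATCH_LIST_THRESHOLD


# Two hypothetical hospitals (amounts in $ millions).
stronger = debt_service_coverage(31.1, 10.0)  # ample cushion over debt service
weaker = debt_service_coverage(10.5, 10.0)    # barely covers debt service

print(f"stronger DSCR: {stronger:.2f}, watch list: {below_watch_list_threshold(stronger)}")
print(f"weaker DSCR: {weaker:.2f}, watch list: {below_watch_list_threshold(weaker)}")
```

All else being equal, the hospital with the higher ratio has far more income cushion against a missed mortgage payment, which is why the ratio is a plausible candidate input for default modeling.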
Our analysis of projected debt-service coverage ratios, which include the amount of new debt being insured, shows that these ratios varied from 1.48 to 3.11 during the fourth projected year. All other factors being equal, a loan with a debt-service coverage ratio of 3.11 is generally considered less risky than one with a ratio of only 1.48. We also found that economic studies show mixed results regarding the significance of the impact of debt-service coverage ratios on commercial mortgage defaults. Some studies find initial debt-service coverage ratios to be statistically insignificant in modeling commercial mortgage defaults. Other studies indicate that initial debt-service coverage ratios are meaningful factors in modeling default risk and are helpful in predicting commercial mortgage terminations. Analysis of initial debt-service coverage ratio information, which is available in underwriting documents, could identify trends or shifts in the overall risk of the portfolio that should be considered when making credit subsidy estimates. Further, current credit reform guidance calls for agencies to use the best available data when preparing their credit subsidy estimates. The Hospital Mortgage Insurance Program plays an important role by insuring loans for capital improvements at hospitals that, due to their greater financial risks, would otherwise face difficulty in accessing capital. FHA's process for reviewing applications for mortgage insurance, while somewhat lengthier and involving more steps than those of private bond insurers, appears to be a reasonable response to the generally riskier nature of the applicants. Further, the agency's techniques for monitoring insured hospitals are quite similar to those used by private insurers, and the program has operated for several years without experiencing an insurance claim. 
FHA and HHS appear to work together reasonably well in carrying out their respective roles in administering the program. However, it is difficult for us, FHA's managers, or the Congress to assess how well the agencies perform in implementing the program because FHA has not established a set of meaningful program performance measures or collected the information needed to assess performance. We have previously reported on the importance of agencies' collecting useful performance information. If FHA collected useful performance information, such as information based on measurable and objective performance measures, the agency's managers could use it to identify problems, determine their causes, and develop corrective actions. Many program activities, including those delegated to HHS, are recorded in FHA's Hospital Mortgage Insurance Management Information System, and data from this system could be used to establish and monitor useful performance measures. In addition, because FHA has not updated the program handbook since 1984, hospitals, lenders, investment bankers, health care financing agencies, and other interested parties do not have ready access to a consolidated source of current program eligibility requirements, policies, and procedures. The lack of a consolidated source of current information may cause confusion and delay hospitals' ability to prepare applications that meet FHA's criteria. Further, outdated guidance in federal programs is an internal control weakness. Although it represents a relatively small part of HUD's GI/SRI fund, the hospital program insures multimillion-dollar loans that currently total nearly $5 billion. The continued geographic concentration of insured hospitals in the state of New York poses a source of financial risk to the program. 
While this concentration has decreased from its high of 89 percent of outstanding insured principal balance in 2000, the current 61 percent represents a continuing concentration of credit risk. As a result, the program is vulnerable to New York State policies, such as the Governor's call to restructure hospitals, as well as to regional economic trends. While FHA has taken steps in the right direction, it does not have a formal strategy or plan for geographically diversifying the hospital portfolio, which could enhance current efforts to reach this goal. HUD's cash-flow model used to estimate annual credit subsidy rates appears to be consistent with applicable OMB guidance; however, it does not explicitly take into account potentially useful factors such as prepayment penalties and restrictions or the initial debt-service coverage ratio of new loan cohorts. Although the program has not experienced a claim for insurance since 1999, the increasing size of loans insured, geographic concentration in New York and the Northeast, and other factors pose risks to the program. Incorporating additional factors into HUD's model could potentially enhance the agency's estimates of the subsidy cost of the program, provide HUD and congressional decision makers with better cost data to assess the program, and help assure that the program adequately addresses financial risks. To improve management of the Hospital Mortgage Insurance Program and reduce potential risks to the GI/SRI fund, we recommend that the Secretary of Housing and Urban Development direct the FHA Commissioner to take the following three actions: Establish measurable and objective performance measures for the hospital program and collect appropriate information to regularly assess performance against the measures. Update the program handbook to make publicly available current eligibility requirements, policies, and procedures. 
Develop a formal strategy to geographically diversify its portfolio of insured hospitals, including such elements as the processes, skills, technologies, and various resources that will be used to reach diversification goals. To potentially improve HUD's estimates of the program's annual credit subsidy rate, we recommend that the Secretary of Housing and Urban Development explore the value of explicitly factoring additional information, such as prepayment penalties and restrictions, as well as the initial debt-service coverage ratio of hospitals as they enter the program, into its credit subsidy model. We provided a draft of this report to HUD and HHS for their review and comment. In written comments from HUD's Assistant Secretary for Housing–Federal Housing Commissioner, which incorporated comments from HHS, HUD concurred with our four recommendations. However, the agency disagreed with our presentation of certain aspects of the program, commenting that the report's "critique of procedural and technical matters" overshadowed the program's accomplishments. The Assistant Secretary's letter appears in appendix IV, and a letter from HHS appears in appendix V. HUD expressed general agreement with the recommendations and noted actions that it plans to take. 
Specifically, the agency agreed: to develop appropriate performance measures and implement data collection procedures to evaluate both program and contract administration; with the need to consolidate updated eligibility requirements, policies, and procedures into an updated handbook, which it stated its intention to finalize by the end of 2006; to develop a formal strategy to geographically diversify its portfolio of insured hospitals, including such elements as the processes, skills, technologies, and various resources that will be used to reach diversification goals; and to explore the value of explicitly factoring additional information, such as prepayment penalties and the initial debt-service coverage ratio of hospitals as they enter the program, during its annual review of cash-flow modeling techniques for the hospital program. In disagreeing with our presentation of FHA's efforts to diversify the hospital portfolio, HUD commented that diversification has been a top program goal for many years. Our draft report acknowledged that FHA has had goals for geographically diversifying the portfolio since 1999 and provided examples of FHA's diversification efforts. However, in response to the comments, we included additional examples of FHA's efforts. HUD also commented that the report does not appropriately emphasize the success that HUD and HHS have had in working together to implement the hospital program. Our draft report acknowledged the agencies' coordinated involvement with key meetings, underwriting, and monitoring. Further, as the letter from HHS observes, our draft report concluded that the two agencies appear to be working reasonably well together. Because we believe the report accurately characterizes the relationship, we did not change it. 
Finally, HUD commented that the report (1) infers that it has not maintained current policies and procedures and (2) indicates that current eligibility requirements, policies, and procedures are unavailable to the public. Our draft report stated that the handbook does not contain current eligibility requirements, policies, or processing procedures, and acknowledged that FHA publicly communicates program changes through Mortgagee Letters. Nevertheless, in response to HUD's comments, we revised the report to include additional examples of FHA's efforts to communicate changes in eligibility requirements, policies, and procedures. We also continue to emphasize the value of updating all program documentation, including the handbook. HUD also offered comments regarding the report's presentation of risks facing the hospital program, including potential cuts in reimbursement from Medicare and Medicaid, the potential for closures of hospitals in New York stemming from a commission appointed by the Governor, and the large size of some loans. We recognize that potential cuts in reimbursement from the Medicare and Medicaid programs are a risk factor for hospitals in all states; however, New York is unique among states in accounting for over half of the hospital program's insurance portfolio. We revised the report to clarify that, due to this concentration, any cuts that the state of New York makes to its Medicaid program could have an especially negative impact. Regarding the New York Governor's commission, we are aware that state funds are available to assist in restructuring efforts and that the Dormitory Authority of the State of New York is committed to helping its hospitals avoid defaults. However, since there is no guarantee that FHA-insured hospitals will be protected, we continue to believe that a recommendation for their closure or restructuring may present the risk of an insurance claim. 
Finally, we revised the report as HUD suggested to note that the largest single exposure of $828 million is for a hospital that, according to HUD, poses a low risk of default. HUD commented that GAO's presentation of processing times for applications is misleading because it does not mention that there can be periods in which HUD cannot continue to process applications due to factors that applicants must address and that are thus beyond HUD's control. Because HUD's system for tracking application processing times does not capture such periods, it is not possible for GAO to quantify their impact. Further, the report notes that processing times vary with the complexity of the project and may be affected by issues outside of HUD's control. HUD took exception to our conclusion that it is difficult for us, FHA's managers, or the Congress to assess how well the agencies perform in implementing the program because FHA has not established a set of meaningful performance measures, and stated a belief that program results indicate that the program is fulfilling its purpose. While our report acknowledges that the program has had a good performance history, agency managers can use performance measures to improve a program's results. As we note in the report, analysis of performance information helps managers identify problems, identify the causes of problems, and develop corrective actions. In addition, performance information can be used to develop strategies, identify priorities, make resource allocation decisions, and identify more effective approaches to program implementation. HUD disagreed with our suggestion that it include such factors as the initial debt-service coverage ratio in its credit subsidy modeling and noted that two of the studies that we cited found this ratio to be statistically insignificant in predicting commercial mortgage defaults. 
Our draft report in fact stated that economic studies have shown mixed results regarding the significance of the impact of debt-service coverage ratios on commercial mortgage defaults. However, we revised the report to explicitly footnote studies that show initial debt-service coverage ratios to be statistically insignificant and those that indicate that this ratio is a meaningful factor in modeling default risk. We also note that the two studies that found initial debt-service coverage ratios to be statistically insignificant were both based on the same small data set. We acknowledge that HUD's cash-flow model considers the current debt-service coverage ratio of insured hospitals through its artificial default methodology. However, our recommendation is to include the debt-service coverage ratios at origination, so that the risk of loans at origination will be reflected in the credit subsidy rates for the cohort. Finally, our draft report stated that FHA estimates that hospital loans are most likely to experience a claim during their tenth insured year. In its comments, HUD stated that historically 70 percent of claims occurred prior to a loan's tenth year. The statement in our draft report was based on actual historical conditional claim rate data. HUD subsequently provided additional information, which explained that the conditional claim rate peaked due to a single claim with multiple notes. As a result, we revised the report to reflect this additional information. We are sending copies of this report to the Secretaries of the Departments of Housing and Urban Development (HUD) and Health and Human Services (HHS). We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-8678 or woodd@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to the report are listed in appendix VI. Our objectives were to review (1) the design and management of the program, as compared with private insurance; (2) the nature and management of the relationship between the Department of Housing and Urban Development (HUD) and the Department of Health and Human Services (HHS) in implementing the program; (3) the financial implications of the program for the General Insurance/Special Risk Insurance (GI/SRI) fund, including risk posed by program and market trends; and (4) how HUD estimates the annual credit subsidy for the program, including the factors and assumptions used. To review the design and management of the Hospital Mortgage Insurance Program, we interviewed officials at both the Federal Housing Administration's (FHA) Office of Insured Health Care Facilities and the Division of Facilities and Loans within HHS's Health Resources and Services Administration, and reviewed program policies, documentation of application processes, laws, and regulations. To compare the program's design with that of private insurers, we met with private bond insurers and the Association of Financial Guaranty Insurers, credit rating agencies, mortgage and investment banking firms, hospital associations, and state health care financing agencies in New York and New Jersey. To describe how FHA and HHS coordinate the implementation of the hospital program, we interviewed FHA and HHS officials about the responsibilities of each agency in implementing the program. We also reviewed the Memorandum of Agreement between FHA and HHS that describes the division of duties and responsibilities between the two agencies and organizational charts that depict FHA's organization, HHS's organization, and the FHA-HHS interrelationship in program administration. 
We analyzed the extent to which performance measures related to interagency coordination were met by obtaining available data from FHA and analyzing time frames for processing applications and loan modification requests from the Hospital Mortgage Insurance Management Information System (HMIMIS). We compared performance measures with our criteria on performance measures and compared performance measures in the 2002-2005 Memorandum of Agreement with the performance measures in the 2006-2010 Interagency Agreement between FHA and HHS to identify any changes. To identify the financial implications of the program for the GI/SRI fund, we interviewed and obtained documentation from FHA and HHS program officials and analyzed FHA data on program portfolio characteristics, including the number and amount of loans by cohort, current insurance-in-force, and the geographic concentration of loans, claims, and recoveries. Specifically, to obtain the number and amount of active and terminated loans, we created a report from the HMIMIS database, which is updated monthly. To assess the reliability of the HMIMIS data, we reviewed relevant documentation, interviewed agency officials who worked with this database, and conducted electronic testing of the data, including frequency and distribution analyses. We determined the data to be sufficiently reliable to obtain the number and amount of active loans. We corroborated these data with FHA's 2004 report to the Congress. As of December 2005, the administrators provided data from HUD's F-47 database, a multifamily database, to show (1) that there were 59 active hospitals with 74 active loans in the Hospital Mortgage Insurance portfolio and (2) that there had been 341 loans in the portfolio since the inception of the program. 
To assess the reliability of data from HUD's F-47 database, we reviewed HUD's Hospital Mortgage Insurance Program Functional Requirements Document, Procedures for Maintaining Group Records, and other relevant documentation, interviewed agency officials who worked with this database, and conducted electronic testing of the data, including frequency and distribution analyses. Our assessment showed that two loan records were lacking state data and one record was lacking hospital name data, but all were identified by unique project numbers. FHA administrators verified that these loans were endorsed long before electronic loan records were maintained and that they were unable to provide additional information. None of our analyses utilized the missing data elements for the two projects; therefore, there was no impact on this report. We determined the data to be sufficiently reliable to describe the geographic concentration of loans in the program. To determine the proportion of the Hospital Mortgage Insurance Program relative to the larger GI/SRI fund, we reviewed a spreadsheet provided by HUD's Office of Evaluation, dated June 2005, comparing insurance-in-force for the hospital program with that of the GI/SRI fund. To determine the risk posed by insurance claims to the Hospital Mortgage Insurance Program, we analyzed spreadsheets with historic claims and recoveries data provided by HUD's Office of Evaluation and dated August 2005. To determine the geographic concentration of loans and loan prepayment history in the program, we analyzed data current as of December 31, 2005, in an extract of HUD's F-47 database. While we obtained extracts from HUD's F-47 database in April 2005, October 2005, and December 2005, all analyses from F-47 data in the report utilize the December 2005 extract only. 
We also compared data on four financial ratios (debt service coverage, days cash on hand, current ratio, and operating margin), provided from HMIMIS and current as of December 2005, with applicant criteria stated in the Manual of the Hospital Insurance Program. To determine how FHA manages program risks, we interviewed FHA and HHS program officials and reviewed the Mortgage Insurance for Hospitals Handbook and manual to determine steps taken by the agency during the application and monitoring phases of the insurance process. We analyzed cash inflows to the program, including income from notes held in inventory, rental income from properties held in inventory, and sales income from notes and properties sold from inventory. We also reviewed documentation of cash outflows, such as claim payments and expenses related to properties and notes held in inventory. To assess risk based on geographic concentration, we identified the state with the highest unpaid principal balance insured by the program. Finally, to assess risk posed by the geographic concentration of the program's unpaid principal balance, we extracted 2003 data from the Centers for Medicare & Medicaid Services (CMS) database on Hospital Mortgage Insurance Program hospitals active in 2005. We used CMS data to identify the number of discharged patients whose services were paid for through the Medicare/Medicaid programs at hospitals that have loans insured through the program. More than 90 percent of these data have undergone basic edit checks; however, CMS has not yet determined whether these data require an audit. These data may change as they undergo further review by CMS. In addition, CMS does not enforce dates by which hospitals must report data. Thus, at the time of this report, only 49 of the 59 active FHA hospitals had provided data for 2003.
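The four financial ratios compared above have commonly used definitions in hospital finance. The sketch below uses those common formulas with illustrative figures; it does not reflect the program's actual applicant criteria or HMIMIS data.

```python
# Common definitions of the four ratios; all figures are illustrative
# only, not the program's applicant criteria or actual HMIMIS data.

def debt_service_coverage(net_income, depreciation, interest, debt_service):
    # cash available for debt service / annual debt service
    return (net_income + depreciation + interest) / debt_service

def days_cash_on_hand(unrestricted_cash, operating_expenses, depreciation):
    # cash / average daily cash operating expenses
    return unrestricted_cash / ((operating_expenses - depreciation) / 365)

def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

def operating_margin(operating_income, operating_revenue):
    return operating_income / operating_revenue

dsc = debt_service_coverage(10.0, 5.0, 5.0, 10.0)   # 2.0x coverage
doh = days_cash_on_hand(100.0, 465.0, 100.0)        # 100 days of cash
cr = current_ratio(200.0, 100.0)                    # 2.0
om = operating_margin(5.0, 100.0)                   # 5 percent margin
```

Ratios like these are what an insurer would screen against minimum thresholds when evaluating a hospital's application.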
We conducted a literature review and interviewed numerous officials of rating agencies and hospital associations to obtain information on risks due to health care market trends. We conducted the following academic literature searches: (1) the Google Scholar search engine, using the terms “hospital mortgage insurance,” “nursing home mortgage insurance,” “hospital and default and FHA,” and “nursing home and default and FHA”; (2) the PubMed Web site, using the terms “hospital mortgage insurance” and “nursing home mortgage insurance”; and (3) the HUDuser.org Web site, using the terms “hospital mortgage insurance,” “nursing home mortgage insurance,” and “Section 242.” We also searched for Inspectors General and agency reports through HUD and HHS Web sites using the terms “hospital mortgage insurance” and “Section 242.” Finally, we conducted a search on our internal Web site to identify previous work on the Section 242 program. The terms “hospital,” “mortgage insurance,” and “Section 242” were used for the period of January 1995 through March 2005. To determine how HUD estimates the annual credit subsidy rate for the program, we interviewed program officials from HUD's Office of Evaluation and program auditors from the Office of Management and Budget (OMB), reviewed documentation of HUD's credit subsidy estimation procedures, and reviewed the cash-flow model for the program. We also compared the assumptions used in HUD's cash-flow model with relevant OMB guidance and reviewed economic literature on modeling defaults to identify factors that are important for estimation. Additionally, we analyzed data provided by FHA on program hospitals' projected debt-service coverage ratios (at the time of their loan application). HUD's Budget Office provided the program's annual credit subsidy rates for 1992 and 1993, and we obtained this rate for years 1994-2005 from the Federal Credit Supplement of the United States Budget.
Our review did not include an evaluation of underwriting criteria, construction monitoring, or the need for the program. We conducted our work in Albany, New York; Chicago, Illinois; New York, New York; Paterson, New Jersey; Rockville, Maryland; and Washington, D.C., between February 2005 and January 2006 in accordance with generally accepted government auditing standards.

Activities that the Department of Health and Human Services (HHS) and the Federal Housing Administration (FHA) carry out over the life of an insured loan include the following:
- Conduct preliminary review of hospital proposed for insurance.
- Provide applicant guidance and feedback (including preapplication conference).
- Review and approve construction plans, specifications, and contracts.
- Engage independent feasibility consultant.
- Account Executive and review team recommend approval or disapproval to the Program Management Group (PMG); PMG recommends approval or disapproval to FHA management; FHA management recommends approval or disapproval to the FHA Commissioner; and the FHA Commissioner makes the final decision on whether to insure.
- Make final underwriting determinations, conduct any needed legal reviews, issue firm commitment, close, and initially endorse the loan.
- Conduct preconstruction conference, monitor construction work, and process requests for advances of mortgage proceeds.
- Review cost certification, inform lender of maximum insurable mortgage amount, and process final advance.
- Arrange final closing and finally endorse the mortgage.
- Account Executive monitors the hospital's performance by periodically reviewing financial and utilization data.
- Account Executive receives, reviews, and recommends to FHA management approval or disapproval of special requests and loan modifications (for example, partial release of security, transfer of physical assets, bond refundings, or major capital projects); FHA management approves or disapproves special requests and loan modifications.
- Develop and carry out strategies for helping a troubled hospital improve its financial condition and for preventing or curing defaults.
- Engage consultant to review finances and operations of a troubled hospital and to make recommendations for a financial turnaround plan.
- Review quality and condition of the insured hospital loan portfolio.
- Determine the amount of liability for loan guarantees and credit subsidy rates.
- Receive and process assignment of loan and pay insurance claim.
- Review assigned hospital's operational performance and financial condition and conduct site visits as needed.
- Account Executives receive, review, and recommend to FHA management approval or disapproval of proposed workout agreements, mortgage modifications, or note sales.
- Analyze hospital's situation, evaluate alternative uses, secure appraisal, make decision to foreclose, and arrange and hold foreclosure sale.
- Contract for management services and repairs, as needed, to protect the asset if FHA is mortgagee-in-possession or acquires the hospital through foreclosure or deed-in-lieu.
- Develop marketing plan, advertise, and sell hospital.

Performance measures for HHS under the agreement include the following:
- Number of complaints from customers about the HHS staff's lack of helpfulness, timeliness, courtesy, understanding, etc.
- Number of compliments received for HHS staff's helpfulness, timeliness, courtesy, understanding, etc.
- Preliminary information is provided within 2 business days of inquiry.
- Time from receipt of complete application to decision letter, with a target of processing 75 percent of complete applications within 120 days of receipt.
- There are no instances of incomplete applications being received because the applicant was not informed of application requirements.
- Soundness of analysis.
- HUD may consider the following evidence that the team's analysis was flawed: (1) deterioration, within 2 years of the recommendation for approval, of the financial condition of an approved applicant due to conditions that should have been detected in the review, or (2) the ability of a disapproved applicant to subsequently obtain insurance elsewhere on similar terms and conditions within 6 months of the recommendation for disapproval.
- Plans and specifications do not need major revisions during the construction process because of significant architectural or engineering errors or omissions made prior to or during the application process.
- Problems do not arise during the construction period because of significant inconsistencies between contract documents.
- Preconstruction meetings are thorough and do not precipitate delays in application processing for customers.
- Monthly inspection reports support the items and amounts included in monthly draws.
- Change orders are documented, and recommendations are supportable.
- Length of time between the team's receipt of the monthly requisition package and submission of the team's analysis and payment recommendation to HUD.
- Hospital construction project is completed on time and within budget, unless mitigating factors outside HHS's control prevent this.
- Length of time between completion of construction and recommendation for final endorsement; HHS accomplishes all activities in a timely manner and provides assistance and works with the hospital and contractor so as to minimize this interval.
- Final recommendation package is complete, documented, and supportable if any issues or challenges are raised in relation to the construction phase of the project.
- Customers with a weakening financial position are identified early enough to allow time for the Account Executive (AE) to provide technical assistance and undertake default prevention measures before a situation becomes an emergency.
- Each AE develops and maintains a file in HHS's office on each customer with all pertinent information needed to evaluate the customer's condition.
- AEs are not "blindsided" by local, state, and national developments that affect the viability of customers.
- Each customer meeting the conditions for inclusion on the Priority Watch List (PWL) is included on the PWL reports provided to HUD.
- HHS's work assists HUD's goal of zero claim payments.
- Time from receipt of a loan modification request to recommendation to HUD, with a target of processing at least 75 percent of complete loan modification requests within 30 days of receipt.
- HHS provides effective services to reduce or contain costs to the FHA insurance fund for the following activities: (1) transition into the HUD inventory; (2) stabilization of the hospital, including patient, physical, and financial concerns; (3) marketing; and (4) disposition.

Individuals making key contributions to this report included Alison Martin, Lisa Moore, David Pittman, Minette Richardson, Paul Schmidt, and Julie Trinder.

Under its Hospital Mortgage Insurance Program, the Department of Housing and Urban Development's (HUD) Federal Housing Administration (FHA) insures nearly $5 billion in mortgage loans for the renovation or construction of hospitals that would otherwise have difficulty accessing capital.
In response to a requirement in the 2005 Consolidated Appropriations Conference Report, GAO examined (1) the design and management of the program, as compared with private insurance, (2) the nature and management of the relationship between HUD and the Department of Health and Human Services (HHS) in implementing the program, (3) the financial implications of the program to the General Insurance/Special Risk Insurance (GI/SRI) fund, including risk posed by program and market trends, and (4) how HUD estimates the annual credit subsidy for the program, including the factors and assumptions used. The Hospital Mortgage Insurance Program insures the mortgages of hospitals that are generally riskier than those that can obtain private bond insurance. While FHA's process for reviewing mortgage insurance applications includes more steps and generally takes longer than private insurers' processes, the agency monitors active loans with many of the same techniques that private bond insurers use. Under a Memorandum of Agreement, FHA and HHS work together in a variety of ways to review mortgage insurance applications and monitor active loans. However, FHA does not collect data to assess program performance against most performance measures specified in the memorandum, some of which are not objective. Further, FHA has not kept its program handbook of policies and procedures for applicants, lenders, and others up to date. The hospital program is small compared with other programs in the GI/SRI fund, and the losses from claims have been relatively low. Despite the program's relatively small size, some program and market trends may pose risks. For example, 61 percent of the program's total insured, outstanding loan amount is concentrated in New York, which makes the program vulnerable to state policies and regional economic conditions. While FHA has goals to diversify the hospital insurance portfolio and has made efforts to do so, it does not have a formal strategy to achieve these goals.
To estimate the credit subsidy cost, or program costs, over the life of the outstanding loans insured, HUD uses a model that incorporates factors and assumptions about how loans will perform, including estimated claim and recovery rates, which are consistent with federal guidance. However, HUD's model does not explicitly consider some factors, such as the potential impacts of prepayment penalties or restrictions, which, according to some economic studies, are important in modeling default risk.
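Under federal credit reform, a credit subsidy rate is the net present value of estimated cash flows over the life of the insured loans, expressed per dollar of loans insured. The sketch below illustrates only that general idea; the discount rate, premium, claim, and recovery figures are assumptions for illustration and do not represent HUD's cash-flow model or its inputs.

```python
# Illustrative net-present-value sketch of a credit subsidy rate:
# NPV of (claim payments - recoveries - premium income), as a
# percentage of the amount insured. All inputs are assumed values,
# not HUD's cash-flow model or actual program data.

def subsidy_rate(insured, premiums, claims, recoveries, discount):
    npv = 0.0
    for year, (p, c, r) in enumerate(zip(premiums, claims, recoveries), start=1):
        npv += (c - r - p) / (1 + discount) ** year
    return 100 * npv / insured

rate = subsidy_rate(
    insured=1_000_000,
    premiums=[5_000] * 5,             # assumed annual premium income
    claims=[0, 0, 30_000, 0, 0],      # assumed claim payment in year 3
    recoveries=[0, 0, 0, 12_000, 0],  # assumed partial recovery in year 4
    discount=0.05,
)
```

A negative rate indicates that estimated inflows exceed outflows in present-value terms, which is how an insurance program can carry a negative credit subsidy.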
Most Army cardholders properly used their travel cards and paid amounts owed to Bank of America promptly. However, we found that the Army's delinquency rate is higher than any other DOD component or executive branch agency in the federal government. As shown in figure 1, for the eight quarters ending March 31, 2002, the Army's delinquency rate fluctuated from 10 to 18 percent, and on average was about 5 percent higher than the rest of DOD and 7 percent higher than federal civilian agencies. As of March 31, 2002, over 11,000 Army cardholders had $8.4 million in delinquent debt. We also found substantial charge-offs of Army travel card accounts. Since the inception of the travel charge card task order between DOD and Bank of America on November 30, 1998, Bank of America has charged off over 23,000 Army travel card accounts with nearly $34 million of bad debt. As shown in figure 2, the travel cardholder's grade (and associated pay) is a strong predictor of delinquency problems. We found that the Army's delinquency and charge-off problems are primarily associated with young, low- and midlevel enlisted military personnel with basic pay levels ranging from $11,000 to $26,000. A more detailed explanation of each of these grades along with their associated basic pay rates is provided in appendix II. These delinquencies and charge-offs have cost the Army millions of dollars in lost rebates, higher fees, and substantial resources spent pursuing and collecting past-due accounts. For example, we estimated that in fiscal year 2001, delinquencies and charge-offs cost the Army $2.4 million in lost rebates, and will cost $1.4 million in increased automated teller machine (ATM) fees annually. Our work also identified numerous instances of potentially fraudulent and abusive activity related to the travel card. 
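The delinquency and rebate figures above can be illustrated with simple arithmetic. In the sketch below, the 1 percent rebate rate and the outstanding-balance figure are assumptions for illustration; they are not the terms of the Bank of America task order or actual Army balances.

```python
# Illustrative arithmetic only: the rebate rate and the outstanding
# balance are assumed values, not the actual task-order terms.

def delinquency_rate(delinquent_balance, outstanding_balance):
    """Share of the outstanding balance that is past due, as a percent."""
    return 100 * delinquent_balance / outstanding_balance

def lost_rebate(nonqualifying_volume, rebate_rate=0.01):
    """Rebate forfeited on charge volume that does not earn a rebate."""
    return nonqualifying_volume * rebate_rate

rate = delinquency_rate(8_400_000, 60_000_000)   # assumed $60M outstanding
forfeited = lost_rebate(34_000_000)              # charge-offs since 1998
```

Because rebates are typically paid as a small percentage of qualifying charge volume, every dollar that is delinquent or charged off reduces the rebate the Army would otherwise earn.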
We found that during fiscal year 2001 at least 200 Army employees wrote three or more nonsufficient funds (NSF) or “bounced” checks to Bank of America as payment for their travel card bills—a potentially fraudulent act. Appendix III provides a table summarizing 10 examples, along with more detailed descriptions, of cases in which cardholders wrote three or more NSF checks to Bank of America and had their travel card accounts subsequently charged off. For example, in one case, an Army employee from Ft. Jackson, who was convicted for writing NSF checks prior to receiving the government travel card, wrote over 86 NSF checks to Bank of America. Further, we found instances in which cardholders abused their travel cards by using them to purchase a wide variety of personal goods or services that were unrelated to official government travel. As shown in figure 3, government travel cards are clearly marked, “For Official Government Travel Only.” In addition, before receipt of their travel cards, all Army cardholders are required to sign a statement of understanding that the card is to be used only for authorized official government travel expenses. However, as part of our statistical sampling results at the four sites we audited, we estimated that personal use of the government travel card ranged from 15 percent of fiscal year 2001 transactions at one site to 45 percent at another site. Government travel cards were used to pay for such diverse goods and services as dating and escort services; casino and Internet gambling; cruises; tickets to musical and sporting events; personal clothing; closing costs on a home purchase; and, in one case, the purchase of a used automobile. For example, we were able to determine that, during fiscal year 2001, approximately $45,000 was spent Army-wide to purchase cruise packages or to pay for a variety of activities or services on cruise ships. 
We found that charged-off accounts included both those of (1) cardholders who were reimbursed by the Army for official travel expenses but failed to pay Bank of America for the related charges, thus pocketing the reimbursements, and (2) cardholders who used their travel cards for personal purchases for which they did not pay Bank of America. Appendix IV provides a summary table and supporting narrative describing examples of both types of abusive travel card activity. As detailed in appendix V, we also found instances in which cardholders used their travel cards for personal purposes, but paid their travel card bills when they became due. For example, we found that a Lieutenant Colonel used his travel card to purchase accommodations and tickets to attend the Tournament of Roses in Pasadena, California. These cardholders benefited by, in effect, getting interest-free loans. Personal use of the cards increases the risk of charge-offs related to abusive purchases, which are costly to the government and the taxpayer. We also found several instances of abusive travel card activity where Army cardholders used their cards at questionable establishments such as gentlemen's clubs, which provide adult entertainment. Further, these clubs were used to convert the travel card to cash by supplying cardholders with actual cash or “club cash” for a 10 percent fee. For instance, a cardholder may charge $330 to the government travel card at one of these clubs and receive $300 in cash. Subsequently, the club receives payment from Bank of America for a $330 restaurant charge. For fiscal year 2001, we identified about 200 individuals who charged almost $38,000 at these establishments. For example, we found that 1 cardholder obtained more than $5,000 in cash from these establishments. 
We found little evidence of documented disciplinary action against Army personnel who misused the card, or that Army travel program managers or supervisors were even aware that Army personnel were using their travel cards for personal use. For example, a civilian employee working at the Pentagon on a classified program used her travel card for personal purchases of about $3,600 and subsequently wrote four NSF checks for over $7,700 to Bank of America. The cardholder's account was subsequently charged off when the cardholder failed to pay the bill. The employee's supervisor was not aware that the employee had any potentially fraudulent and abusive activity related to the travel card. In another example, a California National Guardsman with over $5,400 of charge-offs associated with authorized travel, for which the Army reimbursed the cardholder, was subsequently promoted from Major to Lieutenant Colonel. In addition, we found that 38 of the 105 travel cardholders we examined whose accounts had been charged off still had active secret or top-secret clearances as of June 2002. Some of the Army personnel holding security clearances who have had difficulty paying their travel card bills may present security risks to the Army. Army regulations provide that an individual's finances are one of the key factors to be considered in deciding whether an individual should continue to be entrusted with a secret or top-secret clearance. However, we found that Army security officials were unaware of these financial issues and consequently could not consider their potential effect on whether these individuals should continue to hold security clearances. For fiscal year 2001, the Army had significant breakdowns in key internal controls over individually billed travel cards. The breakdowns stemmed from a weak overall control environment, flawed policies and procedures, and a lack of adherence to valid policies and procedures.
These breakdowns contributed to the significant delinquencies and charge-offs of Army employee account balances and potentially fraudulent and abusive activity related to the travel cards. At the four units we audited, we found management was focused primarily on delinquencies and often only after severe problems were discovered and major commands began demanding improved performance in reducing the amount of such delinquencies. There were few indications that management placed any emphasis on controls designed to prevent or provide for early detection of travel card misuse. In addition, we identified two key overall control environment weaknesses: (1) the lack of clear, sufficiently detailed policies and procedures and (2) limited travel card audit and program oversight. First, the units we audited used DOD's travel management regulations (DOD Financial Management Regulation, volume 9, chapter 3) as the primary source of policy guidance for management of the Army's travel card program. However, in many areas, the existing guidance was not sufficiently detailed to provide clear, consistent travel management procedures to be followed across all Army units. Second, as recognized in the DOD Inspector General's March 2002 summary report on the DOD travel card program, “[B]ecause of its dollar magnitude and mandated use, the DOD travel card program remains an area needing continued emphasis, oversight, and improvement. Independent internal audits should continue to be an integral component of management controls.” However, the DOD Inspector General report noted that only two internal review reports were issued from fiscal year 1999 through fiscal year 2001 concerning the Army's travel card program. We found that this overall weak control environment contributed to design flaws and weaknesses in a number of management control areas needed for an effective travel card program.
For example, many problems we identified were the result of ineffective controls over issuance of travel cards. Although DOD's policy allows denial of travel cards for certain groups or individuals with poor credit histories, we found that, without exception, the Army processed all travel card applications it received, regardless of an applicant's credit history. For the cases we reviewed, we found a significant correlation between travel card fraud, abuse, and delinquencies and individuals with substantial credit history problems. The prior and current credit problems we identified for Army travel cardholders included charged-off credit card and automobile loans, defaulted and foreclosed mortgages, bankruptcies, and convictions for writing NSF checks. Also, agency program coordinators (APCs), who have the key responsibility for managing and overseeing travel cardholders' activities, are essentially set up to fail in their duties because they are given substantial responsibility for a large number of cardholders (for example, up to 1,000 cardholders per APC) and little time to do this collateral duty. Military personnel who are responsible for and rated on other job responsibilities, such as airport security, are given the APC role as "other duty as assigned." With a high level of APC turnover (particularly military APCs, who at one of the locations we audited were reassigned about every 6 months), and only minimal time allotted to perform this collateral duty, we found that APCs generally were ineffective in carrying out their key travel card program management and oversight responsibilities. Table 1 summarizes our statistical tests of four key control activities related to basic travel transaction and voucher processing at four Army locations. Substantial delays in travel voucher reimbursements to cardholders can have a significant impact on high delinquency rates.
For example, such delays at the California National Guard contributed to the high delinquency rate for that unit. We found a substantial number of California National Guard employees and several employees at other units audited who may have been due payments for late fees because their reimbursements were late. We also found errors in travel voucher processing that resulted in both overpayment and underpayment of the amounts that cardholders should have received for their official travel expenses. DOD has taken a number of actions focused on reducing delinquencies. In October 2000, the Vice Chief of Staff of the Army issued a directive to cut the Army's delinquencies by 50 percent by the end of March 2001. Further, the Vice Chief of Staff established a goal of a delinquency rate of no more than 4 percent of active cardholders as soon as possible and ordered commanders throughout the Army to provide additional attention to the government travel card program. Beginning in November 2001, DOD began a salary and military retirement offset program—similar to garnishment. As a result of these actions, Army experienced a significant drop in charged-off accounts in the first half of fiscal year 2002. In addition, DOD has encouraged cardholders to make greater use of split pay disbursements. This payment method, by which cardholders elect to have all or part of their reimbursement sent directly to Bank of America, has the potential to significantly reduce delinquencies. Split disbursements are a standard practice of many private sector companies. DOD reported that for about 27 percent of the travel vouchers paid in April 2002 at one of its major disbursing centers, cardholders elected this payment option. Further, the DOD Comptroller created a DOD Charge Card Task Force to address management issues related to DOD's purchase and travel card programs. We met with the task force in June and provided our perspectives on both programs. 
The task force issued its final report on June 27, 2002. However, we have not yet had an opportunity to review the report's findings in detail. To date, many of the actions that DOD has taken primarily address the symptoms or "back-end" result of delinquency and charge-offs after they have already occurred. We are encouraged by the DOD Comptroller's recent announcement concerning the cancellation of all travel cards of cardholders who have not been on official government travel within the last 12 months. Actions to implement additional "front-end" or preventive controls will be critical if DOD is to effectively address the high delinquency rates and charge-offs, as well as potentially fraudulent and abusive activity, discussed in this testimony. To that end, we will be issuing a related report in this area with specific recommendations, including a number of preventive actions that, if effectively implemented, should substantially reduce delinquencies and potentially fraudulent and abusive activity related to the travel cards. For example, we plan to include recommendations that will address actions needed in the areas of exempting individuals with a history of financial problems from the requirement to use a travel card; providing sufficient infrastructure to effectively manage and provide day-to-day monitoring of travel card activity related to the program; deactivating cards when employees are not on official travel; moving toward mandating use of split disbursements; providing strong, consistent disciplinary action to employees who commit fraud or abuse the travel cards; and ensuring that information on any financial problems related to the travel cards of any cardholders with secret or top-secret security clearances is provided to appropriate security officials to consider in determining whether such clearances should be suspended or revoked. Mr. Chairman, Members of the Subcommittee, and Senator Grassley, this concludes my prepared statement.
I would be pleased to respond to any questions that you may have. For future contacts regarding this testimony, please contact Gregory D. Kutz at (202) 512-9095 or kutzg@gao.gov or John J. Ryan at (202) 512-9587 or ryanj@gao.gov. We used as our primary criteria applicable laws and regulations, including the Travel and Transportation Reform Act of 1998 (Public Law 105-264), the General Services Administration's (GSA) Federal Travel Regulation, and the Department of Defense Financial Management Regulations, Volume 9, Travel Policies and Procedures. We also used as criteria our Standards for Internal Control in the Federal Government and our Guide to Evaluating and Testing Controls Over Sensitive Payments. To assess the management control environment, we applied the fundamental concepts and standards in the GAO internal control standards to the practices followed by management in the six areas reviewed. To assess the magnitude and impact of delinquent and charged-off accounts, we compared the Army's delinquency and charge-off rates with those of the other DOD services and the other executive branch agencies in the federal government. We also analyzed the trends in the delinquency and charge-off data from fiscal year 2000 through the first half of fiscal year 2002. We also used data mining to identify Army individually billed travel card transactions for audit. Our data mining procedures covered the universe of individually billed Army travel card activity during fiscal year 2001 and identified transactions that we believed were potentially fraudulent or abusive. However, our work was not designed to identify, and we did not determine, the extent of any potentially fraudulent or abusive activity related to the travel cards.
To assess the overall control environment for the travel card program at the Department of the Army, we obtained an understanding of the travel process, including travel card management and oversight, by interviewing officials from the Office of the Under Secretary of Defense (Comptroller); the Department of the Army; the Defense Finance and Accounting Service (DFAS); Bank of America; and GSA, and by reviewing applicable policies and procedures and program guidance they provided. We visited four Army units to "walk through" the travel process, including the management of travel card usage and delinquency. We visited the DFAS Orlando location to "walk through" the voucher review and payment process used for two of the four Army locations we tested. We also assessed actions taken to reduce the severity of travel card delinquencies and charge-offs. Further, we contacted one of the three largest U.S. credit bureaus to obtain credit history data and information on how credit scoring models are developed and used by the credit industry for credit reporting. At each of the Army locations we audited, we also used our review of policies and procedures, the results of our walk-throughs of travel processes, and other observations to assess the effectiveness of controls over segregation of duties among persons responsible for preparing travel vouchers, processing and approving travel vouchers, and certifying travel voucher payments. To test the implementation of key controls over individually billed Army travel card transactions processed through the travel system (including the travel order, travel voucher, and payment processes), we obtained and used the database of fiscal year 2001 Army travel card transactions to review random samples of transactions at four Army locations.
Because our objective was to test controls over travel card expenses, we excluded credits and miscellaneous debits (such as fees) from the population of transactions used to select random samples of travel card transactions to review at each of the four Army units we audited. Each sampled transaction was subsequently weighted in the analysis to account statistically for all charged transactions at each of the four units, including those transactions that were not selected for review at those locations. We selected the four Army locations for testing controls over travel card activity based on the relative size of travel card activity at the 13 Army commands and of the units under these commands, the number and percentage of delinquent accounts, and the number and percentage of accounts charged off. We selected two units from the Army’s Forces Command because that command represented approximately 19 percent of travel card activity, 22 percent of the delinquent accounts, and 28 percent of accounts charged off during fiscal year 2001 across the Army. We also selected an Army National Guard location because the Army National Guard represented 13 percent of the total travel card activity, 22 percent of the delinquent accounts, and 15 percent of charge-offs for fiscal year 2001. The Special Operations Command represented about 6 percent of the Army’s charge card activity, 5 percent of the delinquent accounts, and 4 percent of Army travel card accounts charged off in fiscal year 2001. Each of the units within the commands was selected because of the relative size of the unit within the respective command. Table 2 presents the sites selected and the universe of fiscal year 2001 transactions at each location.
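The weighting applied to each sampled transaction reduces to simple arithmetic; the sketch below uses hypothetical figures (not the actual audit data) to illustrate how a sample result is projected to a unit’s universe of charged transactions:

```python
# Illustrative only: estimate a population failure rate from a random sample,
# weighting each sampled transaction to stand for the unit's full universe.
# All figures are hypothetical, not the actual audit data.

def estimate_failure_rate(universe_size, sample):
    """sample is a list of booleans: True = transaction failed the control."""
    sample_rate = sum(sample) / len(sample)
    # Each sampled transaction represents universe_size / len(sample) transactions.
    weight = universe_size / len(sample)
    estimated_failures = sum(sample) * weight
    return sample_rate, estimated_failures

# A hypothetical unit with 50,000 charged transactions and a 100-transaction sample
rate, failures = estimate_failure_rate(universe_size=50_000,
                                       sample=[True] * 12 + [False] * 88)
print(f"{rate:.0%} of sampled transactions failed; "
      f"estimated {failures:,.0f} failures in the unit's universe")
```

In the actual audit, such estimates apply only to the case study site sampled, not to all Army cardholders.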
We performed tests on statistical samples of travel card transactions at each of the four case study sites to assess whether the system of internal control over the transactions was effective and to provide an estimate of the percentage of transactions that were not for official government travel by unit. For each transaction in our statistical sample, we assessed whether (1) there was an approved travel order prior to the trip, (2) the travel voucher payment was accurate, (3) the travel voucher was submitted within 5 days of the completion of travel, and (4) the travel voucher was paid within 30 days of the submission of an approved voucher. We considered transactions not related to authorized travel to be abusive and to have been incurred for personal purposes. The results of the samples of these control attributes, as well as the estimate for personal use—or abuse—related to travel card activity, can be projected to the population of transactions at the respective case study site only, not to the population of travel card transactions for all Army cardholders. Table 3 shows the results of our test of the key control related to the authorization of travel (approved travel orders were prepared prior to dates of travel). Table 4 shows the results of our test of the effectiveness of controls over the accuracy of travel voucher payments. Table 5 shows the results of our tests of two key controls related to timely processing of claims for reimbursement of expenses related to government travel—timely submission of the travel voucher by the employee and timely approval and payment processing. To determine if cardholders were reimbursed within 30 days, we used payment dates provided by DFAS. We did not independently validate the accuracy of these reported payment dates.
We briefed DOD managers, including officials in DOD’s Defense Finance and Accounting Service; Army managers, including Assistant Secretary of the Army (Financial Management and Comptroller) officials, Army Forces Command and Special Operations Command unit commanders, unit-level APCs, Army National Guard Bureau management, and the California National Guard Adjutant General; and Bank of America officials on the details of our review, including our objectives, scope, and methodology and our findings and conclusions. We incorporated their comments where appropriate. With the exception of our limited review of access controls at the California National Guard, we did not review the general or application controls associated with the electronic data processing of Army travel card transactions. We conducted our audit work from December 2001 through July 2002 in accordance with generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. Following this testimony, we plan to issue a report, which will include recommendations to DOD and the Army for improving internal controls over travel card activity. Tables 6 and 7 show the grade, rank (where relevant), and the associated basic pay rates for 2001 for the Army’s military and civilian personnel, respectively.

In fiscal year 2001, the Army had 430,000 individually billed travel card accounts and about $619 million in related charges. Most Army cardholders properly used their travel cards and promptly paid amounts owed. However, the Army’s delinquency rate is higher than that of any other Department of Defense (DOD) component or executive branch agency. GAO also identified numerous instances of potentially fraudulent and abusive activity related to the travel cards.
During fiscal year 2001, at least 200 Army employees wrote three or more nonsufficient funds or "bounced" checks to Bank of America as payment for their travel bills--potentially fraudulent acts. GAO found little evidence of documented disciplinary action against Army personnel who misused the card, or that Army travel program managers or supervisors were even aware that travel cards were being used for personal use. For fiscal year 2001, the Army had significant breakdowns in key internal controls over individually billed travel cards that stemmed from a weak overall environment, flawed policies and procedures, and a lack of adherence to valid policies and procedures. These breakdowns contributed to the significant delinquencies and charge-offs of Army employee account balances and potentially fraudulent and abusive activity related to the travel cards. DOD has taken a number of actions focused on reducing delinquencies. As a result of these actions, Army experienced a significant drop in charged-off accounts in the first half of fiscal year 2002. |
Many businesses, including corporations, partnerships, and any business that has employees, are required to request an Employer Identification Number (EIN) from IRS to be used in filing returns. Entities that must file business returns include corporations, partnerships, trusts, estates of decedents, and government agencies. On its EIN application, a business gives IRS information about its structure and whether or not it has employees. Based on this information, IRS establishes an account for the business, notifies the business of its EIN and filing requirement, and records its filing requirement on IRS’s BMF. The filing requirements on the BMF are the basis for IRS’s efforts to ensure that businesses file their required returns. Businesses may have to file several types of returns. A business that does not file the return in IRS’s records by the due date, including any extensions, is considered by IRS to be a potential nonfiler. This is true even where the business has filed a different return than the one IRS expects, e.g., where a partnership has restructured itself as a corporation and filed a corporate return. (For more information on selected business returns and restructuring, see app. II.) Employment taxes from employers and employees account for the largest share of revenue collected from businesses. Employment tax returns report income taxes withheld on behalf of employees, the employees’ share of Federal Insurance Contributions Act (FICA) taxes (that is, Social Security and Medicare taxes), and the employer’s matching share of FICA taxes. A business with employees, regardless of its structure, is generally required to file employment tax returns. In fiscal year 2009, IRS collected an estimated $792.8 billion in FICA taxes and $880.8 billion in individual income tax withholding, or 71.4 percent of all federal tax collections. Businesses also may be required to file an annual return reporting income and losses.
Businesses structured as C corporations are generally required to file annual income tax returns even when they have no taxable income. C corporations pay the corporate income tax. Other types of businesses, such as S corporations and partnerships, are also required to file annual returns, but their income is not taxable at the business level. Rather, income and losses are generally passed through to others, e.g., to the shareholders of an S corporation or the partners of a partnership. Many businesses are also required to file third-party information returns about various payments they make. Payments subject to information reporting include interest earned from banks, mortgage interest paid, wages paid, and some payments to contractors. Over 30 types of information returns are filed on businesses. For tax year 2008, IRS received 421.5 million such returns. Of these, about 347.5 million, or about 80 percent of all information returns filed on businesses, reported broker and barter transactions (Form 1099-B). IRS also receives information returns on the amount of federal contract obligations made to businesses awarded federal contracts. GAO, TIGTA, and IRS itself have documented long-standing issues with IRS’s business nonfiler compliance activities. Each year IRS identifies a large number of potential business nonfiler cases, more than IRS has the capacity to work. Many cases go unresolved, and many that IRS does pursue are closed with a determination that the business does not owe IRS a return—a generally unproductive use of IRS’s enforcement resources. In 2005, TIGTA found that IRS’s nonfiler efforts for individuals and businesses were fragmented and recommended that IRS develop a coordinated national strategy. Following TIGTA’s report, in August 2007 IRS adopted a Servicewide Nonfiler Strategy, governed by the IRS Enforcement Committee.
The Strategy recognized that large inventories and pursuit of unproductive business nonfiler cases continued to present challenges. The Strategy further noted that IRS did not apply resources to more productive business nonfiler cases but rather to cases closed with a determination that the taxpayer did not owe IRS a tax return. As one of several goals, the Strategy proposed to expand the use of third-party information and research tools to enhance identification, selection and resolution of nonfiler cases. The Strategy also set a goal of developing and implementing consistent Servicewide performance and outcome measures to determine the impact of its initiatives on filing compliance. To provide Servicewide oversight for all IRS nonfiler initiatives and actions, IRS established the Nonfiler Executive Advisory Council (NFEAC), a Servicewide body chartered by the IRS Enforcement Committee and consisting of representatives from all IRS divisions. The NFEAC was to coordinate nonfiler initiatives across IRS’s operating divisions. In addition, its mission included developing, monitoring, and measuring the effectiveness of the Strategy across all IRS divisions. Following adoption of the Nonfiler Strategy, IRS developed several nonfiler initiatives affecting how it identifies and pursues nonfilers. The initiative aimed at addressing long-standing business nonfiler issues is the Business Master File Case Creation Nonfiler Identification Process (BMF CCNIP). This project, implemented in April 2009, uses third-party information data and IRS account data to select potential business nonfiler cases for pursuit based on the likelihood of securing returns and revenue. This change represents a modernization of IRS’s business nonfiler compliance activities as well as the introduction of a concept—use of information return data—we have long endorsed. (For additional information on IRS’s process for identifying and pursuing business nonfilers, see app. III.) 
The Nonfiler Strategy also envisioned that IRS would use state data in its nonfiler activities. IRS officials told us that IRS originally planned to expand BMF CCNIP to include use of state tax information in its business nonfiler activities, ultimately from IRS’s State Reverse File Match Initiative (SRFMI), an initiative aimed at matching state and federal taxpayer data to identify noncompliance with federal tax law by individual and business taxpayers. Another expansion was to develop business rules that would close cases where filing requirements no longer existed. At the time we finished our work, no documentation was available on the planned expansions, and IRS officials told us that these were on hold pending funding. According to IRS, the primary challenge for IRS in developing a business tax gap estimate is a lack of data. IRS officials told us IRS has no plans to develop a business nonfiler estimate due to a lack of the necessary data. They said that IRS’s tax gap estimates for individual and estate nonfiling were comprehensive, but data similar to that used in those estimates do not exist for businesses. According to IRS officials we spoke with and an expert on tax gap estimation issues we consulted, no comparable population data set of all U.S. businesses exists and developing one would be very expensive. IRS officials we spoke with identified a number of alternative methods for conducting a comprehensive study of the business nonfiler tax gap, but also stated that these studies would be costly, overly complex, or inconsistent with other estimates. We agree that a comprehensive approach may not be feasible, but there may be ways IRS could build a partial estimate of business nonfilers. A partial estimate could be based on IRS’s inventory of over 40 million potential nonfiler cases. IRS does not know what share of its inventory represents instances of actual nonfiling. 
On the basis of IRS’s historical experience, many of the businesses in the inventory do not have a current filing requirement. For example, they may have closed, merged with another business, no longer have employees, or filed under a different EIN. Table 1 shows IRS’s inventory for selected business return types. IRS could estimate the extent of actual nonfiling among businesses with EINs by taking a sample of each type of return, such as C and S corporation returns, from this inventory and thoroughly investigating them. The results would not be comparable to IRS’s estimate for individual nonfiling because they would not include businesses not already in the inventory, but this study would begin to quantify the extent of business nonfiling and could give IRS a better basis to decide what priority it should place on this type of noncompliance. Despite its limitations, this type of estimate could give IRS information that would be useful in its long-term strategic planning. If done with a sufficient sample size, IRS could determine the characteristics of nonfiling entities and use this information to make changes to its nonfiler compliance activities as appropriate. On the basis of the results of this work, IRS could then decide whether the benefits of a larger study to quantify the revenue impact of business nonfiling would outweigh the costs. As previously noted, IRS has already taken actions to address the long- standing issues presented by business nonfilers. The BMF CCNIP represents a significant modernization of IRS’s business nonfiler compliance program. BMF CCNIP’s use of information return and account data for the first time gives IRS a way to identify those potential nonfilers most likely to be active businesses. Prioritizing business nonfiler cases based on information return and account data could increase productivity without any increase in resources. 
In addition to the BMF CCNIP, IRS has developed overall performance measures that could be used to gauge the success of the full range of its business nonfiler compliance activities, from identification through case selection to pursuit by IRS’s collections functions. However, IRS does not have all the information it needs to know how well the new initiative is working. As the BMF CCNIP was being designed, IRS developed goals and measures that could be used to assess its progress. A key goal for the BMF CCNIP was a 50-percent reduction in the number of unproductive cases. This was based on an IRS research finding that where businesses had information return data, closures of cases as “not liable to file a return” were reduced by 50 percent. In 2006, IRS developed a Performance Management Plan for the BMF CCNIP, which established performance measures aligned to the IRS Strategic Plan and identified sources of data that could be used to monitor the goals. This plan stated that baseline data to track progress towards goals would be from 2005. IRS officials told us that BMF CCNIP management reports and data needed to gauge program performance were not yet available. IRS planned a key report that would show, for each case, the resolution type, selection code, return type, whether the return was secured, and the revenue collected with the return. IRS plans to use the report in assessing the effectiveness of the selection codes and tracking the volume of cases closed as not liable to file a return. Officials did not know when the report would be available. As of June 2010, BMF CCNIP staff were working on developing the specifications for this report, and no deadline for its completion had been set. BMF CCNIP management reports that were operational at the time we finished our work were being used to monitor workload, for example, by return type, selection code, and the IRS service center that processed the case.
In discussing plans to assess the BMF CCNIP, officials also said that using data from before the start of the CCNIP would be difficult and baseline program data to track progress would come from 2010. IRS officials said that until additional BMF CCNIP management reports are developed they were using other routine reports to monitor the response rate to business nonfiler notices as an indicator of BMF CCNIP effectiveness. Officials said the response rate to notices had doubled since the start of BMF CCNIP, increasing from about 15 percent to about 30 percent. Officials interpreted this increase as showing that the new system is having a positive impact but noted that it was too soon to identify a trend. In addition to measures specific to the BMF CCNIP, IRS has also developed four Servicewide performance and outcome measures for IRS’s nonfiler activities overall. As of December 2009 IRS had data for its four performance measures for its individual nonfiler program including trend data going back to fiscal year 2005. At the time we finished our work, IRS did not have comparable information on business nonfilers it could use to identify trends, assess how well the new initiative is working, or decide whether adjustments needed to be made. Voluntary filing rate. The voluntary filing rate is defined as the total number of required returns filed on time divided by the estimated number of returns required to be filed. At the time we finished our work IRS had not estimated a voluntary filing rate for business nonfilers. As discussed earlier in this report, no data set for the population of all U.S. businesses exists that could be used to estimate the tax gap for businesses or the total number of business returns that should have been filed. Percentage of returns secured. This measure is calculated by dividing the total number of nonfiler returns secured during the fiscal year by the total number of nonfiler cases closed. 
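The two measures just defined are simple ratios; the sketch below uses hypothetical figures for illustration (IRS lacks the business-side data needed to compute the first measure at all):

```python
# Hypothetical figures for illustration only. IRS has no estimate of the
# number of business returns required to be filed, so the voluntary filing
# rate cannot actually be computed for businesses.
returns_filed_on_time = 900_000
returns_required_estimate = 1_200_000
voluntary_filing_rate = returns_filed_on_time / returns_required_estimate

# Percentage of returns secured: nonfiler returns secured during the fiscal
# year divided by total nonfiler cases closed.
nonfiler_returns_secured = 40_000
nonfiler_cases_closed = 160_000
pct_returns_secured = nonfiler_returns_secured / nonfiler_cases_closed

print(f"voluntary filing rate: {voluntary_filing_rate:.0%}")  # 75%
print(f"returns secured: {pct_returns_secured:.0%}")          # 25%
```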
IRS originally planned to develop management reports on business nonfiler cases comparable to the management reports it uses to calculate this measure for individual nonfiler cases. Developing the business reports, however, presented technical difficulties. At the time we finished our work IRS was planning to use BMF CCNIP management reports not yet developed as the data source. Repeater rate. Under the Servicewide definition established by the NFEAC, a repeat nonfiler is defined as a current-year nonfiler that was also a nonfiler in any year of a 2-year look-back period. IRS’s automated systems do not track repeat nonfiling by businesses. IRS has identified recidivism as a significant problem among individual nonfilers but has no way to know if this is also a problem among businesses. If it is, IRS will not be able to assess the effect of current and planned changes to its business nonfiler compliance activities on repeat business nonfiling due to a lack of baseline data. Efficiency rate. The efficiency rate is calculated by summing all individual and business nonfiling closed cases and dividing by the number of staff-years expended. IRS officials told us efficiency is calculated on a combined basis because IRS does not differentiate between individual and business cases when tracking staff time expended. IRS’s Servicewide nonfiler efficiency measure does include data on business nonfilers, but without separate business and individual measures, IRS has no way to compare the relative efficiency of the two types of cases. At the time we finished our work IRS officials told us they planned to put BMF measures on the agenda for the September 2010 NFEAC meeting. However, it is not clear whether the meeting will include setting a deadline for developing such measures. Selection codes are a key feature of the BMF CCNIP because they distill the business information that IRS has on a case into a prioritized code. 
Since IRS seeks to reduce the number of cases closed as “not liable to file a return,” the design and priority order of the codes are important to program success. If selection codes do not accurately identify businesses with a greater potential for securing delinquent returns and generating more revenue, the proportion of unproductive cases may not decline. IRS did not test or evaluate the selection codes prior to the beginning of the BMF CCNIP in April 2009. Rather, IRS developed the selection codes using input from those in the agency with knowledge of business nonfiler activity. According to those involved in the process, selection codes were developed through discussions among IRS staff working on business nonfiler programs and included representation from multiple IRS business operating divisions. However, IRS did not conduct a formal study or pilot test to aid in designing the selection codes. Since BMF CCNIP implementation, IRS has been monitoring the selection codes, including changes in the number of taxpayer responses to notices. According to IRS officials, BMF CCNIP issues are discussed at two annual meetings. One meeting concerns coordinating workflow for both business and individual nonfiler programs, and the other is a meeting of BMF CCNIP stakeholders where work plans are reviewed and any changes to the system, including to the selection codes, are discussed. IRS has made some changes to refine the selection codes but has not formally evaluated them. As a result, IRS does not know if the changes improved the codes, nor does it have a basis for knowing whether it now has an optimal set. A more formal and extensive evaluation could give IRS data to identify any need to change selection code priority or to create new or redefine existing selection codes. Our past work has shown that evaluations are beneficial in generating information to guide program decisions.
IRS’s Performance Measures Plan for the program identifies many of the components of an evaluation, including goals for the program and potential data sources to monitor it. However, this plan does not present a method for conducting an evaluation or a timeline for its completion. IRS officials told us they plan to revisit the selection codes and evaluate the BMF CCNIP in the future, but they have no formal evaluation plan or timetable. They told us that it was too early to evaluate the BMF CCNIP due to the time needed for a case to go through all the stages of pursuit. According to the officials, the earliest date when complete information would be available to analyze the effectiveness of the BMF CCNIP would be 2011. Those directly involved with the BMF CCNIP said that choices of selection codes in weekly case selection were being made with an evaluation in mind, so they attempted to select cases for pursuit from a wide variety of selection codes. In our analysis of fiscal year 2009 management report data, we found that cases had been selected from across most selection codes. This practice may provide useful data, but without a formal evaluation, IRS will not know how the selection codes are affecting program outcomes. When a nonfiler case is closed, IRS collections staff use a two-digit closing code that in some instances provides information on why a case was closed. For some types of cases closed as not liable to file a return, the closing code may explain why. For example, a case can be closed as not liable to file for the period in question because the business was identified as a subsidiary and the parent company filed the return. A case may also be closed as not liable to file if IRS determines that little or no tax is due from the business. Each of these situations has a separate and distinct two-digit closing code. In contrast, there are other closing codes that do not specify why the case was closed as not liable to file.
These types of closing codes specify which collections function closed the case but do not provide any additional information. Our analysis of IRS management data shows that of the cases closed as not liable to file in fiscal year 2009, 65 percent were assigned closing codes that do not indicate the reason the case was closed. More detailed closing code information could be useful to IRS by providing information on business nonfiler cases that it currently lacks. IRS’s closing codes do not specify many of the reasons that a case could be closed as not liable to file. For example, a business may no longer be operational but may have failed to indicate to IRS that its last return was a final return. IRS does not have closing codes for other types of situations, such as when a business has changed its structure but failed to notify IRS or where a business does not have employees for a given tax period but failed to indicate on its last filed employment tax return that it is a seasonal employer. Because there are no closing codes indicating these reasons for case closure, IRS does not know the extent to which these situations are a problem and it cannot begin to identify actions that might reduce their frequency. Developing more detailed closing codes could provide data that would be valuable in program evaluation. Depending on results, the data might also lead to education and outreach activities targeted at reducing the number of identified business nonfilers. For example, better information on case closing decisions might identify a need to improve guidance or forms. Under BMF CCNIP, information returns play an important role in selecting potential nonfiler cases for investigation because they are good indicators of business activity. Information returns also play an important role in making case closure decisions on whether a business is liable to file a return. 
The Internal Revenue Manual (IRM) requires that, before closing a case as not liable to file a return, IRS collections staff do a full compliance check, including checking whether information returns and other IRS records indicate business activity. For example, where a business taxpayer claims not to have been operational for the tax period under investigation, collections staff are to review information returns to determine if there was business activity. If this check of information returns shows that there was business activity, staff are not to close the case until more research is performed. BMF CCNIP selection codes are concise indicators of what IRS knows about a business’s activity, including its information return income. For this reason, the codes have the potential to be helpful to collections staff when closing cases. Selection codes are readily available on the computer screens that IRS collections staff use to research cases and record case closings. Selection codes can therefore be used to check taxpayers’ claims that they do not owe a return. If the selection code indicates business activity, this could help guide IRS enforcement staff in doing the full compliance check. In December 2009, IRS updated the IRM to include a statement that staff should refer to the selection code to assist them in determining whether a taxpayer is liable to file a return. A 2008 tax examiner training manual also provided guidance on how to effectively use selection codes. Our analysis of nonfiler cases that were selected for work prior to the implementation of BMF CCNIP and that were mostly closed in 2008 and 2009 suggests that full compliance checks may not have been done to the fullest extent possible, since cases with information return income were closed as not liable to file a return. As shown in table 2, 39,931 tax year 2007 partnership and corporation cases with information return income totaling over $193 billion were closed as not liable to file returns.
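The IRM full compliance check described above amounts to a simple decision rule; the sketch below illustrates that rule with hypothetical field names, and is not IRS’s actual system logic:

```python
# Illustrative sketch of the full-compliance-check rule described in the text.
# Field names are hypothetical; this is not IRS's actual case-processing logic.

def may_close_as_not_liable(case):
    """A case should not be closed as 'not liable to file a return' while
    information returns or the selection code still indicate business activity."""
    if case["information_return_income"] > 0:
        # Third parties reported payments to this business: research further.
        return False
    if case["selection_code_indicates_activity"]:
        # The selection code distills account and information-return data.
        return False
    return True

# A hypothetical case with $250,000 of information return income
case = {"information_return_income": 250_000,
        "selection_code_indicates_activity": True}
print(may_close_as_not_liable(case))  # an apparently active business: do not close
```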
It is difficult to determine whether knowing that a business had information return income would have led to different case closure decisions. About 90 percent of the cases shown in table 2 were closed without any explanation. Although the results in table 2 do not indicate that these cases were closed inaccurately, they do call into question the extent to which IRS staff took information return income data into consideration when making decisions to close cases. While information return income does not indicate the amount of tax due, it does indicate business activity, meaning that some of these businesses may have been required to file returns and pay taxes. Our observations shortly after BMF CCNIP implementation and prior to the IRM update at one of the five IRS service centers that process business nonfiler cases suggest that selection codes were not being used in closing cases. Tax examiners we spoke with had mixed awareness of the BMF CCNIP and selection codes. Although the staff were able to view the codes, tax examiners we observed during our site visits did not use selection codes or view information returns when making decisions to close cases. With the December 2009 revisions to the IRM, tax examiners are instructed to use selection codes as indicators of business activity when doing their full compliance checks. However, even in the past they were instructed to use information returns for these checks. Our analysis shows that cases were closed even though information returns showed business activity. To the extent that staff do not make full use of the potential of selection codes and information returns, opportunities are likely being missed to secure tax returns and collect revenue from business nonfilers. As discussed earlier, information return data are good indicators of business activity, but not all payments for goods and services are subject to information return reporting and not all businesses receive information return income.
According to IRS data, about 19 percent of its business nonfiler inventory had selection codes that reflect third-party information. This number should increase once information return requirements for reporting businesses’ credit card payments go into effect in 2012 and requirements for reporting service payments made to corporations go into effect in 2013. Even after these payments are reported to IRS, certain other payments made with cash or by check will not be subject to information reporting. However, there are a number of private sector companies that maintain business activity data, such as data on a business’s gross sales and number of employees, that might help IRS identify business nonfilers and determine whether a business is required to file tax returns. While IRS does not use private sector data to help it determine whether a business should file a tax return, it does have contracts with private sector companies for locating taxpayers’ assets and obtaining credit reports on taxpayers that can be used by its collection field staff during their investigations. To test whether private sector data on business activity could be useful for determining whether businesses are liable for filing tax returns, we matched tax year 2007 nonfiler cases that IRS closed as not liable to file returns with a calendar year 2007 Dun & Bradstreet (D&B) database of businesses located in California and Illinois. Our test results showed that there were a total of 40,223 cases in those two states that IRS closed as not liable to file returns where there was a match on name and address between IRS and D&B records. Of the 40,223 cases, 9,740 were for corporation and partnership delinquent returns and the remaining 30,483 were for delinquent employment tax returns. Of the 9,740 partnership and corporation cases, 7,688 cases had either little or no information return income but, as shown in table 3, had D&B total sales of about $4.1 billion.
Since these 7,688 cases had little or no information return income, IRS would have had little if any business activity data on which to make decisions on whether the businesses were liable to file returns. Private sector data, such as the D&B sales data, could fill that void. Of the 30,483 employment tax cases that were closed as not liable to file employment (Form 941) and unemployment (Form 940) tax returns, 4,523 cases had employees according to D&B data. Table 4 shows that these 4,523 businesses had a total of 11,418 employees in calendar year 2007, which indicates that they may have been required to file employment tax returns. Under BMF CCNIP, IRS identifies potential nonfilers when the Business Master File records for the businesses indicate that they have a requirement to file returns. If the BMF does not indicate a filing requirement, then a potential nonfiler case would not be developed. To determine whether businesses with no BMF filing requirements may be liable for filing returns, we matched business entities that had been established on IRS’s Business Master File in 2006 that had no filing requirements to D&B calendar year 2007 records of businesses that had California and Illinois addresses. The results of the match showed that 39,920 cases matched the names and addresses on both the IRS and D&B records. Table 5 shows that 39,920 cases had total sales of $29.5 billion and 4,185 of the 39,920 cases had a total of 16,869 employees. These data indicate that the businesses were active in 2007 and that the businesses might have been liable for filing income tax or employment tax returns. Taxpayer contact would have to be made in order to determine whether the businesses were liable to file returns. 
In addition to examining potential private sector data, we also examined the Central Contractor Registration (CCR) file, which contains self-reported revenue and employment data on businesses that register annually to be awarded federal contracts, to determine whether it could be used by IRS in its business nonfiler program. This database generally dealt with federal contracts, so its usefulness would be limited to the subset of the total business nonfiler population that had registered for federal contract consideration. We matched the calendar year 2007 CCR file, which contained over 400,000 registrants nationwide, to the tax year 2007 partnership, corporation, and employment tax cases that were closed as not liable to file returns. This match showed that there were 3,589 entities on the CCR file with reported revenue that were closed as not liable to file partnership (1,210 cases) or corporation (2,379 cases) returns. The match also found 10,263 entities on the CCR file that reported having employees but were closed as not liable to file either Forms 941 (8,694 cases) or Forms 940 (1,569 cases). The above data show that there are a number of federal contractors with income that IRS closed as not liable to file returns. As noted earlier, in many cases IRS's records do not indicate the specific reason for closing a nonfiler case; therefore, we do not know why these cases were closed when the CCR data indicate that they may have been required to file returns because they had indications of business activity. IRS does not give a high priority to potential business nonfilers that receive federal contracts when it selects business nonfiler cases for review but does so for federal workers and retirees who fail to file tax returns under its Federal Employee/Retiree Delinquency Initiative (FERDI) program. This program was developed in 1993 by IRS to promote federal tax compliance among current and retired federal employees. 
FERDI cases are given a specific priority selection code and are subject to the full range of compliance treatments, including return delinquency notices and field investigations. According to IRS data, in fiscal year 2009, IRS closed over 100,000 FERDI cases. IRS recognizes that businesses receiving federal contracts should be identified and that appropriate and timely actions should be taken to ensure they remain in full compliance with federal tax laws. IRS also has delinquent return procedures that address federal contractors. IRS has special procedures for investigating federal contractors who have been or will be awarded a contract by the IRS and who owe both outstanding taxes and tax returns. These procedures do not apply to federal contractors who only have unfiled returns. Also, according to IRS, during field return delinquency investigations, revenue officers are instructed to determine on initial contact with all taxpayers if the taxpayer is a federal contractor and, if so, to take prompt action to secure any delinquent business returns including their delinquent taxes. Also, unlike federal employees and retirees covered by the FERDI program, federal contractor cases do not have a specific nonfiler selection code, which could give them a priority ranking at the beginning of the investigation process. Currently, IRS has an indicator on its Business Master File that identifies businesses that have federal contracts, but it is not used to prioritize federal contractor nonfiler cases. The source of the BMF federal contractor indicator is Form 8596 (Information Return for Federal Contracts), which certain federal executive agencies are required to file quarterly to report information about persons with whom they have entered into contracts. 
Since IRS already has a federal contractor indicator on its Master File records, it may be able to cost-effectively develop a specific nonfiler selection code that would give these cases a higher priority in its nonfiler program. Identifying and pursuing nonfilers including businesses is a key part of IRS’s enforcement efforts. Absent a robust nonfiler program, compliant taxpayers will not have confidence that others are paying their fair share. IRS has faced several challenges in its business nonfiler program. IRS generally identifies more potential nonfilers than it can thoroughly investigate, and many of those it does investigate turn out not to owe the return IRS expects based on its records. Our analyses suggest IRS cannot be sure these types of cases are all being closed correctly. IRS has significantly improved its business nonfiler efforts by developing and implementing the BMF CCNIP. This initiative gives IRS for the first time a way to set priorities among its voluminous inventory by making use of information return and other IRS data to predict the likelihood that IRS will secure additional returns and revenue. This initiative should help IRS choose cases to work, but without an estimate of the business nonfiler tax gap, IRS does not have a data-driven basis for allocating resources to its business nonfiler efforts. While IRS has made good progress in implementing BMF CCNIP, it has not calculated the performance measures or planned the evaluations it would need to assess success. IRS also has little data on why it identifies millions of potential business nonfilers only to find that many of them do not owe IRS the return IRS is expecting based on its records. Absent better information on cause, IRS may continue to expend resources on too many unproductive cases, leading to unnecessary taxpayer burden. 
Until and unless IRS has better information, it will not be able to measure its success or identify the best ways to continue to move in the right direction. While IRS is gathering data needed to manage the program, it can also explore opportunities to build on what it has already achieved. IRS could leverage the information in the BMF CCNIP selection codes by using them to help verify taxpayers' claims that they do not owe a return because they have gone out of business. IRS could also explore adding non-IRS data to the BMF CCNIP. Private sector and federal contractor data on business activity would give IRS more third-party information and enlarge the capacity of the BMF CCNIP to identify active businesses, thereby potentially leading to fewer cases being closed as not liable to file a return.

We recommend that the Commissioner of Internal Revenue take the following eight actions:

Understanding the Scope of the Business Nonfiler Population
- Estimate the magnitude of business nonfiling among businesses registered with IRS, using data from its operational files to select cases for further investigation. Based on the results of this work, IRS should develop a tax gap estimate for the impact of business nonfiling insofar as doing so is cost-effective.

Monitoring the Performance of Business Nonfiler Activities
- Set a deadline for developing data that can be used to measure the performance of the BMF CCNIP and its business nonfiler compliance activities overall.
- Develop a separate efficiency measure for business nonfilers insofar as doing so is cost-effective.
- Develop an evaluation plan for the BMF CCNIP selection codes, including both an initial evaluation and an ongoing monitoring plan, and conduct an evaluation based on this plan. Results from the study and the ongoing monitoring could be used to refine the selection codes to improve the effectiveness of the program.

Identifying Additional Actions to Help Achieve the Goal of Fewer Unproductive Cases
- Add closing codes that would better indicate all known causes for "not liable to file" determinations, use this information to analyze causes of unproductive cases, and use the results as appropriate to identify any actions IRS could take, either administratively or through education and outreach, to reduce the number of business nonfiler cases where the filing requirement in IRS's records is not applicable.

Ensuring That IRS Does Not Inappropriately Close Cases as Not Liable to File Returns
- Reinforce to collections staff the need to check for business activity using information return data and selection codes.
- Study the feasibility and cost-effectiveness of using private sector business activity data and federal contract data to make a determination of whether federal contractors and other businesses are liable for filing tax returns.

Ensuring Federal Contractors Comply with Filing Requirements
- Establish a process, similar to the FERDI program for federal workers and retirees, that will give a high priority to businesses identified as potential nonfilers that have federal contracts.

We provided a draft of this report to the Commissioner of Internal Revenue. We received written comments from the Deputy Commissioner, which are reprinted in appendix IV. IRS agreed that identifying and pursuing active business nonfilers is key to enforcement efforts and acknowledged that our recommendations could assist these efforts. IRS agreed with four of our eight recommendations and indicated as discussed below some steps it would take to address the other four. With respect to our recommendation that IRS should estimate the magnitude of business nonfiling by selecting cases from its operational files for further investigation, IRS agreed to collect and report additional data on the number of delinquent business returns identified by its operational programs and the dollars assessed. 
IRS indicated that such data may overstate the extent of nonfiling because they would include cases such as businesses that filed returns under a parent entity. The intent of our recommendation, however, was to have IRS develop an estimate of the number of businesses that were actually liable for filing returns, which would exclude businesses that were not liable to file returns. It is not clear from IRS’s response whether it intends to do the study we recommended or if IRS plans to report only the results currently available from its business nonfiler program. Our recommendation was that IRS draw a sample of potential business nonfilers and thoroughly investigate those cases to estimate the number of actual business nonfilers in IRS’s business nonfiler inventory. With respect to our recommendation that IRS develop an evaluation plan for the BMF CCNIP selection codes, IRS identified monitoring activities for its new BMF CCNIP report as well as additional information that could be used to evaluate the program. IRS also said that data should not be studied until they are complete and available, which IRS estimates to be by the end of fiscal year 2011. We acknowledged many of IRS’s monitoring activities in our report but these do not constitute an evaluation plan that would identify a method for conducting an evaluation and a timeline for its completion. We recognize that time will be needed for cases to complete the collections process and did not propose a timeline for IRS to complete its evaluation. Our recommendation addressed the need for IRS to develop an evaluation plan for the BMF CCNIP selection codes, including both an initial evaluation and an ongoing monitoring plan, and conduct a study based on this plan. 
With respect to our recommendation that IRS should study the feasibility and cost-effectiveness of using private sector business activity data and federal contract data, IRS agreed to evaluate the effectiveness of data mining using the Central Contractor Registration database but did not agree to study the feasibility of using private sector data. IRS stated that a study initiated in fiscal year 2009—which IRS did not provide to us during the course of our audit work—concluded that it would be difficult to quantify benefits because there is not an automated way to effectively match Taxpayer Identification Numbers to purchased lists of business names. IRS’s response, however, does not address our analysis illustrating the use of private sector data in our report, which showed that using such data was not only possible but potentially beneficial. While we cannot determine the revenue implications of these cases including whether potential revenue would exceed IRS’s threshold, our analysis shows that private data can provide information not now available to IRS on the business activity of potential nonfilers. For this reason, we continue to recommend that IRS further explore the feasibility and cost-effectiveness of private sector business activity data. With respect to our recommendation that IRS establish a process for federal contractors similar to the process established by its FERDI program for individuals, IRS agreed to explore the feasibility of establishing a system for prioritizing and routing federal contractor nonfiler cases through its current Inventory Delivery System. IRS also stated that it is working on further actions—including implementing legislative changes—that will identify noncompliant federal contractors. IRS stated that a federal contractor with an unfiled employment tax return is a high priority in the case selection process. 
While employment tax cases are prioritized in IRS’s case selection process, federal contractors do not receive higher priority than nonfederal contractors because there is no selection code specifically aimed at federal contractors. Since IRS already has a federal contractor indicator on its Master File records, our recommendation was based on the assumption that IRS could cost effectively develop a specific nonfiler selection code that would give these cases a higher priority. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Chairman and Ranking Member, House Committee on Ways and Means; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. This report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The objectives of this report were to assess (1) the data challenges of estimating the business nonfiler tax gap, (2) how recent program changes in the Internal Revenue Service’s (IRS) processes and procedures have affected its capacity to identify and pursue business nonfilers, and (3) what opportunities exist for IRS to improve its use of third-party information returns or other sources to identify and pursue business nonfilers. 
To assess the data challenges of estimating the business nonfiler tax gap, we reviewed IRS documents and prior GAO and Treasury Inspector General for Tax Administration (TIGTA) reports that dealt with tax gap measurement and IRS’s National Research Program, which develops data for use in making estimates of the tax gap relating to tax reporting noncompliance. We analyzed Business Master File Case Creation Nonfiler Identification Process (BMF CCNIP) inventory data to determine the number of potential business nonfilers IRS identifies and analyzed IRS’s fiscal year 2009 Collection Activity Reports to determine the number of business nonfiler cases IRS closed as not liable to file returns. We interviewed IRS research officials from the Small Business/Self-Employed (SBSE) and the Research, Analysis, and Statistics (RAS) Divisions on the types of data that would be needed to develop a business nonfiler tax gap estimate and the problems associated with obtaining such data. To assess how recent program changes in IRS’s processes and procedures have affected its capacity to identify and pursue business nonfilers, we reviewed program documents pertaining to BMF CCNIP. These documents dealt with the cost and benefits of the program; program evaluation and performance measurement processes; and procedures for identifying, prioritizing, selecting, working, and closing business nonfiler cases. We also reviewed Internal Revenue Manual (IRM) sections dealing with handling taxpayer responses to delinquent return notices and procedures for closing business nonfiler cases and IRS documents on its Nonfiler Strategy and its implementation. We interviewed IRS officials from SBSE to understand the various operational features and processes associated with the BMF CCNIP. To understand how IRS handles responses to delinquent return notices from businesses, we observed IRS’s collections functions at IRS’s Philadelphia service center. 
To assess what opportunities exist for IRS to improve its use of third-party information returns or other sources to identify and pursue business nonfilers, we identified non-IRS data sources—including government contractor data and private sector data—that could have information on business nonfilers and assessed the potential of this information to help IRS better identify and pursue business nonfilers. To test whether information return income could be useful in making case closure decisions under BMF CCNIP, we matched IRS’s calendar year 2007 Aggregated Information Return (AIR) file, which is used in BMF CCNIP and contained summaries of information returns that were received by IRS, to IRS’s Nonfiler Measurement file that contained data on all tax year 2007 cases that were closed as not liable to file partnership (Form 1065) and corporation (Form 1120) returns. We limited our analysis to tax year 2007 cases that had information return income of $1,000 or more. According to IRS officials, tax year 2007 business nonfiler cases were selected prior to the implementation of BMF CCNIP. Our analysis showed that about 96 percent of the cases were closed in 2008 and 2009 while the remaining 4 percent were closed in 2007 and 2010. In doing this match we eliminated all cases that were closed because they were subsidiaries of other businesses and thus would not have been required to file returns under their Employer Identification Numbers (EIN). We did not follow up on any of the closed cases to determine whether using the information return income data would have resulted in a closure different than not liable to file a return. To determine whether these businesses would have been liable to file returns would have required IRS to reinvestigate the cases. 
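To make the match-and-filter step concrete, the following Python sketch mirrors the analysis just described: join closed-not-liable cases to AIR income summaries, drop subsidiaries (which file under a parent's EIN), and keep cases with $1,000 or more of information return income. The field names, EINs, and records are hypothetical illustrations, not IRS's actual file layouts.

```python
# Sketch of the AIR-to-Nonfiler-Measurement match described above.
# Field names and records are hypothetical, not IRS schemas.

AIR_INCOME_THRESHOLD = 1000  # analysis limited to $1,000+ of information return income

# Aggregated Information Return (AIR) summaries, keyed by EIN (hypothetical)
air_file = {
    "11-0000001": {"info_return_income": 25000},
    "11-0000002": {"info_return_income": 400},
    "11-0000003": {"info_return_income": 8200},
}

# Cases closed as not liable to file partnership/corporation returns (hypothetical)
closed_cases = [
    {"ein": "11-0000001", "form": "1065", "subsidiary": False},
    {"ein": "11-0000002", "form": "1120", "subsidiary": False},
    {"ein": "11-0000003", "form": "1120", "subsidiary": True},  # files under parent's EIN
]

def questionable_closures(cases, air):
    """Closed-not-liable cases that nonetheless show $1,000 or more of
    information return income (subsidiaries excluded, as in the report)."""
    results = []
    for case in cases:
        if case["subsidiary"]:
            continue  # would not be required to file under its own EIN
        income = air.get(case["ein"], {}).get("info_return_income", 0)
        if income >= AIR_INCOME_THRESHOLD:
            results.append((case["ein"], case["form"], income))
    return results

print(questionable_closures(closed_cases, air_file))
# [('11-0000001', '1065', 25000)]
```

In this toy run, only the first case both shows qualifying income and is not a subsidiary, so only it would be flagged for a second look.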
To determine whether the data in IRS’s Nonfiler Measurement file were of sufficient reliability for our analysis, we reviewed the program documentation associated with the file and discussed the various data elements with the IRS staff responsible for the file. As a result of our review and discussions, we determined that the data in this file were of sufficient reliability to be used in our analysis. To test whether private sector data on business activity could be useful for determining whether businesses may be required to file partnership, corporation, and employment tax returns, we matched IRS’s tax year 2007 Nonfiler Measurement file of nonfiler cases that IRS closed as not liable to file returns to a Dun and Bradstreet (D&B) file of businesses located in California and Illinois. We judgmentally selected these two states to get a geographic mix of states that had sufficient cases that were closed as not liable to file tax year 2007 returns to test the viability of using private sector data. The D&B file contained various data on business activity including name, address, sales, and employment information. Combined, California and Illinois had 130,336 or about 14.3 percent of the 914,505 corporations, partnerships, and employment tax cases that were closed as not liable to file tax year 2007 returns. Since the D&B files did not include the businesses’ EINs, the match was made on the businesses’ name and address, which included the street address, state, and ZIP code. To make the name and address matches, we used D&B’s onsite matching software program, which can be used to associate records with differences in name and addresses to a particular entity. Each match is assigned a confidence code from 0 to 10, with 10 being the highest confidence score and 0 the lowest or no match. According to D&B documents, scores of 8 to 10 are considered high-quality matches and were the matches we used for our analysis. 
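The confidence-code screening described above can be illustrated as follows. The 0-to-10 scale and the 8-to-10 "high quality" band come from the report's description of D&B's matching software; the match records themselves are invented examples.

```python
# Sketch of keeping only high-quality D&B name/address matches.
# The 0-10 confidence code and the 8-10 high-quality band are as
# described in the report; the records below are hypothetical.

HIGH_QUALITY = 8  # D&B scores of 8 to 10 are considered high-quality matches

matches = [
    {"irs_name": "ACME WIDGETS INC", "db_name": "Acme Widgets, Inc.", "confidence": 10},
    {"irs_name": "SMITH LLC",        "db_name": "Smyth LLC",          "confidence": 6},
    {"irs_name": "BETA PARTNERS",    "db_name": "Beta Partners LP",   "confidence": 8},
]

def high_quality_matches(candidates, threshold=HIGH_QUALITY):
    """Keep only candidate matches at or above the confidence threshold."""
    return [m for m in candidates if m["confidence"] >= threshold]

kept = high_quality_matches(matches)
print(len(kept))  # 2
```

The middle record (a score of 6) is discarded, which is how the analysis limited itself to the 40,223 high-quality matches.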
Our match of the 130,336 California and Illinois cases resulted in 40,223 high-quality matches (9,740 corporations and partnerships and 30,483 employment tax cases). Of the 9,740 partnership and corporation cases, 7,688 cases had either little or no information return income. Of the 30,483 employment tax cases, 4,523 had employees. Also, to determine whether private sector data could be useful in identifying active businesses that IRS had not identified as nonfilers, we matched the D&B data files of California and Illinois businesses to a Business Master File (BMF) extract of 176,061 entities on the BMF that had been established in calendar year 2006 but had no filing requirements as of September 2009, which was when IRS produced the extract for us. As a data reliability check on this no filing requirement extract, both GAO and IRS staff spot checked selected output from this extract to IRS’s National Accounts Profile (NAP) file, which contains all valid taxpayer names, addresses, taxpayer identification numbers, and filing requirements. These checks showed that the names were valid and that the businesses did not have any filing requirements. The match of D&B data to the no filing requirement extract produced 39,920 cases that were considered to be high-quality matches (i.e., they had confidence scores of 8 to 10) and were the ones we used for our analysis. Since D&B is a commercial business, we were not able to validate the sales and employment data contained in the file. However, according to data D&B officials provided to us, D&B collects its data through direct investigations, such as phone calls to businesses, and reviews of trade records on payment and financial data, public records, and government registries, and Web sources and directories. Also, since D&B data are used by various federal agencies, we determined that the data were of sufficient reliability to be used in our analysis. 
To test whether other federal data sources could be useful for identifying business nonfilers, we analyzed the 2007 Central Contractor Registration (CCR) file, which we received from the General Services Administration, which contains data on businesses that must register at least annually to compete for federal contracts. We tested this file because it contains various entity data such as name, address, and EIN, which could be readily matched to IRS's records. Businesses that want to vie for federal contracts must submit a valid EIN for inclusion onto the CCR. The EINs are validated against IRS's records before they are included in the CCR. Also, the CCR file contains various self-reported business activity data, such as total revenue and number of employees for each business, which could be useful for making decisions on whether a business would be required to file returns. We matched the calendar year 2007 CCR file, which consisted of 441,467 records, to IRS's tax year 2007 Nonfiler Measurement file of cases that were closed as not liable to file returns to determine whether CCR data would identify potential federal contractors that had business activity data that would indicate that they may have been required to file returns. The match identified 3,589 entities on the CCR file with reported revenue that were closed as not liable to file partnership (1,210 cases) or corporation (2,379 cases) returns. The match also found 10,263 entities on the CCR file that reported having employees but were closed as not liable to file either Form 941 (8,694 cases) or Form 940 (1,569 cases). We did not verify the accuracy of the data on the CCR file because these data are self-reported by businesses entering the data onto the CCR database. However, since the EINs on the CCR are validated to IRS's records, we determined that the CCR data we used for our analysis were sufficiently reliable to use in our assessment. 
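Because CCR records carry IRS-validated EINs, this match can key on exact EINs rather than fuzzy names and addresses. A minimal sketch of that join, with hypothetical EINs and amounts:

```python
# Sketch of the EIN-keyed match between the CCR file and cases closed as
# not liable to file. EINs and amounts are hypothetical; the self-reported
# revenue/employee fields are as described in the report.

ccr_registrants = {
    "22-0000001": {"revenue": 1_500_000, "employees": 12},
    "22-0000002": {"revenue": 0,         "employees": 0},
}

closed_not_liable = [
    {"ein": "22-0000001", "form": "941"},
    {"ein": "22-0000002", "form": "1120"},
    {"ein": "22-0000003", "form": "1065"},  # not a CCR registrant
]

def contractors_with_activity(cases, ccr):
    """Closed-not-liable cases whose CCR record shows revenue or employees."""
    flagged = []
    for case in cases:
        record = ccr.get(case["ein"])  # exact EIN join, no fuzzy matching needed
        if record and (record["revenue"] > 0 or record["employees"] > 0):
            flagged.append(case["ein"])
    return flagged

print(contractors_with_activity(closed_not_liable, ccr_registrants))
# ['22-0000001']
```

Only the registrant with nonzero self-reported activity is flagged; the registrant with zero revenue and zero employees and the non-registrant pass through unflagged.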
We conducted this performance audit from March 2009 through August 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Filing requirements for the business returns discussed in this report are as follows:

Unemployment tax return (Form 940). Filed annually; the return is due February 1st after the end of the calendar year. Those who made full payments prior to filing may file by February 10. An extension may be requested via letter; extensions are not to exceed 90 days.

Employment tax return (Form 941). Businesses with one or more employees file Form 941 to report information on employees including wages paid, federal income tax withheld, and Social Security and Medicare taxes paid by employers and employees. Filed quarterly; the return is due the last day of the month following the end of the quarter. Extension requests are not allowed, but Form 941 has a 10-day extended due date if 100% of the tax amount has been deposited on time.

Corporation income tax return (Form 1120). Filed by domestic corporations, unless the corporation meets the criteria and has elected to be treated as an S corporation. The return is used to report income, gains, losses, deductions, and credits, and to figure the income tax liability of a corporation. Filed annually; the return is due by the 15th day of the third month following the end of the corporation's tax year. For example, if the tax year is equivalent to the calendar year, the filing would be due March 15. A business can file IRS Form 7004 to be granted a 6-month extension.

S corporation return. An eligible domestic corporation can avoid double taxation (once to the shareholders and again to the corporation) by electing to be treated as an S corporation. Filed annually; the return is due by the 15th day of the third month following the end of the corporation's tax year. For example, if the tax year is equivalent to the calendar year, the filing would be due March 15. A business can file IRS Form 7004 to be granted a 6-month extension.

Partnership return (Form 1065). A partnership is the relationship existing between two or more persons who join to carry on a trade or business. A partnership must file an annual information return to report the income, deductions, gains, losses, etc., from its operations, but it does not pay income tax. Instead, it generally "passes through" any profits or losses to its partners. The return and Schedule K-1 information returns (which report income shares to partners) are due on the 15th day of the 4th month following the close of its tax year. A business can file IRS Form 7004 to be granted a 5-month extension.

When a business changes its structure or hires employees, the business is required to notify IRS and in some cases may need a new EIN. A business is required to notify IRS if its structure changes, for example if it restructures as an S corporation, a partnership, or a subsidiary of another company. A subsidiary that elects to have its income, losses, and deductions included in the parent business's consolidated income tax return is not required to file an annual return. A business that ceases to operate is expected to inform IRS, including by sending a letter and checking a "final return" box on its income tax return. A business is also required to notify IRS if it stops paying wages or is a seasonal employer. A business that fails to notify IRS of a change affecting its filing requirement risks being identified as a potential nonfiler by IRS when it matches its records against returns filed. IRS identifies potential business nonfilers primarily using its return delinquency check process. Under its new Business Master File Case Creation Nonfiler Identification Process (BMF CCNIP), IRS prioritizes which of these potential business nonfilers will be pursued using information return and historical account data in IRS's records on the business entity. 
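The corporate and partnership due-date rules described above (the 15th day of the third or fourth month following the end of the tax year) can be computed as in the sketch below. This is an illustrative helper, not an IRS tool; it ignores extensions and any weekend or holiday adjustments.

```python
# Sketch of the unextended due-date rules described above: corporate
# returns on the 15th day of the 3rd month after tax-year end, partnership
# returns on the 15th day of the 4th month. Illustrative only; extensions
# and weekend/holiday shifts are not modeled.

from datetime import date

def due_date(tax_year_end: date, months_after: int) -> date:
    """Return the 15th day of the Nth month following the tax year's end."""
    month = tax_year_end.month + months_after
    year = tax_year_end.year + (month - 1) // 12  # roll into the next year if needed
    month = (month - 1) % 12 + 1
    return date(year, month, 15)

# Calendar-year filer (tax year ends December 31):
print(due_date(date(2007, 12, 31), 3))  # 2008-03-15  (corporation)
print(due_date(date(2007, 12, 31), 4))  # 2008-04-15  (partnership)
```

For a calendar-year filer this reproduces the March 15 and April 15 examples in the text; a fiscal-year filer simply shifts the anchor date.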
Once a case has been selected for pursuit, IRS mails the taxpayer a notice of delinquency. If IRS is not successful in resolving a case through this taxpayer correspondence, the case may proceed to one of IRS's collections functions. Figure 1 shows IRS's process for business nonfiler cases through the notice stage. IRS's return delinquency program checks the filing requirement of each business against the returns filed by that business for a given tax period. This process is completed every week for all return types. If IRS identifies a business that has not filed a return for a filing requirement on IRS's BMF a specified number of weeks after the due date for the return including any extensions, a delinquency module is created for the missing return. Previously, the program identified as delinquent only those businesses that had filed in the past and then ceased filing or had made payments to IRS. However, since the introduction of the BMF CCNIP, IRS now includes some entities that have an income tax filing requirement but have never filed. The BMF CCNIP has changed IRS's business nonfiler activities by using several types of IRS taxpayer data provided by businesses and about businesses to create indicators of business activity and prioritize these businesses for pursuit based on the likelihood of generating revenue. The goal of the BMF CCNIP is for IRS to pursue more productive cases by reducing the number of these cases it pursues where the business is not liable to file a return, e.g., because it is no longer active. In this way, IRS aims to better use its limited resources for pursuing business nonfilers. Selection codes are the feature of the BMF CCNIP that assists IRS in prioritizing the inventory and determining which cases should be pursued. Specifically, selection codes are used to determine which cases are sent to IRS's campuses, which are the locations of the IRS service centers that handle initial pursuit activities. 
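The weekly return-delinquency check described above amounts to comparing each business's filing requirements on the master file with the returns actually posted. A minimal sketch, with hypothetical master file and return data:

```python
# Sketch of the return delinquency check: compare each business's filing
# requirements with the returns actually posted, and create a delinquency
# module for each missing return. All data below are hypothetical.

master_file = {
    "44-0000001": {"requirements": ["941", "940"]},
    "44-0000002": {"requirements": ["1120"]},
}

returns_filed = {
    "44-0000001": ["941"],   # filed employment tax, but not unemployment tax
    "44-0000002": ["1120"],  # fully compliant
}

def delinquency_modules(bmf, filed):
    """One (EIN, form) module per filing requirement with no posted return."""
    modules = []
    for ein, entity in bmf.items():
        posted = set(filed.get(ein, []))
        for form in entity["requirements"]:
            if form not in posted:
                modules.append((ein, form))
    return modules

print(delinquency_modules(master_file, returns_filed))
# [('44-0000001', '940')]
```

Here the first business satisfies its Form 941 requirement but not its Form 940 requirement, so a single delinquency module is created for the missing return.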
The campuses will send a taxpayer a notice of delinquency. This notice details the delinquent tax form and period and requests that the taxpayer file the form. The third-party information IRS uses to assign the selection codes comes from three sources:

- The Aggregated Information Return (AIR) file contains data from information return forms such as the Form 1099 series. This file is updated annually.
- The Payer Master File (PMF) contains information on those who file these information returns and make payments documented by the information returns.
- The Combined Annual Wage Reporting (CAWR) file contains information on business payments for employment taxes including Social Security and Medicare.

Selection codes range from 01 to 99 and represent IRS's priority for working cases. Cases with a lower number selection code have a higher priority. Each code indicates characteristics of the information IRS has about the case. Examples of selection codes are "high dollar credits," "high information return income without broker sales," and "broker sales." Selection codes 97 through 99—the lowest priority codes—are typically for those cases with no indication of business activity. In addition to a selection code, a primary code is also assigned to each case. These codes indicate the number of delinquency notices a case should receive once it has been selected and whether the case will be pursued further after the notice stage. Primary codes are determined based upon compliance history and the type of return. At the first stage of the collections process, the notice stage, IRS first attempts taxpayer contact by mailing a delinquency notice. This notice informs the taxpayer of the identified delinquency and provides information on how to respond to the delinquency. According to IRS data, in fiscal year 2009, IRS issued 2.6 million initial notices to business nonfiler cases. 
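The selection-code ordering described above (a lower code means a higher pursuit priority, with codes 97 through 99 for cases showing no business activity) reduces to a simple sort. The inventory records below are hypothetical.

```python
# Sketch of ordering a nonfiler inventory by BMF CCNIP selection code,
# where lower-numbered codes are higher priority. Codes 97-99 (no
# indication of business activity) naturally sort last. Data hypothetical.

inventory = [
    {"ein": "33-0000001", "selection_code": 98},  # no activity indicators
    {"ein": "33-0000002", "selection_code": 5},   # e.g., high dollar credits
    {"ein": "33-0000003", "selection_code": 42},
]

def prioritize(cases):
    """Sort cases so the lowest (highest-priority) selection codes come first."""
    return sorted(cases, key=lambda c: c["selection_code"])

ordered = prioritize(inventory)
print([c["ein"] for c in ordered])
# ['33-0000002', '33-0000003', '33-0000001']
```

The no-activity case (code 98) falls to the bottom of the worklist, which is the behavior the prioritization scheme is designed to produce.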
If a response (either a return or an explanation of why no return is due) is received from the taxpayer, the response is forwarded to a tax examiner in IRS’s Compliance Services Collection Operations (CSCO) function. The tax examiner is responsible for verifying that a return filed matches the filing requirement or that the response otherwise justifies closure of the case. In some cases where taxpayers claim that they do not owe a return, tax examiners are required to perform a full compliance check, which is a method to verify the taxpayer’s response and to ensure that there are no other outstanding modules. If the taxpayer response does not resolve the delinquency, tax examiners will sometimes contact the taxpayer to discuss the matter. The primary code assigned to the case determines what happens next if the taxpayer does not respond. Business nonfiler cases receive Primary Code B, A, or X. If the case received a Primary Code A, it will receive another notice before being forwarded for further pursuit. Primary Code X cases receive one notice, and if IRS receives no response, the case is forwarded for further pursuit after 6 weeks. Primary Code X is reserved for employment tax cases where the tax due by the business for the previous year was above a certain threshold. In some instances, if a delinquent module is identified and the taxpayer already has other modules further along in the pursuit process, the newly identified delinquent module is moved, after receiving the first notice, to the collection function with other modules from the same business; this process is called “association.” If the notice or notices do not elicit a response from the taxpayer, IRS guidelines and a routing program are used to determine the next destination for the case. If a case meets criteria established in the Internal Revenue Manual (IRM), it will go directly to the destination prescribed.
One example of the criteria for a case to use the rules is where the last return amount—the tax liability from the last return—is above a certain threshold. Alternatively, a case is routed further by the Inventory Delivery System (IDS). IDS governs movement to, from, and between IRS pursuit functions. The system makes these determinations based on risk and business rules. These rules include a set of criteria used to score a case based on the following factors: age of case, balance due, number of modules for the entity, the type of return, credit balances, the tax due from the prior year’s tax return, and prior year net tax. In addition to these risk scores, IDS also uses predictive models to generate probability scores. These models predict the likelihood of certain outcomes, including securing a return and securing the full amount of money due. IDS moves cases to one of the following functions after a predetermined number of weeks from when the notice was sent: The Automated Collection System (ACS) is responsible for making telephone contact with taxpayers who have not responded to notices. In some cases, the call site operators who make this contact must research contact information for the delinquent taxpayer. The automated 6020(b) (a6020(b)) program can be used to prepare a substitute return for business nonfilers without the intervention of ACS or the field. This program is limited to employment tax cases with an amount below a certain threshold. This program automatically prepares a return for certain businesses that have not filed based on information that IRS has. The automatically prepared return is then sent to the entity, which has the ability to respond with its own return if it does not accept the prepared return. If no response is received, IRS has the authority to create an assessment for all taxes and penalties due. The Queue is a holding area for cases. Cases can move from the Queue to the field.
In certain circumstances, cases have been routed from the Queue to a6020(b) or CSCO. Revenue officers in the Collection Field Function (the field) make in-person contact with delinquent taxpayers in efforts to secure returns. In addition, if IDS criteria determine that the case is of low enough priority, IDS can close the case. Revenue officers and some call-site operators have the ability to use the 6020(b) program to prepare a return for business nonfilers as well. If IDS does not send a case to a6020(b), the case may then be moved to the Queue, where it will be available for the field to pursue further. When a case goes to the Queue, it is assigned a level of risk and a probability score. The risk level—high, medium, or low—takes into account dollar amount, age of case, and type of return. Cases are assigned to one of four priority groups in the Queue based upon these scores. Those cases that are “high risk” and have a high probability score—indicating a greater likelihood of collecting revenue—become the highest priority cases. The other groups—high risk, medium risk, and low risk—are based solely on risk scores. When group managers need cases for the field, they review cases based on priority for potential selection to the field. If a case remains in the Queue for 52 weeks, it is reevaluated by IDS. Based on this evaluation, it can be sent back to ACS, remain in the Queue, or—if the case has become a low-risk case based on the reevaluation—the case can be closed. Figure 2 provides an overview of the coding systems and automated systems that govern the path of a business nonfiler case that proceeds beyond the notice stage into IRS’s collections functions. In addition to the contact named above, Ralph Block, Assistant Director; Linda Baker; Amy Spiehler; Donna Miller; Jeffrey Niblack; A.J. Stephens; James Ungvarsky; and John Zombro made key contributions to this report.
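The Queue prioritization described above can be sketched roughly as follows. The probability threshold, field names, and grouping function are assumptions for illustration, not actual IDS parameters; the only rules taken from the text are that high-risk cases with a high probability score form the top group and that the remaining three groups follow risk level alone.

```python
# Hypothetical sketch of assigning Queue priority groups. Cases that are
# high risk AND have a high probability score (greater likelihood of
# collecting revenue) form priority group 1; the other three groups are
# based solely on risk level. The 0.8 threshold is an assumed placeholder.

def priority_group(risk_level, probability_score, high_prob=0.8):
    """Return priority group 1 (highest) through 4 (lowest)."""
    if risk_level == "high" and probability_score >= high_prob:
        return 1
    return {"high": 2, "medium": 3, "low": 4}[risk_level]

print(priority_group("high", 0.9))   # high risk, high probability: top group
print(priority_group("high", 0.4))   # high risk alone
print(priority_group("low", 0.95))   # low risk, regardless of probability
```

Note that in the actual process a case's group can change: after 52 weeks in the Queue, IDS reevaluates the case and may reroute or close it.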
The Internal Revenue Service (IRS) does not know how many businesses failed to file required returns, nor does it have an estimate of the associated lost tax revenue--the business nonfiling tax gap. Many cases it does investigate are unproductive because the business does not owe the return IRS expects. GAO was asked to assess (1) the data challenges of estimating the business nonfiler tax gap, (2) how recent program changes have affected IRS's capacity to identify and pursue business nonfilers, and (3) additional opportunities for IRS to use third-party data. GAO reviewed IRS's tax gap estimates, nonfiler program processes and procedures, and matched closed nonfiler cases with various other data. IRS cannot develop a comprehensive estimate of the business nonfiling rate and associated tax gap because it lacks data about the population of all businesses. However, IRS could develop a partial estimate using its business nonfiler inventory. IRS identifies several million potential business nonfilers each year, more than it can thoroughly investigate. IRS could take a random sample of its inventory, thoroughly investigate those cases, and use the results to estimate the proportion of actual nonfilers in its inventory of potential nonfilers. Until recently IRS has not had a way to prioritize cases in its large inventory. IRS modernized its business nonfiler program in 2009 by incorporating income and other data in its records indicating business activity. Active businesses generally have an obligation to file a return. IRS's Business Master File Case Creation Nonfiler Identification Process (BMF CCNIP) now assigns each case a code based on this data. IRS uses the code to select cases to work with the goal of securing tax returns from nonfilers and collecting additional revenue. This is a significant modernization, but IRS lacks a formal plan to evaluate how well the codes are working.
IRS has performance information on its individual nonfiler program but less on its business nonfiler program. Key management reports needed to provide program data are under development but no deadline has been set. IRS could also use more information on why many nonfiler cases are unproductive. This could potentially lead IRS to identify actions that could reduce IRS resources used on these cases and associated taxpayer burden. GAO identified several opportunities including the following to enhance IRS's identification and pursuit of business nonfilers. (1) The new BMF CCNIP selection codes provide a quick way to verify taxpayer statements that a business has ceased operations and does not need to file a return. Collections staff have been instructed to use the codes when making case closure decisions. They were previously instructed to use other income data but GAO's analysis indicated this may not have been done in all cases. (2) Non-IRS data on businesses including federal contractors could be used to verify taxpayer statements about whether a tax return should have been filed. GAO's analysis of cases in two states that were closed as not liable to file a return found 7,688 businesses where non-IRS data showed business activity as measured by sales totaling $4.1 billion. GAO also found cases closed as not liable to file a return involving 13,852 businesses on the federal contractor registry. GAO's analyses illustrated the potential value of non-IRS data but GAO did not assess which non-IRS data would be most useful nor examine the capacity of IRS's systems to use such data on a large scale. 
GAO recommends that the Commissioner of Internal Revenue develop a partial business nonfiler rate estimate; set a deadline for developing performance data; develop a plan for evaluating the selection codes; reinforce the need to use income data and selection codes in verifying taxpayer statements; and study the feasibility and cost-effectiveness of using non-IRS data to verify taxpayer statements. In written comments on a draft of this report, IRS agreed that identifying and pursuing active business nonfilers is key to enforcement efforts and acknowledged that our recommendations could assist these efforts. IRS agreed with four of GAO's recommendations and indicated some steps it would take to address the other four.
Today, the Social Security program does not face an immediate crisis, but it does face a long-range financing problem, driven primarily by known demographic trends, that is growing rapidly. While the crisis is not immediate, the challenge is more urgent than it may appear. Acting soon to address these problems reduces the likelihood that the Congress will have to choose between imposing severe benefit cuts and unfairly burdening future generations with the program’s rising costs. Acting soon would also allow changes to be phased in so that the individuals who are most likely to be affected, namely younger and future workers, will have time to adjust their retirement planning while helping to avoid related “expectation gaps.” On the other hand, failure to take remedial action will, in combination with other entitlement spending, lead to a situation that is unsustainable for both the federal government and, ultimately, the economy. The Social Security system has required changes in the past to ensure future solvency. Indeed, the Congress has always taken the actions necessary to do this when faced with an immediate solvency crisis. I would like to spend some time describing the nature, timing, and extent of Social Security’s financing problem. As you all know, Social Security has always been a largely pay-as-you-go system. This means that the system’s financial condition is directly affected by the relative size of the populations of covered workers and beneficiaries. Historically, this relationship has been favorable. Now, however, people are living longer and spending more time in retirement. As shown in figure 1, the U.S. elderly dependency ratio is expected to continue to increase. The proportion of the elderly population relative to the working-age population in the U.S. rose from 13 percent in 1950 to 19 percent in 2000. By 2050, there is projected to be almost 1 elderly dependent for every 3 people of working age—a ratio of 32 percent.
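The arithmetic behind these dependency ratios is straightforward; a quick sketch follows. The population counts are placeholders scaled to the cited percentages, not census figures, and only the ratio definition itself is taken from the discussion above.

```python
# Sketch of the elderly dependency ratio: elderly population divided by
# working-age population. Inputs are illustrative counts matching the cited
# percentages (13% in 1950, 19% in 2000, projected 32% in 2050), not data.

def dependency_ratio(elderly, working_age):
    """Elderly dependents per person of working age."""
    return elderly / working_age

data = [(1950, 13, 100), (2000, 19, 100), (2050, 32, 100)]
for year, elderly, working in data:
    r = dependency_ratio(elderly, working)
    print(year, f"{r:.0%}", f"(about 1 elderly per {1 / r:.1f} working-age people)")
```

A 32 percent ratio works out to roughly one elderly dependent per three people of working age, matching the "almost 1 for every 3" characterization in the testimony.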
Additionally, the average life expectancy of males at birth has increased from 66.6 in 1960 to 74.3 in 2000, with females at birth experiencing a rise of 6.6 years from 73.1 to 79.7 over the same period. As general life expectancy has increased in the United States, there has also been an increase in the number of years spent in retirement. Improvements in life expectancy have extended the average amount of time spent by workers in retirement from 11.5 years in 1950 to 18 years for the average male worker as of 2003. A falling fertility rate is the other principal factor underlying the growth in the elderly’s share of the population. In the 1960s, the fertility rate was an average of 3 children per woman. Today it is a little over 2, and by 2030 it is expected to fall to 1.95—a rate that is below what it takes to maintain a stable population. Taken together, these trends threaten the financial solvency and sustainability of this important program. The result of these trends is that labor force growth will continue to decline in 2006 and by 2025 is expected to be less than a fifth of what it is today, as shown in figure 2. Relatively fewer U.S. workers will be available to produce goods and services. Without a major increase in productivity or increases in immigration, low labor force growth will lead to slower growth in the economy and to slower growth of federal revenues. This in turn will only accentuate the overall pressure on the federal budget. This slowing labor force growth has important implications for the Social Security system. Social Security’s retirement eligibility dates are often the subject of discussion and debate and can have a direct effect on both labor force growth and the condition of the Social Security retirement program. It is also appropriate to consider whether and how changes in pension and/or other government policies could encourage longer workforce participation. 
To the extent that people choose to work longer as they live longer, the increase in the amount of time spent in retirement could be diminished. This could improve the finances of Social Security and mitigate the expected slowdown in labor force growth. The Social Security program’s situation is one symptom of this larger demographic trend that will have broad and profound effects on our nation’s future in other ways as well. The aging of the labor force and the reduced growth in the number of workers will have important implications for the size and composition of the labor force, as well as the characteristics of many jobs in our increasingly knowledge-based economy, throughout the 21st century. The U.S. workforce of the 21st century will be facing very different opportunities and challenges than those of previous generations. Today, the Social Security Trust Funds take in more in taxes than they spend. Largely because of the demographic trends I have described, this situation will change. Although the trustees’ 2004 intermediate estimates project that the combined Social Security Trust Funds will be solvent until 2042, program spending will constitute a rapidly growing share of the budget and the economy well before that date. Under the trustees’ 2004 intermediate estimates, Social Security’s cash surplus—the difference between program tax income and the costs of paying scheduled benefits—will begin to decline in 2008. By 2018, the program’s cash flow is projected to turn negative—its tax income will fall below benefit payments. At that time, the program will begin to experience a negative cash flow, which will accelerate over time. Social Security will join Medicare’s Hospital Insurance Trust Fund, whose outlays exceeded cash income in 2004, as a net claimant on the rest of the federal budget. (See figure 3.) In 2018, the combined OASDI Trust Funds will begin drawing on their Treasury securities to cover the cash shortfall.
At this point, Treasury will need to obtain cash for these redeemed securities through increased taxes, spending cuts, and/or more borrowing from the public than would have been the case had Social Security’s cash flow remained positive. Whatever the means of financing, the shift from positive to negative cash flow will place increased pressure on the federal budget to raise the resources necessary to meet the program’s ongoing costs. There are different ways to describe the magnitude of the problem. A case can be made for a range of different measures, as well as different time horizons. For instance, the actuarial deficit can be measured in present value, as a percentage of GDP, or as a percentage of taxable payroll in the future. The Social Security Administration (SSA) and CBO have both made projections of Social Security’s future actuarial deficit using different horizons. (See table 1.) CBO uses a 100-year horizon to project Social Security’s future actuarial deficit, while SSA utilizes both 75-year and infinite horizon projections to estimate the future deficit. In addition, SSA and CBO use different economic assumptions for variables such as real earnings, real interest rates, inflation, and unemployment. While their estimates vary due to different horizons and economic assumptions, each identifies the same long-term challenge: The Social Security system is unsustainable in its present form over the long run. Taking action soon on Social Security would not only make the necessary action less dramatic than if we wait but would also promote increased budgetary flexibility in the future and stronger economic growth. Some of the benefits of early action—and the costs of delay—can be seen in figure 4.
This figure compares what it would take to keep Social Security solvent through 2078, if action were taken at three different points in time, by either raising payroll taxes or reducing benefits. If we did nothing until 2042—the year SSA estimates the Trust Funds will be exhausted—achieving actuarial balance would require changes in benefits of 30 percent or changes in taxes of 43 percent. As figure 4 shows, earlier action shrinks the size of the necessary adjustment. As I have already discussed, reducing the relative future burdens of Social Security and health programs is essential to a sustainable budget policy for the longer term. It is also critical if we are to avoid putting unsupportable financial pressures on Americans in the future. Reforming Social Security and health programs is essential to reclaiming our future fiscal flexibility to address other national priorities. Changes in the composition of federal spending over the past several decades have reduced budgetary flexibility, and our current fiscal path will reduce it even further. During this time, spending on mandatory programs has consumed an ever-increasing share of the federal budget. In 1964, prior to the creation of the Medicare and Medicaid programs, spending for mandatory programs plus net interest accounted for about 33 percent of total federal spending. By 2004, this share had almost doubled to approximately 61 percent of the budget. If you look ahead in the federal budget, the Social Security programs (Old-Age and Survivors Insurance and Disability Insurance), together with the rapidly growing health programs (Medicare and Medicaid), will dominate the federal government’s future fiscal outlook. Absent reform, the nation will ultimately have to choose among persistent, escalating federal deficits and debt, huge tax increases and/or dramatic budget cuts.
GAO’s long-term budget simulations show that to move into the future with no changes in federal retirement and health programs is to envision a very different role for the federal government. Assuming that discretionary spending grows with inflation and all existing tax cuts are allowed to expire when scheduled under current law, spending for Social Security and health care programs would grow to consume over three-quarters of federal revenue by 2040. Moreover, if all expiring tax provisions are extended and discretionary spending keeps pace with the economy, by 2040 total federal revenues may be adequate to pay little more than interest on the federal debt. (See figure 5.) Alternatively, taking action soon on Social Security would not only promote increased budgetary flexibility in the future and stronger economic growth but would also make the necessary action less dramatic than if we wait. Indeed, long-term budget flexibility is about more than Social Security and Medicare. While these programs dominate the long-term outlook, they are not the only federal programs or activities that bind the future. The federal government undertakes a wide range of programs, responsibilities, and activities that obligate it to future spending or create an expectation for spending. GAO has described the range and measurement of such fiscal exposures—from explicit liabilities such as environmental cleanup requirements to the more implicit obligations presented by life-cycle costs of capital acquisition or disaster assistance. Making government fit the challenges of the future will require not only dealing with the drivers—entitlements for the elderly—but also looking at the range of federal activities. A fundamental review of what the federal government does and how it does it will be needed. Also, at the same time it is important to look beyond the federal budget to the economy as a whole.
Under the 2004 Trustees’ intermediate estimates and CBO’s long-term Medicaid estimates, spending for Social Security, Medicare, and Medicaid combined will grow to 15.6 percent of GDP in 2030 from today’s 8.5 percent. (See figure 6.) Taken together, Social Security, Medicare, and Medicaid represent an unsustainable burden on future generations. As important as financial stability may be for Social Security, it cannot be the only consideration. As a former public trustee of Social Security and Medicare, I am well aware of the central role these programs play in the lives of millions of Americans. Social Security remains the foundation of the nation’s retirement system. It is also much more than just a retirement program; it pays benefits to disabled workers and their dependents, spouses and children of retired workers, and survivors of deceased workers. In 2004, Social Security paid almost $493 billion in benefits to more than 47 million people. Since its inception, the program has successfully reduced poverty among the elderly. In 1959, 35 percent of the elderly were poor. In 2000, about 8 percent of beneficiaries aged 65 or older were poor, and 48 percent would have been poor without Social Security. It is precisely because the program is so deeply woven into the fabric of our nation that any proposed reform must consider the program in its entirety, rather than one aspect alone. Thus, GAO has developed a broad framework for evaluating reform proposals that considers not only solvency but other aspects of the program as well. The analytic framework GAO has developed to assess proposals comprises three basic criteria: Financing Sustainable Solvency—the extent to which a proposal achieves sustainable solvency and how it would affect the economy and the federal budget. Our sustainable solvency standard encompasses several different ways of looking at the Social Security program’s financing needs.
While a 75-year actuarial balance has generally been used in evaluating the long-term financial outlook of the Social Security program and reform proposals, it is not sufficient in gauging the program’s solvency after the 75th year. For example, under the trustees’ intermediate assumptions, each year the 75-year actuarial period changes, and a year with a surplus is replaced by a new 75th year that has a significant deficit. As a result, changes made to restore trust fund solvency only for the 75-year period can result in future actuarial imbalances almost immediately. Reform plans that lead to sustainable solvency would be those that consider the broader issues of fiscal sustainability and affordability over the long term. Specifically, a standard of sustainable solvency also involves looking at (1) the balance between program income and costs beyond the 75th year and (2) the share of the budget and economy consumed by Social Security spending. Balancing Adequacy and Equity—the relative balance struck between the goals of individual equity and income adequacy. The current Social Security system’s benefit structure attempts to strike a balance between the goals of retirement income adequacy and individual equity. From the beginning, Social Security benefits were set in a way that focused especially on replacing some portion of workers’ pre-retirement earnings. Over time other changes were made that were intended to enhance the program’s role in helping ensure adequate incomes. Retirement income adequacy, therefore, is addressed in part through the program’s progressive benefit structure, providing proportionately larger benefits to lower earners and certain household types, such as those with dependents. Individual equity refers to the relationship between contributions made and benefits received. This can be thought of as the rate of return on individual contributions. 
Balancing these seemingly conflicting objectives through the political process has resulted in the design of the current Social Security program and should still be taken into account in any proposed reforms. Implementing and Administering Proposed Reforms—how readily a proposal could be implemented, administered, and explained to the public. Program complexity makes implementation and administration both more difficult and harder to explain to the public. Some degree of implementation and administrative complexity arises in virtually all proposed changes to Social Security, even those that make incremental changes in the already existing structure. Although these issues may appear technical or routine on the surface, they are important issues because they have the potential to delay—if not derail—reform if they are not considered early enough for planning purposes. Moreover, issues such as feasibility and cost can, and should, influence policy choices. Continued public acceptance of and confidence in the Social Security program require that any reforms and their implications for benefits be well understood. This means that the American people must understand why change is necessary, what the reforms are, why they are needed, how they are to be implemented and administered, and how they will affect their own retirement income. All reform proposals will require some additional outreach to the public so that future beneficiaries can adjust their retirement planning accordingly. The more transparent the implementation and administration of reform, and the more carefully such reform is phased in, the more likely it will be understood and accepted by the American people. The weight that different policy makers may place on different criteria will vary, depending on how they value different attributes. 
For example, if offering individual choice and control is less important than maintaining replacement rates for low-income workers, then a reform proposal emphasizing adequacy considerations might be preferred. As they fashion a comprehensive proposal, however, policy makers will ultimately have to balance the relative importance they place on each of these criteria. As we have noted in the past before this committee and elsewhere, a comprehensive evaluation is needed that considers a range of effects together. Focusing on comprehensive packages of reforms will enable us to foster credibility and acceptance. This will help us avoid getting mired in the details and losing sight of important interactive effects. It will help build the bridges necessary to achieve consensus. One issue that often arises within the Social Security debate concerns the appropriate comparisons or benchmarks to be used when assessing a particular proposal. While this issue may seem to be somewhat abstract, it has critical implications, for depending on the comparisons chosen, a proposal can be made more or less attractive. Some analyses compare proposals to a single benchmark and as a result can lead to incomplete or misleading conclusions. For that reason, GAO has used several benchmarks in assessing reform proposals. Currently promised benefits are not fully financed, and so any analysis that seeks to fairly evaluate reform proposals should rely on benchmarks that reflect a policy of an adequately financed system. Similarly, it is important to have benchmarks that are consistent with each other. Using a benchmark that relies on action relatively soon versus one that posits no action at all is not consistent and could also lead to misleading conclusions.
Estimating future effects on Social Security benefits should reflect the fact that the program faces a long-term actuarial deficit and that conscious policies of benefit reduction and/or revenue increases will be necessary to restore solvency and sustain it over time. A variety of proposals have been offered to address Social Security’s financial problems. Many proposals contain reforms that would alter benefits or revenues within the structure of the current defined benefits system. Some would reduce benefits by modifying the benefit formula (such as increasing the number of years used to calculate benefits or using price-indexing instead of wage-indexing), reduce cost-of-living adjustments (COLA), raise the normal and/or early retirement ages, or revise dependent benefits. Some of the proposals also include measures or benefit changes that seek to strengthen progressivity (e.g., replacement rates) in an effort to mitigate the effect on low-income workers. Others have proposed revenue increases, including raising the payroll tax or expanding the Social Security taxable wage base that finances the system; increasing the taxation of benefits; or covering those few remaining workers not currently required to participate in Social Security, such as older state and local government employees. A number of proposals also seek to restructure the program through the creation of individual accounts. Under a system of individual accounts, workers would manage a portion of their own Social Security contributions to varying degrees. This would expose workers to a greater degree of risk in return for both greater individual choice in retirement investments and the possibility of a higher rate of return on contributions than available under current law. There are many different ways that an individual account system could be set up. For example, contributions to individual accounts could be mandatory or they could be voluntary. 
Proposals also differ in the manner in which accounts would be financed, the extent of choice and flexibility concerning investment options, the way in which benefits are paid out, and the way the accounts would interact with the existing Social Security program—individual accounts could serve either as an addition to or as a replacement for part of the current benefit structure. In addition, the timing and impact of individual accounts on the solvency, sustainability, adequacy, equity, net savings, and rate of return associated with the Social Security system vary depending on the structure of the total reform package. Individual accounts by themselves will not lead the system to sustainable solvency. Achieving sustainable solvency requires more revenue, lower benefits, or both. Furthermore, incorporating a system of individual accounts may involve significant transition costs. These costs come about because the Social Security system would have to continue paying out benefits to current and near-term retirees concurrently with establishing new individual accounts. Individual accounts can contribute to sustainability as they could provide a mechanism to prefund retirement benefits that would be immune to demographic booms and busts. However, if such accounts are funded through borrowing, no such prefunding is achieved. An additional important consideration in adopting a reform package that contains individual accounts would be the level of benefit adequacy achieved by the reform. To the extent that benefits are not adequate, the government may eventually have to provide additional revenues to make up the difference. Also, some degree of implementation and administrative complexity arises in virtually all proposed changes to Social Security. The greatest potential implementation and administrative challenges are associated with proposals that would create individual accounts.
These include, for example, issues concerning the management of the information and money flow needed to maintain such a system, the degree of choice and flexibility individuals would have over investment options and access to their accounts, investment education and transitional efforts, and the mechanisms that would be used to pay out benefits upon retirement. The Federal Thrift Savings Plan (TSP) could serve as a model for providing a limited number of options that reduce risk and administrative costs while still providing some degree of choice. However, a system of accounts that spans the entire national workforce and millions of employers would be significantly larger and more complex than the TSP or any other system we have in place today. Harmonizing a system that includes individual accounts with the regulatory framework that governs our nation’s private pension system would also be a complicated endeavor. However, the complexity of meshing these systems should be weighed against the potential benefits of extending participation in individual accounts to millions of workers who currently lack private pension coverage. Another important consideration for Social Security reform is assessing a proposal’s effect on national saving. Individual account proposals that fund accounts through redirection of payroll taxes or general revenue do not increase national saving on a first order basis. The redirection of payroll taxes or general revenue reduces government saving by the same amount that the individual accounts increase private saving. Beyond these first order effects, the actual net effect of a proposal on national saving is difficult to estimate due to uncertainties in predicting changes in future spending and revenue policies of the government as well as changes in the saving behavior of private households and individuals. 
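The first-order accounting argument above reduces to a one-line identity. A minimal sketch, with an arbitrary assumed dollar figure:

```python
# First-order effect of redirecting payroll taxes into individual accounts,
# as described above; the dollar amount is an arbitrary assumption.
redirected = 50.0                       # amount moved into accounts (billions)

delta_government_saving = -redirected   # surplus falls / deficit rises by it
delta_private_saving = redirected       # private account balances rise by it

# Before any behavioral or fiscal-policy response, national saving is flat.
delta_national_saving = delta_government_saving + delta_private_saving
print(delta_national_saving)  # 0.0
```

Any net change in national saving therefore has to come from the second-order responses the text goes on to describe: shifts in federal fiscal policy or in household saving behavior.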
For example, the lower surpluses and higher deficits that result from redirecting payroll taxes to individual accounts could lead to changes in federal fiscal policy that would increase national saving. On the other hand, households may respond by reducing their other saving in response to the creation of individual accounts. No expert consensus exists on how Social Security reform proposals would affect the saving behavior of private households and businesses. Finally, the effort to reform Social Security is occurring as our nation’s private pension system is also facing serious challenges. Only about half of the private sector workforce is covered by a pension plan. A number of large underfunded traditional defined benefit plans—plans where the employer bears the risk of investment—have been terminated by bankrupt firms, including household names like Bethlehem Steel, US Airways, and Polaroid. These terminations have resulted in thousands of workers losing promised benefits and have saddled the Pension Benefit Guaranty Corporation, the government corporation that partially insures certain defined benefit pension benefits, with billions of dollars in liabilities that threaten its long-term solvency. Meanwhile, the number of traditional defined benefit pension plans continues to decline as employers increasingly offer workers defined contribution plans like 401(k) plans where, like individual accounts, workers face the potential of both greater return and greater risk. These challenges serve to reinforce the imperative to place Social Security on a sound financial footing. Regardless of what type of Social Security reform package is adopted, continued confidence in the Social Security program is essential. This means that the American people must understand why change is necessary, what the reforms are, why they are needed, how they are to be implemented and administered, and how they will affect their own retirement income. 
All reform proposals will require some additional outreach to the public so that future beneficiaries can adjust their retirement planning accordingly. The more transparent the implementation and administration of reform, and the more carefully such reform is phased in, the more likely it will be understood and accepted by the American people. Social Security does not face an immediate crisis but it does face a large and growing financial problem. In addition, our Social Security challenge is only part of a much broader challenge that includes, among other things, the need to reform Medicare, Medicaid and our overall health care system. Today many retirees and near retirees fear cuts that would affect them in the immediate future while young people believe they will get little or no Social Security benefits in the longer term. I believe that it is possible to reform Social Security in a way that will ensure the program’s solvency, sustainability, and security while exceeding the expectations of all generations of Americans. In my view, there is a window of opportunity to reform Social Security; however, this window of opportunity will begin to close as the baby boom generation begins to retire. Furthermore, it would be prudent to move forward to address Social Security now because we have much larger challenges confronting us that will take years to resolve. The fact is, compared to addressing our long-range health care financing problem, reforming Social Security should be easy lifting. We at GAO look forward to continuing to work with this Committee and the Congress in addressing this and other important issues facing our nation. In doing so, we will be true to our core values of accountability, integrity, and reliability. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Social Security is the foundation of the nation's retirement income system, helping to protect the vast majority of American workers and their families from poverty in old age. However, it is much more than a retirement program and also provides millions of Americans with disability insurance and survivors' benefits. Over the long term, as the baby boom generation retires and as Americans continue to live longer and have fewer children, Social Security's financing shortfall presents a major program solvency and sustainability challenge that is growing as time passes. The Chairman and Ranking Member of the Senate Special Committee on Aging asked GAO to discuss the future of the Social Security program. This testimony will address the nature of Social Security's long-term financing problem and why it is preferable for Congress to take action sooner rather than later, as well as the broader context in which reform proposals should be considered. Although the Social Security system is not in crisis today, it faces serious and growing solvency and sustainability challenges. Furthermore, Social Security's problems are a subset of our nation's overall fiscal challenge. Absent reform, the nation will ultimately have to choose among escalating federal deficits and debt, huge tax increases, and/or dramatic budget cuts. GAO's long-term budget simulations show that to move into the future with no changes in federal retirement and health programs is to envision a very different role for the federal government. With regard to Social Security, if we did nothing until 2042, achieving actuarial balance would require a reduction in benefits of 30 percent or an increase in payroll taxes of 43 percent. 
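The two figures just quoted are consistent with each other: they describe the same financing gap, viewed once from the benefit side and once from the revenue side. A quick arithmetic check, derived from the 30 percent benefit-cut figure alone:

```python
# Consistency check on the 2042 figures cited above: if a 30 percent
# benefit cut restores actuarial balance, then revenues cover 70 percent
# of scheduled benefits, so paying them in full requires roughly a
# 1/0.70 - 1 increase in revenues. Illustrative arithmetic only.
benefit_cut = 0.30
revenue_share = 1 - benefit_cut            # revenues cover 70% of benefits

required_tax_increase = 1 / revenue_share - 1
print(round(required_tax_increase * 100))  # 43
```

The check recovers the 43 percent payroll-tax figure, confirming the two numbers are alternative expressions of one shortfall rather than independent estimates.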
In contrast, taking action soon will serve to reduce the amount of change needed to ensure that Social Security is solvent, sustainable, and secure for current and future generations. Acting sooner will also serve to improve the federal government's credibility with the markets and the confidence of the American people in the government's ability to address long-range challenges before they reach crisis proportions. However, financial stability should not be the only consideration when evaluating reform proposals. Other important objectives, such as balancing the adequacy and equity of the benefit structure, need to be considered. Furthermore, any changes to Social Security should be considered in the context of the broader challenges facing our nation, such as the changing nature of the private pension system, escalating health care costs, and the need to reform Medicare and Medicaid. |
Since 1955, the executive branch has encouraged federal agencies to obtain commercially available goods and services from the private sector when the agency determines that it is cost-effective. In 1966, OMB formalized this policy in its Circular A-76 and, in 1979, issued a handbook with procedures for determining whether commercial activities should be performed in-house, by another federal agency, or by the private sector. Administrative and legislative constraints from the late 1980s through 1995 resulted in a lull in awarding contracts under A-76 competitions. In 1995, when congressional and administration initiatives placed greater emphasis on public-private competitions to achieve economies and efficiency of operations, DOD gave competitive sourcing renewed emphasis. In our past work, we have found that DOD achieved savings through competitive sourcing, although it is difficult to estimate precisely the amount of savings. By including competitive sourcing as one of five governmentwide initiatives announced in August 2001, the administration directed agencies to implement competitive sourcing programs to achieve increased savings and to improve performance. The administration continues to advocate the use of competitive sourcing, which is addressed in the President’s budget for fiscal year 2005. Competitive sourcing has met with considerable controversy in both the public and private sectors. Each sector expressed concern that, in general, the process was unfair and did not provide for holding the winner of the competition accountable for performance. In response to this controversy, in 2000, the Congress mandated a study of the government’s competitive sourcing process under A-76—a study conducted by the Commercial Activities Panel, chaired by the Comptroller General of the United States. The panel included representatives from OMB, DOD, the Office of Personnel Management, private industry, academia, a trade association, and unions. 
In April 2002, the panel released its report with recommendations that included 10 sourcing principles to provide a better foundation for competitive sourcing decisions in the federal government (see app. II). In particular, the panel stressed the importance of linking sourcing policy with agency missions, promoting sourcing decisions that provide value to the taxpayer regardless of the service provider selected, and ensuring greater accountability for performance. The panel also addressed an area of particular importance for all affected parties—how the government’s sourcing policies are implemented. In this regard, one of the sourcing principles was that the government should avoid arbitrary numerical or full-time equivalent (FTE) goals. This principle is based on the concept that success in government programs should be measured in terms of providing value to the taxpayer, not the size of the in-house or contractor workforce. The panel, in one of its 10 sourcing principles, also endorsed creating incentives and processes to foster high-performing, efficient, and effective organizations and continuous improvement throughout the federal government. On November 6, 2003, the Comptroller General hosted a forum to discuss what it means for a federal agency to be high-performing in an environment where results and outcomes are increasingly accomplished through partnerships that cut across different levels of government and different sectors of the economy. There was broad agreement among participants at the forum on the key characteristics and capabilities of high-performing organizations, which are organized around four broad themes. These four themes are (1) clear, well-articulated, and compelling missions; (2) strategic use of partnerships; (3) a focus on the needs of clients and customers; and (4) strategic management of people. 
The competitive sourcing process starts with agencies developing inventories of their commercial positions in accordance with the Federal Activities Inventory Reform (FAIR) Act of 1998. Additionally, OMB requires agencies to identify activities that are inherently governmental, as well as commercial positions that are exempt from competition because of legislative prohibitions, agency restructuring, or other reasons. Only activities classified as “commercial” and not otherwise exempt are potentially competable. In the 2002 FAIR Act inventories, the proportion of competable commercial, non-competable commercial, and inherently governmental FTE positions varied widely among the agencies we reviewed. Governmentwide, competable commercial positions in 2002 accounted for approximately 26 percent of the total federal workforce. Except for the Education Department’s 62 percent, the percentage of competable commercial positions in each of our selected agencies was less than 50 percent of the agency’s total FTEs (see app. III). After agencies identify competable commercial positions under the FAIR Act and OMB guidance, they select from these positions which ones to compete. Resulting public-private competitions are guided by OMB Circular A-76. In May 2003, OMB released a revised Circular A-76. Under this revised circular, agencies must use a standard competition process for functions with more than 65 FTEs. As part of the standard process, agencies identify the work to be performed in a performance work statement, establish a team to prepare an in-house proposal to perform the work based on a “most efficient organization” (MEO), and evaluate that proposal along with those submitted by private companies and/or public reimbursable sources. For activities with 65 or fewer FTEs, agencies may use either a streamlined or standard competition. Streamlined competitions require fewer steps than the standard process and enable agencies to complete a cost comparison more quickly. 
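The competition-type rule in the revised Circular A-76 can be sketched as a simple threshold check. This is a hypothetical helper for illustration, not OMB code:

```python
def competition_type(ftes: int, prefer_streamlined: bool = True) -> str:
    """Hypothetical helper sketching the May 2003 Circular A-76 rule:
    functions with more than 65 FTEs must use a standard competition;
    at 65 or fewer FTEs, agencies may choose streamlined or standard."""
    if ftes > 65:
        return "standard"
    return "streamlined" if prefer_streamlined else "standard"

print(competition_type(120))        # standard (mandatory above 65 FTEs)
print(competition_type(40))         # streamlined (agency's option)
print(competition_type(40, False))  # standard (also permitted)
```

The asymmetry is the point: small functions get a choice between the faster streamlined process and the full standard process, while large functions always get the full treatment.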
When the President announced competitive sourcing as one of five governmentwide management agenda items in August 2001, few agencies other than DOD had an established competitive sourcing infrastructure—a key component of OMB’s strategy for institutionalizing competitive sourcing. Few of the other departments and agencies that we reviewed had competitive sourcing experience. Since that time, all six civilian agencies we reviewed have established a basic competitive sourcing program infrastructure. Leadership involvement and an established infrastructure have enabled each agency that we reviewed to develop competitive sourcing plans and complete a number of initial competitions. Interagency forums for sharing information also have been established. Although they lack DOD’s A-76 experience, the civilian agencies we reviewed have made significant progress toward establishing a competitive sourcing infrastructure with such actions as establishing an office, hiring staff, obtaining contractor support, creating policies and procedures, and providing training to agency staff involved in the competitive sourcing process. Table 1 provides an overview of civilian agency infrastructure development. In addition, DOD, which has the most competitive sourcing experience in the federal government, has issued numerous policies, procedures, and guidance for implementing OMB’s Circular A-76. DOD also has established a management structure to oversee the department’s A-76 activities. In carrying out its competitive sourcing program, DOD uses both in-house personnel and contractors to provide assistance within the department in developing performance work statements and MEOs. In response to our previous recommendation, DOD also has established a Web site to share competitive sourcing knowledge and experience. This Web site is available governmentwide. 
The site contains resources such as A-76 policy and procedures, best practices, sample documents, bid protests, and links to other sites with information on Circular A-76. The civilian agencies we reviewed completed their initial rounds of competitive sourcing studies in fiscal years 2002 and 2003 (see app. IV). Based on data given to us by five of the six civilian departments, 602 studies were completed in fiscal year 2003. Of these 602 studies, 363 were streamlined competitions and 130 were direct conversions to performance by a contractor. In addition, DOD completed 126 studies, including 54 direct conversions and 7 streamlined competitions. Collectively, these studies involved over 17,000 FTEs, with almost 57 percent of the FTEs studied by DOD and the remaining 43 percent studied by the civilian agencies. According to agency data, in-house teams won many of the competitions, retaining almost 76 percent of the FTEs covered by the studies. (See app. V for details on the outcome of these studies.) Although agencies have been able to complete these studies while establishing their infrastructures, it is too early to assess the impact of the studies in terms of efficiencies or performance improvements achieved. A number of initiatives have been undertaken to share competitive sourcing information across agencies. In addition to DOD’s Web site, at least two interagency forums have been established to facilitate interagency information sharing. For example, staff working in competitive sourcing offices in various agencies and subagencies meet monthly at the civilian agencies’ competitive sourcing working group to exchange ideas and information. The Federal Acquisition Council—composed of senior acquisition officials in the Executive Branch—also promotes acquisition-related aspects of the President’s Management Agenda by providing a forum for monitoring and improving the federal acquisition system. 
The Council has published a guide on frequently asked questions and a manager’s guide to competitive sourcing. In addition, OMB is developing a competitive sourcing data tracking system to provide consistent information and to facilitate the sharing of competitive sourcing information by allowing agencies to identify planned, ongoing, and completed competitions across the government. According to OMB officials, future refinements to the system may allow agencies to track and manage their own sourcing activities—a problem for most agencies—as well as provide OMB with consistent information. OMB plans to use the system to monitor agency implementation of the competitive sourcing initiative and generate more consistent and accurate statistics, including costs and related savings, for reporting to the Congress. Despite their progress in establishing a competitive sourcing infrastructure and conducting initial competitions to varying degrees, the agencies we reviewed continue to face significant challenges in four areas. First, agencies have been challenged to develop and use FAIR Act inventory data to identify and group positions for competition. Second, agencies are operating in a continually changing environment and under OMB guidance focused more on meeting milestones than on achieving desired outcomes. Third, agencies have reported that they lack the staff needed to carry out the numerous additional tasks required under the new Circular A-76. Finally, agencies have reported that they lack the funding needed to cover the substantial costs associated with implementing their programs. The development of accurate FAIR Act inventories is the foundation for determining which functions agencies compete. Agencies reported difficulty in classifying positions as inherently governmental or commercial and in applying OMB-assigned codes to categorize activities, making it challenging for them to identify potential candidates for competitions. 
This has been a persistent problem as we have reported in the past. Despite changes made to OMB’s guidance for constructing FAIR Act inventories, the guidance has not alleviated the difficulties some agencies have had in developing and maintaining useful inventory data. Under the FAIR Act and OMB guidance, agencies annually review and classify positions as either inherently governmental or commercial. This classification process is done using an OMB-provided coding schedule containing nearly 700 functional codes in 23 major categories, such as health services, grants management, and installation services. Civilian agencies are having difficulty applying these functional codes, which were developed by DOD. While intended to promote consistency, the codes are not always applicable to civilian agencies, requiring some to create supplemental codes to match their missions. As we have previously reported, selecting and grouping functions and positions to compete can be difficult. For example, the Army has determined that many functions, such as making eyeglasses for troops located in a war zone, are core to its mission even though this function may not be classified as inherently governmental when performed in the United States. Also, some functions may involve both “commercial” and “inherently governmental” tasks. While agencies have had difficulty classifying mixed positions, OMB’s guidance allows agencies to take a variety of approaches to address this difficulty. For example, according to agency officials, the Internal Revenue Service classifies mixed positions on a case-by-case basis considering how critical the position is to its mission, not just the percentage of tasks related to that position that may be inherently governmental or commercial. The process also can be resource intensive. 
For example, according to agency officials, to determine whether positions should be classified as inherently governmental or commercial, the National Park Service—the largest bureau in the Department of the Interior—used an employee team of approximately 30 individuals that represented all occupational areas, as well as its human resources and acquisition staff. The team used the analysis, in conjunction with payroll system data showing employee time usage, to determine the number of commercial and inherently governmental FTEs. Accuracy of inventories depends on agency classification of positions, based on OMB guidance, as well as consistent OMB review of inventories. OMB has updated its FAIR Act inventory guidance annually to address issues identified by agencies (see app. VI) and it consults with agencies to resolve issues identified. For example, in April 2001, OMB created a new requirement to report civilian positions designated as inherently governmental. OMB’s guidance gives agencies considerable latitude in preparing their inventories to determine if an activity is commercial. OMB officials told us they have provided training on Circular A-76 procedures to its budget examiners, who act as liaisons between OMB and each participating agency. The examiners address questions and provide guidance on an agency-by-agency basis. OMB does not have formal written guidance for reviewing FAIR Act data. Examiners provide verbal guidance on an on-going basis to agencies and discuss concerns agencies have with the FAIR Act and the related competitive sourcing program. Once agencies submit their inventories, OMB officials review the inventories looking for “red flags”—that is, deviations from the norm, such as one agency listing a position as inherently governmental while others classify the same position as commercial—and then consult with agency officials as necessary on these deviations. 
However, a number of competitive sourcing officials at two interagency forums expressed concern about the process. For example, one official told us that an OMB program examiner said there were too many function codes in one agency’s inventory. Then, after the agency resubmitted its inventory, the same examiner said the inventory had too few codes. An official from another agency told us that its OMB examiners did not appear familiar with OMB’s own guidance for applying the function codes. Given the lack of formal written OMB guidance on reviewing the FAIR Act inventory data, there is little assurance that OMB’s review of inventories will be consistent across agencies. According to a number of agency officials, implementation of OMB guidance is further complicated due to time constraints. OMB inventory guidance is typically issued in the spring, and agency inventories are due to OMB by June 30. Officials contend that more time is needed to properly implement the guidance. In response, OMB officials pointed out that the basic guidance for developing inventories is set forth in Circular A-76 and agencies can undertake significant steps to prepare their inventories based on the Circular’s guidance. The ultimate goal of the competitive sourcing initiative is to improve government performance and efficiency. To date, however, OMB’s competitive sourcing guidance to federal agencies has focused more on targets and milestones for conducting competitions than on the outcomes the competitions are designed to produce: savings, innovation, and performance improvements. Although recent OMB guidance has stressed the need for agencies to be more strategic, the emphasis in the guidance is still more on process than results. The President’s Management Agenda established expected results for the competitive sourcing initiative to encourage innovation, increase efficiency, and improve performance of agencies. 
The Commercial Activities Panel similarly stated that the success of government programs, such as competitive sourcing, should be measured by the results achieved in terms of providing value to the taxpayer. Since the inception of the competitive sourcing initiative in 2001, agencies have faced continual changes to OMB’s targets and guidance for conducting public-private competitions. OMB initially set a target for agencies to compete or directly convert at least 5 percent of their full-time equivalent commercial positions by the end of fiscal year 2002, and an additional 10 percent by the end of fiscal year 2003. It also set a long-term target for agencies to compete at least 50 percent of commercial FTEs. OMB later moved to agency-specific plans that reflect each agency’s own mission and workforce mix. OMB also developed a traffic light system (red, yellow, green) for evaluating the progress agencies are making in implementing these plans. Table 2 shows the chronology of these changes. As shown in table 2, in December 2003, OMB released a memorandum with guidance on developing competitive sourcing plans that would receive a “green” rating under its traffic light evaluation system (see app. VII). The guidance notes the need for a long-range vision, strategic action by agencies, and public-private competitions tailored to the agency’s unique mission and goals. The memorandum also advises agencies to include in their plans their general decision-making process for selecting activities to compete, identification of activities to be competed, potential constraints, and plans for handling activities suitable for competition that the agency does not intend to compete. 
Neither OMB’s initial FTE-based goals nor its revised competitive sourcing goals and traffic light evaluation system calls for agencies to assess how their plans for competitive sourcing could achieve the broader improvements envisioned by the President’s Management Agenda or the Commercial Activities Panel. In this regard, the Panel said that arbitrary competition goals should be avoided. In testimony before the Congress, the Comptroller General has stated that OMB’s initial competition targets were inappropriate. Similarly, OMB’s revised goals continue to emphasize process milestones, such as competitions completed, more than enhancing value through performance improvements and efficiencies. For example, for an agency to receive a “green” rating on OMB’s scorecard, it must have developed an OMB-approved green competition plan, have publicly announced standard competitions in accordance with the schedule in its green plan, and have completed 95 percent of streamlined competitions in 90 days. The emphasis throughout OMB’s most recent guidance is similarly more on process than on results. Agencies have used a range of criteria to select positions for competition. For most agencies, selection criteria have been based on the size and composition of the workforce, such as attrition rates, skill needs, and difficulty in hiring, as well as the agency’s capability to manage the competitions. Because these agencies have focused on meeting targets to announce and complete competitions, they have not assessed broader issues, such as weighing potential improvements against the costs and risks associated with performing the competitions. Some agencies, however, used a broader set of factors, such as the function’s contribution to the mission, risks associated with the function being contracted out, and the potential return on investment. (See app. VIII for further discussion on the criteria these agencies have used to select positions for competition.) 
Officials in most of the agencies we reviewed expressed concern that they lack sufficient staff to perform the additional tasks included in the recently revised Circular A-76. To address this challenge, the Federal Acquisition Council is currently studying agency staffing and skill requirements. As we previously reported, agencies need to build and maintain capacity to manage competitions, build the in-house MEO, and oversee the implementation of competition decisions—skills that the Commercial Activities Panel recognized may require additional capacity. Adding to this complexity is agencies’ need to consider their competitive sourcing staffing capacity in the context of their strategic human capital management, an area we have identified as high-risk governmentwide and one of the five President’s Management Agenda governmentwide initiatives. For example, we recently reported that DOD’s civilian human capital strategic plan does not address the respective roles of civilian and contractor personnel or how DOD plans to link its human capital initiatives with its sourcing plans, such as efforts to outsource non-core responsibilities. Finally, ensuring and maintaining employee morale is also a challenge for agencies. OMB’s revised Circular A-76 emphasizes the following key competitive sourcing phases: preparing an inventory of an agency’s activities, conducting preliminary planning, announcing and conducting the competition using either a streamlined or standard competition process, implementing the performance decision, and conducting post-competition accountability activities (see fig. 1). Each phase involves a number of tasks. According to agency officials, many of these tasks require skills and human capital resources beyond those currently available. As we reported in December 2002, in the current environment, acquisition staff can no longer simply be purchasers or process managers. 
Rather, they need to be adept at analyzing business problems and helping to develop acquisition strategies. For example, human capital, job, and market analysis skills are needed to inventory agency activities; benchmarking and strategic and workforce planning skills are needed to conduct the preliminary planning; organizational analysis, contract management, and cost analysis skills are needed to conduct competitions; and financial management and oversight skills are needed in the implementation and post-competition phase. Some skills, such as labor relations and information technology, are required throughout the competitive sourcing process. Despite these additional personnel requirements, many department-level offices in the civilian agencies we reviewed have only one or two full-time staff to complete FAIR Act inventories, interpret new laws and regulations, and oversee agency selection of positions to compete and the competitions. Officials at the six civilian agencies we reviewed stated it would be helpful to have additional personnel well versed in the use of Circular A-76. Even DOD, the leader among federal agencies in competitive sourcing and A-76, may face human capital challenges in running its competition program. According to a cognizant Army competitive sourcing official who has analyzed this issue, the Army’s implementation of the revised Circular A-76 will require approximately 100 to 150 additional personnel, including attorneys, human resources specialists, and contracting officials. A final determination on Army staffing requirements and capabilities has not been made. As we reported in June 2003, building the capacity to conduct competitions as fairly, effectively, and efficiently as possible will likely be a challenge for all agencies, but particularly those that have not previously invested in competitive sourcing. 
The Commercial Activities Panel also recognized in its recommendations that accurate cost comparisons, accountability, and fairness would require high-level commitment from leadership; adequate, sustained attention and resources; and technical and other assistance in structuring the MEO, as well as centralized teams of trained personnel to conduct the cost comparisons. According to officials of the Federal Acquisition Council, its competitive sourcing working group is now inventorying the agency resources, skill sets, and training needs required to address this challenge. At the same time, agencies we reviewed are challenged to maintain employee morale. While most agencies have established vehicles for communicating their competitive sourcing goals internally—such as work groups and Web sites—officials from OMB report that it is difficult to convince employees that the current competitive sourcing program is designed to create value and improve efficiency, not to reduce the size of the federal workforce—as was the case with past competitive sourcing efforts. Funding their competitive sourcing programs has also been cited as a challenge for agencies. Officials in some of the agencies we reviewed cited limited funding as a barrier to implementing their competitive sourcing programs. Such program costs can be significant—at both the department and agency levels. For example, USDA reported spending a total of $36.3 million in fiscal years 2002 and 2003 on its competitive sourcing program. The Forest Service, part of USDA, accounted for $18.7 million of that total. In fiscal year 2003, NIH reported spending approximately $3.5 million on contract support for two competitions involving more than 1,400 positions. The National Park Service’s financial needs prompted the agency to ask the Congress for permission to reprogram $1.1 million to help pay for its competitive sourcing program. 
Other agency officials stated that funding to finance their competitive sourcing initiatives was taken from other agency activities. As we have previously reported, DOD has also been challenged to ensure adequate funding for implementing competitive sourcing under Circular A-76. Finally, in August 2003, the Department of Veterans Affairs terminated all competitive sourcing studies after its General Counsel determined that the prohibition on using funds from the three health care appropriation accounts under 38 U.S.C. 8110(a)(5) applied. According to officials from most of the agencies we reviewed, they have funded their competitive sourcing programs using existing funds. However, some officials told us that OMB recently instructed their agencies to include a line item in their fiscal year 2005 budget requests for their competitive sourcing programs. Doing so should provide agencies with a more stable fiscal environment in which to plan and conduct competitions. Several agencies have developed strategic and transparent competitive sourcing approaches by integrating their strategic and human capital plans with their competitive sourcing plans—an approach encouraged by the Commercial Activities Panel. These approaches go beyond the requirement to identify positions for competition called for in OMB’s initial FTE targets, employing broader functional assessments of FAIR Act inventories and more comprehensive analysis of factors such as mission impact, potential savings, risks, current level of efficiency, market conditions, and current and projected workforce profiles. Not only do these agencies’ processes identify viable activities for competition, but they also provide greater transparency in this critical part of the process. Some of these approaches are summarized below. Appendix VIII contains a more detailed discussion of these approaches. 
While it is too early to tell whether the various agencies’ approaches will be effective, a key to success will be learning from them and adapting them to each agency’s unique circumstances. OMB has recognized the challenges that agencies have faced in implementing their competitive sourcing programs and recently publicly endorsed agencies’ use of a more strategic approach to competitive sourcing. For example, OMB supported the innovative approaches some agencies have taken to ensure sound planning and effective use of public-private competitions. OMB further stated that consulting with program, human resources, acquisition, budget, and legal professionals facilitates effective communication and a broad-based understanding of competitive sourcing actions within the agency. Officials from HHS’ National Institutes of Health told us they used a steering committee of senior-level officials to determine the activities to be competed under its competitive sourcing program. This committee used a systematic approach that considered FAIR Act inventory data, the knowledge and experience of program managers, and a decision support software application to capture the judgments of managers familiar with the commercial activity under study. The software application used a set of evaluation questions that assessed a function with regard to NIH’s mission, human capital, and risk, and it recorded and scored managers’ responses. Committee officials then reviewed the scores produced by the software, considering factors such as (1) the activity’s impact on NIH’s mission, (2) costs, (3) socioeconomic impacts, and (4) potential advantages to competing the activity. NIH officials also stated that once a decision has been made to compete an activity, consideration is given to re-engineering the applicable business process, whether the activity remains in-house or undergoes a public-private competition. 
Officials from the Internal Revenue Service, a bureau of the Department of the Treasury, told us they used business case analysis and an enterprisewide approach to determine whether a commercial function has the potential to create significant business process improvements and a sizable return on investment. The business case analysis, which is completed in approximately 4 to 6 months, calculates the economic benefits of potential alternatives based on IRS responses to critical questions such as: Is the function core to the mission? What does the function cost? Is there potential to reduce cost and/or improve productivity by competing the function? How does the function fit into other current or planned strategic projects? An IRS competitive sourcing official cited several benefits from the business case approach used during the planning stage: up-front consideration of major decision variables such as economics, market research, and risk; involvement of top-level management and leadership; the ability to test candidate projects against strategic goals and performance improvement objectives; and a low investment of resources to qualify or reject an activity as a competitive sourcing project. The Army’s “core, non-core concept” for assessing functions employed a more strategic approach. Initially, the Army’s approach for classifying positions for its inventory focused on determining whether functions were core or non-core to the agency’s mission. However, the Army found that such a distinction did not, by itself, provide a good basis for a decision, and that other factors, such as risk and operational considerations, also must be considered. A cognizant Army official told us that focusing on positions does not consider how well the function is being performed or who should perform the function—military, civilian, contractor, or some combination of these. 
In contrast, the Army learned that looking at broader functional areas, such as utilities and family housing, rather than at positions, should allow it to better identify candidates for competition. For example, functions such as childcare and equal employment opportunity operations, while not inherently governmental, are exempt from competitive sourcing because they are important for reasons such as military morale and quality of life. According to a DOD competitive sourcing official, the Army’s approach is evolving and is unique within DOD. Officials at four civilian agencies in our review expressed concerns about developing their inventories similar to those the Army official raised. These officials told us that, given the investment of time and resources required to develop an inventory, agencies should focus on mission-related functions rather than individual positions. The Department of Education’s “One-ED” initiative also used strategic approaches in identifying candidates for competition. One-ED covers all elements of major departmental operations and seeks management changes through integrated human capital reform, competitive sourcing, and organizational restructuring. As part of its broader approach, the department developed its FAIR Act inventory by analyzing key processes in the activities under consideration. It then used the results of this analysis to restructure positions as either commercial or inherently governmental and to frame a broader analysis of the function’s activities. The ultimate success of the administration’s competitive sourcing initiative hinges on the extent to which agencies achieve the efficiencies, innovation, and improved performance envisioned by the President’s Management Agenda. Successful implementation of this initiative requires results-oriented goals and strategies; clear criteria and analysis to support agency decisions; and adequate resources. 
OMB, in its leadership role, has a difficult task in guiding this initiative and must balance the need for transparency and consistency with the flexibility agencies need in implementing significant changes to their operations. While OMB is addressing the funding and human capital challenges that agencies face, it needs to ensure that the FAIR Act inventory and goal-setting process is more strategic and more helpful to agencies in carrying out their competitive sourcing responsibilities. Recognizing that agency missions, organizational structures, and workforce composition vary widely, the Commercial Activities Panel provided a framework of sourcing principles that serves as an implementation roadmap for this initiative. However, OMB’s current emphasis on meeting implementation milestones and targets does not fully align with these principles or ensure achievement of the ultimate goal of increasing efficiency and improving the performance of commercial activities. OMB needs to work with agencies to ensure their long-range plans are strategically focused. An approach focused on achieving improvement outcomes would help direct agency efforts and better achieve the results envisioned at the outset of the competitive sourcing initiative. 
To complement efforts already underway that address funding and human capital challenges and to help agencies realize the potential benefits of competitive sourcing and ensure greater transparency and accountability, we recommend that the Director of OMB take the following three actions: ensure greater consistency in the classification of positions as commercial or inherently governmental when positions contain a mix of commercial and inherently governmental tasks by reviewing current guidance and developing additional guidelines, as necessary, for agencies and OMB examiners; work with agencies to ensure they are more strategic in their sourcing decisions and are identifying broader functional areas and/or enterprisewide activities, as appropriate, for possible public-private competition; and require agencies to develop competition plans that focus on achieving measurable efficiency and performance improvement outcomes. We provided a draft of this report to OMB and the seven agencies for their review and comment. OMB provided oral comments concurring with our three recommendations, but disagreed with our conclusion that OMB’s recent guidance on competitive sourcing emphasized process more than results. Based on our review of the factors OMB considers in its review of agency plans, we continue to believe that factors such as the agency’s ability to conduct competitions are emphasized more than results such as expected savings and the potential for improved performance as called for in the President’s Management Agenda. On the first recommendation, OMB officials concurred that there needs to be consistency in the classification of positions and stated that OMB will review its current guidance in light of the findings in this report to determine how best to help agencies that have had difficulties in classifying their activities. OMB officials stated that they would consider additional guidelines as necessary. 
OMB officials, while agreeing with the second and third recommendations, emphasized that long-range “green” plans are intended to ensure that agencies think strategically in choosing activities for review and routinely take into account the type of factors that will ensure successful application of competition. OMB reiterated that before an agency may receive a green score on the President’s Management Agenda scorecard, the agency must have an approved green competition plan. OMB stated that its evaluation of plans will not be one-dimensional, but instead will account for each agency’s unique mission and workforce needs and demonstrated ability to conduct reviews in a reasonable and responsible manner. OMB will also review agency plans to understand how the agency has selected activities and their potential for savings and performance improvements. However, while OMB’s guidance mentions the importance of improving the cost effectiveness and quality of commercial operations, we note that the guidance does not cite the potential for savings or improved performance as factors OMB will look for when reviewing agency green plans. The Department of Agriculture and the Department of the Interior concurred with our report. The Department of the Treasury stated that the report’s recommendations were timely. The Department of Education and DOD did not have any comments. The Department of the Interior, HHS, OMB, and VA provided technical comments, which were incorporated as appropriate. We are sending copies of this report to other interested congressional committees; the Director, Office of Management and Budget; the Administrator, Office of Federal Procurement Policy; and the Secretaries of Agriculture, Defense, Education, Health and Human Services, the Interior, the Treasury, and Veterans Affairs. We also will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you have any questions about this report, please contact me at (202) 512-4841 or John K. Needham at (202) 512-5274. Other major contributors to this report were Robert L. Ackley, Christina M. Cromley, Thomas A. Flaherty, Rosa M. Johnson, Nancy T. Lively, William M. McPhail, Karen M. Sloan, Marilyn K. Wasleski, and Anthony J. Wysocki. To describe the progress DOD and the civilian agencies have made in establishing the competitive sourcing program in response to the President’s Management Agenda, we interviewed officials at the Department of Agriculture; DOD; and the Departments of Education, Health and Human Services, the Interior, the Treasury, and Veterans Affairs. We selected the agencies based on the number of commercial positions in their 2001 FAIR Act inventories. The agencies selected represent 84 percent of the 2002 FAIR Act inventory of commercial positions among the 26 executive branch agencies implementing the President’s Management Agenda. We selected the Department of Education because OMB highlighted its unique approach to implementing the competitive sourcing initiative. We obtained and reviewed pertinent documents from the seven government agencies. We also met with members of the Civilian Agency Competitive Sourcing Working Group and executive members of the Federal Acquisition Council and its Working Group on Competitive Sourcing, and we attended several competitive sourcing conferences and workshops. We reviewed statutes and circulars governing this program and reports on competitive sourcing. We also reviewed reports on related subjects such as human capital, costs, and savings that were issued by academic and independent research organizations. To identify what, if any, challenges exist for the agencies in implementing the competitive sourcing initiative, we interviewed senior-level officials responsible for the seven agencies’ competitive sourcing programs. 
In identifying the challenges agencies face, we also reviewed OMB and agency guidance as well as the criteria and data used to develop inventories and select the activities to study and compete. We discussed management expertise, training requirements, planned contract support and contract oversight, and the timeline and budget impact of achieving fiscal year 2003 goals, as well as intra-agency interactions, including those with budget and human resources offices. To identify the strategies agencies are using to select activities for competition, we discussed at length the alternatives and strategies agencies used to take a more strategic approach and obtained contractor documents, where available. These studies, conducted in support of a “compete/no compete” decision, gave us insight regarding decision criteria, competitive sourcing strategies, and the costs involved. We did not evaluate savings from completed competitions during this review because the program is new and such data are limited. The FAIR Act inventory data used in this report have been reviewed by OMB, reported to Congress, and made available to the public and cover the years 2000, 2001, and 2002. We did not independently verify this information. OMB-reviewed data for 2003 were not available for all agencies at the time of our review. We performed our review between April and December 2003 in accordance with generally accepted government auditing standards. In 2000, Congress enacted legislation creating the Commercial Activities Panel and mandating a study of the government’s competitive sourcing process. The Commercial Activities Panel’s mission was to devise a set of recommendations that would improve the government’s sourcing framework and processes so that they would reflect a balance among taxpayer interests, government needs, employee rights, and contractor concerns. In April 2002, the panel released its report with recommendations that included 10 sourcing principles to guide federal sourcing policy. 
The panel believed that federal sourcing policy should support agency missions, goals, and objectives; be consistent with human capital practices designed to attract, motivate, retain, and reward a high-performing federal workforce; recognize that inherently governmental functions and certain others should be performed by federal workers; create incentives and processes that foster high-performing, efficient, and effective organizations throughout the federal government; be based on a clear, transparent, and consistently applied process; avoid arbitrary FTE or other arbitrary numerical goals; establish a process that, for activities that may be competitively sourced, would permit public and private sources to participate in competitions for work currently performed in-house and work currently contracted to the private sector as well as new work; ensure that competitions are conducted fairly, effectively, and efficiently; ensure that competitions involve a process that considers both quality and cost factors; and provide for accountability in all sourcing decisions. Appendix III: 2002 FAIR Act Inventories. According to DOD, its FAIR Act inventory numbers do not include military personnel, foreign nationals, depot-level maintenance and repair commercial activities, the DOD Inspector General, civilian performance of any commercial activities that have already been contracted out, and the DOD intelligence community. [A table showing positions studied (FTEs) and the results of completed studies (FTEs), by agency, is not reproduced here.] Interior provided only aggregated data for 2002 and 2003. Over this 2-year period, 2,483 FTEs were studied; of those FTEs, 968 remained in-house and 1,515 were contracted out. Interior’s data represent the bureaus remaining after the transfer made to the Department of Homeland Security. Actions on 3,449 FTEs are underway; some are in the planning stage, while others await senior management approval before results are announced. Management Services: 
This study began in 1999, the competition was announced in 2001, and the contract was awarded in August 2003. In addition, VA did not initiate any studies in 2002. This activity had 270 FTEs at the time the study was announced in 1999. The Most Efficient Organization provided for 120 FTEs if the work was retained in-house. VA awarded the contract to the private sector in 2003. The first submission of inventory data was in 1999. Directed agencies to also submit a separate report listing their inherently governmental positions. Directed agencies to provide a single inventory submission that reflects both the agency’s inherently governmental FTE positions and its commercial FTE positions. Once reviewed by OMB, agencies must provide a listing of their commercial FTE positions to the Congress and the public. Instructed agencies that they should anticipate the possibility that after their list of inherently governmental positions has been reviewed, it too may be released to the public. Directed agencies to submit their FAIR Act inventory in two parts—(1) a list of commercial activities performed by FTE civilian personnel and (2) a list of inherently governmental activities performed by FTE civilian personnel. After OMB reviews these lists, both will be released to the Congress and the public. Instructed agencies in developing their 2003 inventories to justify in writing all commercial positions that they consider as not being appropriate for private sector performance. 
Provided guidance for preparing inventories; directed agencies to annually submit (1) inventories of their commercial activities performed by government personnel, (2) inventories of inherently governmental activities performed by government personnel, and (3) a summary report that identifies aggregate commercial and inherently governmental inventory data. (Contained in revised Circular A-76) Instructed agencies to justify in writing all inherently governmental positions and all commercial positions classified as not appropriate for private sector performance. (Contained in revised Circular A-76) Appendix VII: OMB Scorecard Criteria for the Competitive Sourcing Initiative. To earn a “yellow” rating, an agency must have: an OMB-approved “yellow” competition plan to compete commercial activities available for competition; completed one standard competition or publicly announced standard competitions that exceed the number of positions identified for competition in the agency’s yellow competition plan; since January 2001, completed at least 10 competitions (no minimum number of positions required per competition); in the past two quarters, completed 75% of streamlined competitions in a 90-day timeframe; and, in the past two quarters, canceled less than 20% of publicly announced standard and streamlined competitions. To earn a “green” rating, an agency must have: an OMB-approved “green” competition plan to compete commercial activities available for competition; publicly announced standard competitions in accordance with the schedule outlined in the agency’s “green” competition plan; in the past year, completed 90% of all standard competitions in a 12-month timeframe; in the past year, completed 95% of all streamlined competitions in a 90-day timeframe; in the past year, canceled fewer than 10% of publicly announced standard and streamlined competitions; and OMB-approved justifications for all categories of commercial activities exempt from competition. 
Several agencies used approaches that considered and balanced multiple agency interests within the competitive sourcing environment. The following discussion provides a more detailed description of these approaches. NIH has developed a more strategic competitive sourcing approach that includes the use of software and the integration of the agency’s human capital and strategic plans. According to NIH officials, in 2002, NIH appointed a Commercial Activities Steering Committee, composed of 14 senior-level officials, to work with NIH’s 27 centers to determine the activities to be competed under its competitive sourcing program. The committee used FAIR Act inventory data, knowledge and experience, and a decision support software application that provides objective and analytical results. The software enabled managers to respond to NIH-developed questions related to mission effectiveness, human capital, and demand and risk; for example, the questions addressed the stability of demand for the function and the exposure of staff in the function to sensitive information if the function were outsourced. The software assigns weights to each response—using NIH-developed values—and generates scores for each activity under study. Committee officials then review the scores, considering factors such as (1) the activity’s impact on NIH’s mission, (2) costs, (3) socioeconomic impacts, and (4) potential advantages to competing the activity. NIH officials also stated that once a decision has been made to compete an activity, consideration might be given to re-engineering the applicable business process, whether it remains in-house or undergoes a public-private competition. Once the Steering Committee has made its competitive sourcing decision, the Commercial Activities Review Team, with contractor assistance, implements the committee’s decisions. 
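The weighting-and-scoring step described above can be illustrated with a minimal sketch. The question names, weights, and 1-to-5 response scale below are invented for illustration; the report does not disclose NIH's actual questions or values.

```python
# Hypothetical sketch of weighted scoring in a decision support application.
# All question names, weights, and the 1-5 response scale are assumptions.

# A manager's responses for one activity under study (1 = low, 5 = high).
responses = {
    "mission_impact": 4,        # how central the activity is to the mission
    "demand_stability": 3,      # how stable demand for the function is
    "sensitive_information": 2, # staff exposure to sensitive information
}

# Agency-developed weights emphasize some questions over others.
weights = {
    "mission_impact": 0.5,
    "demand_stability": 0.3,
    "sensitive_information": 0.2,
}

def activity_score(responses, weights):
    """Weighted average of a manager's responses for one activity."""
    total_weight = sum(weights.values())
    return sum(responses[q] * w for q, w in weights.items()) / total_weight

print(round(activity_score(responses, weights), 2))
```

A committee could then rank activities by such scores before applying the judgmental factors (mission impact, costs, socioeconomic effects) the report describes.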
Further, in an effort to add rigor to its competitive sourcing process, NIH in a recent competition used a contractor to mitigate potential risks. NIH convened a panel of nine experts from the Georgia Institute of Technology to analyze and evaluate a request for proposal and its related performance work statement concerning real estate property management services at six installations—the estimated value of which exceeds $100 million each year. In light of the risks it could encounter if the contract were deficient from a scope, technical, business, and/or legal standpoint, NIH asked the panel to review the request for proposal developed in-house and determine whether the contract documents were properly conceived, logically organized, clearly written, and sufficiently complete and accurate. As a result of its analysis, the panel identified several areas where the request for proposal and performance work statement subjected NIH to risk. NIH officials reviewed these risks and made appropriate changes to the documents. Finally, NIH officials sought advice from and coordinated with HHS’ Office of Strategic Management and Planning and Human Capital Office to link their competitive sourcing program to HHS’ strategic and human capital plans. According to an IRS official, IRS, a bureau within the Department of the Treasury, developed a strategic approach to competitive sourcing, applying a business case analysis methodology used by leading industry firms to determine whether commercial functions within a business division have the potential to create significant business process improvements along with a sizable return on investment. Based on the results of the business case analyses, the Strategy and Resources Committee, headed by the Deputy Commissioner of Operations and Support, decides whether or not to compete the functions through public-private competition. 
According to IRS officials, this process enhances the opportunities to make smart business decisions that are aligned with and supportive of the IRS Strategic Business Plan. IRS has focused its competitive sourcing efforts primarily on more strategic and enterprisewide competitions because it has determined that this approach makes more economic sense than identifying candidates in smaller groups. The official stated that IRS’s initial step for identifying the functions that will be considered for a business case analysis is its review of the FAIR Act inventory, which has been merged with the IRS personnel staffing database in a software application. This application, unique among the agencies that we reviewed, crosswalks the FAIR Act inventory data with personnel staffing data to provide a comprehensive analysis of the various commercial function groupings across IRS. After identifying these groupings, the bureau’s subject matter experts and high-level managers, along with hired contractors, conduct business case analyses of these positions. As we reported, the business case analyses, which are completed in approximately 4 to 6 months, calculate the economic benefits of potential alternatives based on IRS responses to a number of critical questions: Is the function core to the mission? How much does the function cost? Is there potential to reduce cost and/or improve productivity by competing the function? How does the function fit into other current or planned strategic projects? Based on the responses to these questions and on analyses of current operations, market research, and an MEO design, IRS calculates and considers the economic benefits of each potential alternative and the upfront and recurring investments required to achieve and maintain efficiencies. 
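As a rough sketch of the kind of calculation involved, comparing alternatives might look like the following. The dollar figures (in millions), the 5-year horizon, and the alternative labels are assumptions made for illustration only; they are not IRS data or IRS's actual model.

```python
# Illustrative sketch: net benefit of each sourcing alternative over a
# fixed horizon. All figures and the horizon are assumed for the example.

def net_benefit(annual_savings, upfront_cost, annual_recurring_cost, years=5):
    """Cumulative savings over the horizon, less upfront and recurring investments."""
    return annual_savings * years - upfront_cost - annual_recurring_cost * years

# Hypothetical alternatives for one commercial function ($ millions, assumed).
alternatives = {
    "retain as-is": net_benefit(0.0, 0.0, 0.0),
    "re-engineered in-house MEO": net_benefit(2.0, 1.5, 0.3),
    "public-private competition": net_benefit(3.0, 2.5, 0.5),
}

best = max(alternatives, key=alternatives.get)
print(f"Preferred alternative: {best} (net benefit ${alternatives[best]:.1f}M)")
```

A fuller model would also weight non-financial factors, such as strategic alignment and investment risk, as the next paragraph notes.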
IRS then makes a decision to compete or not compete based on weighted values assigned to IRS strategic business alignment, investment risks, return on investment, FAIR Act goal alignment, and alignment with President’s Management Agenda goals. A key success factor in this approach is expert validation of the assumptions used in the business case, as well as the inclusion of significant direct and indirect costs associated with the function. According to an IRS official, if competing a function makes the best business sense, IRS appoints a team leader who selects a team and obtains contractor support to plan and develop the performance work statement. Throughout the entire business case analysis and competitive sourcing lifecycle, the IRS Office of Competitive Sourcing is engaged and provides support to the various teams. Officials from IRS’ competitive sourcing program cited many benefits from the business case approach used during the preliminary planning stage: up-front consideration of major decision variables such as economics, market research, and risk; involvement of top-level management and leadership at the very early stages of the process; an opportunity to test candidate projects against strategic goals and performance improvement objectives; and a low investment requirement to qualify or reject an activity as a competitive sourcing project. According to an IRS official, while the time and cost of making a decision to compete or not to compete may seem excessive, once IRS conducts a public-private competition, it has confidence in the business case’s projected return on investment and an understanding of how conducting a particular set of business functions fits into the IRS strategic plan for business improvements and human capital goals. The Army’s experience in using a strategic approach to classify positions offers lessons for other agencies in identifying positions for competitive sourcing studies. 
In attempting to determine whether functions were core or non-core to the agency’s mission, the Army found that such a distinction did not, by itself, adequately inform sourcing decisions. For example, the Army’s core competency review showed that designating a function as “core” does not necessarily mean that in-house employees should perform the function or necessarily preclude competitive sourcing of the function. As we reported, Army officials found that other factors, such as risk and operational considerations, must also be considered. The Army’s effort assumed that all commercial positions were non-core to its mission and thus potential candidates for performance by the private sector or other government agencies. However, Army officials recognized that, in many instances, these “non-core” functions would require additional analysis to determine the potential risks if the function were contracted. The Army’s risk analysis covers four categories: force management, operational, future challenges, and institutional. For example, Army officials determined that many medical functions, which are not classified as inherently governmental, could be considered core in some circumstances. Although medical functions typically do not require unique military knowledge or skills, medical activities in theater need to be performed by in-house personnel because contracting for medical support in host nations could present significant risk to U.S. armed forces. Consequently, the Army has determined that the in-theater medical mission is a critical element of the Army’s ability to accomplish its core competencies. Other medical functions could be considered both core and non-core. For example, optical fabrication—the ability to produce replacement spectacles and protective mask inserts—is considered a core competency in support of the operational forces close to the point of need in the area of engagement. However, the same function performed in the United States is not core. 
The Army also determined that its casualty and mortuary affairs function is neither a core nor an inherently governmental function. However, national policy dictates that Army officials notify families of a casualty in person. In June 2002, the Department of Education launched, with OMB approval, an ambitious management reform known as the “One-ED” concept. One-ED seeks to transform departmental operations through the integration of human capital reform, competitive sourcing, and organizational restructuring. As part of its One-ED approach, the Department developed its FAIR Act inventory by first analyzing key processes. It then used the results of this analysis to classify positions as either commercial or inherently governmental. As a result of this process, Education’s reported inventory data have changed significantly in the past few years, and according to senior officials, the data are now more accurate and concise. One-ED reviews cover selected elements of major departmental operations and are being implemented in four phases over a period of three years. In each phase, the Department (1) identifies specific business functions for review, (2) conducts a business case analysis of each function, and (3) decides whether to re-engineer the function or compete it with the private sector. Phase I, which concluded in mid-2003, focused on agency-wide support functions, such as human resources, payment processing, and legal review. As a result, five agency-wide support functions will be competed with the private sector and four will be re-engineered and retained in-house. In making this decision, nine teams—composed of approximately 60 employees knowledgeable about the functions being studied and assisted by contractor personnel trained in developing business case analyses—reviewed the functions and reported their findings to senior management.
These teams considered such factors as the skill sets and competencies required to perform the functions being reviewed, the potential risks associated with outsourcing the positions, and the relationship of the business function to the Department’s strategic planning. An Executive Management Team—chaired by the Deputy Secretary and staffed by senior Department officials—made the final determination using the information developed by the teams as well as other data. The Department initiated four standard competitions and one streamlined competition in fiscal year 2003. In addition, the Department is in the process of implementing proposals related to those business functions that were identified for in-house re-engineering. These projects were not completed at the time of our review. The Department’s Office of Inspector General will report on its assessment of the implementation of the One-ED initiative in early 2004.

In August 2001, the administration announced competitive sourcing as one of five initiatives in the President's Management Agenda. Under competitive sourcing, federal agencies open their commercial activities to competition among public and private sector sources. While competitive sourcing is expected to encourage innovation and improve efficiency and performance, it represents a major management change for most agencies. This report describes the progress selected agencies have made in establishing a competitive sourcing program, identifies major challenges these agencies are facing, and discusses strategies they are using to select activities for competition. Since the President announced competitive sourcing as a governmentwide initiative, the six civilian agencies GAO reviewed created a basic infrastructure for their competitive sourcing programs, including establishing offices, appointing officials, hiring staff and consultants, issuing guidance, and conducting training.
With infrastructures in place and leadership involvement, each agency has developed competitive sourcing plans and conducted some competitions. The Department of Defense (DOD) has had an extensive competitive sourcing program since the mid-1990s. Interagency forums for sharing competitive sourcing information also have been established. While such activities are underway, each agency GAO reviewed, including DOD, cited several significant challenges in achieving its competitive sourcing goals. Key among these is maintaining workforce inventories that distinguish inherently governmental positions from commercial positions—a prerequisite to identifying potential positions to compete. Agencies also have been challenged to develop competitive sourcing approaches that would improve efficiency, in part because agencies have focused more on following OMB guidance on the number of positions to compete—not on achieving savings and improving performance. Ensuring adequate personnel with the skills needed to run a competitive sourcing program also challenged agencies. Many civilian department-level offices have only one or two full-time staff to interpret new laws, implement new OMB guidance, maintain inventories of competable positions and activities, and oversee agency competitions. The Federal Acquisition Council is currently identifying agency staffing needs to address this challenge. Finally, some of the civilian agencies we reviewed reported funding challenges in implementing their competitive sourcing programs. OMB told agencies to include a line item for competitive sourcing activities in their fiscal year 2005 budget requests. Several agencies integrated their strategic, human capital, and competitive sourcing plans—an approach encouraged by the Commercial Activities Panel, which was convened to conduct a congressionally mandated study of the competitive sourcing process.
For example, the Internal Revenue Service (IRS) used business case analyses to assess the economic benefits of various sourcing alternatives. An IRS official said this approach required minimal investment to determine an activity's suitability for competitive sourcing. The National Institutes of Health, the Army, and the Department of Education also took a strategic approach to competitive sourcing. OMB's task in balancing the need for transparency and consistency with the flexibility agencies need is not an easy one. While OMB is addressing funding and human capital challenges, it needs to do more to assure that the agencies' inventories of commercial positions and goal-setting processes are more strategic and helpful to agencies in achieving savings and improving performance.
Used only sparingly in past military operations, UAVs are now making national headlines as they are used in ways normally reserved for manned aircraft. UAVs come in a variety of sizes and configurations, ranging from as small as an insect to as large as a small commercial airliner. In our work, we focused on mini, tactical, and strategic UAVs. According to available analysis, mini and tactical UAVs constituted the vast majority of the UAV systems in operation from 2005 to 2011, while strategic UAVs included some of the most versatile UAVs, typically capable of operating at altitudes of up to 30,000 to 45,000 feet with a maximum endurance of more than 20 hours. Figure 1 briefly describes these three types of UAVs. The two principal multilateral regimes that address exports of UAVs are the MTCR and Wassenaar. MTCR, established in 1987, is a voluntary association of 34 countries that share the goal of limiting the spread of ballistic and cruise missiles and UAVs capable of delivering weapons of mass destruction. Wassenaar, established in 1996, is a voluntary association of 41 countries that share the goal of limiting the spread of certain conventional weapons and sensitive dual-use items having both civilian and military applications. Both are consensus-based, requiring all members to agree to any proposed changes in regime documents or activities. In both instances, members agree to restrict exports of sensitive technologies by placing them on commonly agreed lists and incorporating those lists into their national export control laws and regulations. Members also conduct activities in support of the regimes, such as sharing information about denied license applications and conducting outreach to countries that are not members of the regimes. Wassenaar has two control lists: a munitions list and a dual-use list. The MTCR members control a common list of items, which is contained in the MTCR Annex.
The Annex covers complete missile systems, including rocket systems and UAVs, as well as a broad range of equipment, software, and technology. The MTCR Annex consists of two categories of items: Category I and Category II. Under MTCR, complete UAV systems can be controlled as either a Category I or a Category II system, depending on their range and payload capacity. Category I UAVs are considered the most sensitive and include strategic UAVs capable of delivering a payload of at least 500 kilograms (about 1,100 pounds) to a range of at least 300 kilometers (approximately 186 miles). MTCR member nations considering the export of these UAVs commit to apply a “strong presumption of denial” standard regardless of purpose, meaning that such transfers should occur only on rare occasions and only in instances that are well justified under the MTCR Guidelines. Category II UAVs are considered less sensitive, consisting primarily of UAVs that do not meet Category I criteria but are capable of flying at least 300 kilometers. While these items require review through national export control systems, they are not subject to the MTCR “strong presumption of denial,” except for exports judged by the exporting country to be intended for use in delivering weapons of mass destruction. MTCR members have agreed to a “no undercut” policy for all MTCR-controlled items, meaning that members have agreed to consult with each other before considering the export of an item on the list that another member has notified as denied pursuant to the MTCR Guidelines. Several U.S. laws authorize the sale or transfer of export-controlled technologies from U.S. companies or the U.S. government to foreign countries, or in certain cases foreign entities. The Arms Export Control Act of 1976, as amended, provides the President the authority to control the sale or transfer of defense articles and services.
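The MTCR range and payload thresholds described above reduce, for complete UAV systems, to a simple decision rule. The sketch below is an illustrative simplification: the actual Annex also controls many components and subsystems, and real category determinations involve technical judgments not captured here.

```python
def mtcr_category(payload_kg, range_km):
    """Illustrative classification of a complete UAV system using the MTCR
    thresholds described in the text (a simplification of the Annex)."""
    if payload_kg >= 500 and range_km >= 300:
        # Most sensitive: exports face a "strong presumption of denial"
        return "Category I"
    if range_km >= 300:
        # Reviewed under national export controls; presumption of denial
        # applies only when judged intended for WMD delivery
        return "Category II"
    return "Below MTCR complete-system thresholds"

mtcr_category(600, 500)   # "Category I"
mtcr_category(100, 350)   # "Category II"
```

The Missile Annex Review Committee example later in this section illustrates how producers design to these same thresholds: keeping a system below the 500-kilogram/300-kilometer line keeps it out of Category I.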
Under the Arms Export Control Act, State’s Directorate of Defense Trade Controls (DDTC) licenses direct commercial sale (DCS) exports of defense articles and services on the U.S. Munitions List, while DOD’s Defense Security Cooperation Agency (DSCA) administers the FMS program under the supervision and general direction of State. In addition, the Arms Export Control Act, as amended, requires end-use monitoring for the sale or export of defense articles and services, and delegates these responsibilities to the same agencies that administer the program. DDTC administers the Blue Lantern program to conduct end-use monitoring for defense articles exported under DCS, while DSCA administers the Golden Sentry program to monitor the end-use of defense articles transferred through FMS. Dual-use exports are controlled under the Export Administration Act of 1979, as amended (50 U.S.C. App. §§ 2401-2420), which is not permanent legislation. Since August 21, 2001, the Export Administration Act has been in lapse; however, the President has continued the regulations in effect through Executive Order 13222 of August 17, 2001 (3 C.F.R., 2001 Comp. 783 (2002)), most recently extended by Presidential Notice on August 12, 2011, under the authority provided by section 202(d) of the National Emergencies Act (50 U.S.C. § 1622(d)). Commerce’s Bureau of Industry and Security (BIS) administers Commerce’s end-use monitoring program for technologies covered by the Commerce Control List. The U.S. export control enforcement system consists of multiple agencies. Within DHS, Immigration and Customs Enforcement (ICE) investigates suspected export control violations involving both U.S. Munitions List and Commerce Control List items. In addition, DHS’ Customs and Border Protection (CBP) inspects selected exports to determine whether proper licenses were obtained prior to shipment and may interdict suspicious items being shipped. Within Commerce, BIS’ Office of Export Enforcement has authority to investigate violations involving Commerce Control List items.
Within DOJ, the FBI can take the lead in certain export control investigations involving counterintelligence and counterterrorism. DOJ prosecutes suspected export control violations. Investigations can result in criminal prosecutions, fines, and imprisonment, or in administrative penalties, such as export denial orders barring a party from exporting any U.S. items for a specific period of time. Figure 2 shows the principal agencies that have a role in the export control process. There are also U.S. government agencies that gather and analyze information on the proliferation of UAV systems and related technologies and produce UAV-related threat assessments and other UAV-related information. The Director of National Intelligence serves as the head of the intelligence community, establishing objectives and priorities for collection, analysis, production, and dissemination of national intelligence. Moreover, the Defense Security Service (DSS) provides threat assessments in support of its mission to oversee the protection of U.S. classified information and data in the hands of cleared DOD contractors. The executive branch is currently considering reforms to the U.S. export control system in an Export Control Reform Initiative, including the creation of a single control list and a single information technology system. This initiative could affect export control licensing and enforcement efforts involving UAVs and related technologies and components. There has been rapid growth globally in UAV acquisition, development, and military applications. From 2005 to 2011, nations, including countries of proliferation concern and key allies, sought to improve their intelligence gathering and military aviation capabilities by developing and fielding their own UAV systems. Furthermore, militaries across the globe sought to expand the uses for UAVs, particularly in the area of armed strike missions.
UAVs are also increasingly used in a number of civil and commercial applications, such as law enforcement, but national and international regulations place restrictions on most of these applications. Our analysis of open source information shows a significant increase in the number of countries that have acquired a UAV system since 2005. In 2004, we reported that approximately 41 countries had acquired a UAV. Our review of current U.S. export licensing data and open source materials found that this number grew over the intervening period to at least 76 countries. Figure 3 provides a global picture of the countries that have acquired UAVs. The United States likely faces increasing risks as additional countries of concern and terrorist organizations acquire UAV technology. UAVs can provide countries and terrorist organizations with increased abilities to gather intelligence on and conduct attacks against U.S. interests. Alternatively, selected transfers of U.S. UAV technology support U.S. objectives by increasing allies’ capabilities and by strengthening the industrial base for UAV production in the United States. Available analysis has determined that foreign countries’ acquisition of UAVs can pose a threat because it puts U.S. military assets at increased risk of intelligence collection and attack. We were told that the significant growth in the number of countries that have acquired UAVs, including key countries of concern, has increased the threat to the United States. Because some types of UAVs are relatively inexpensive and have short development cycles, they offer even less wealthy countries a cost-effective way of obtaining new or improved military capabilities that can pose risks to the United States and its allies. We were informed that currently, the potential threat to the United States primarily involves tactical UAVs, rather than more sophisticated, strategic systems.
However, according to available analysis, countries of concern are pursuing more advanced UAVs through acquisitions from foreign suppliers and indigenous development. Such UAVs would be capable of flying higher, longer, and further and would be capable of a wider range of missions. According to a publicly released DSS report, many countries of concern seek to illegally obtain U.S. UAV technology as part of their strategy to advance their UAV capabilities. DSS reported in 2009 that foreign targeting of U.S. UAV technology through both overt and covert collection efforts had increased dramatically in recent years. According to DSS, the United States’ acknowledged status as a global leader in UAV development makes the U.S. defense industry a primary focus of foreign collection attempts. The targeted technologies included engines, optics sensors, communications gear, and guidance and navigation systems. We were informed that by acquiring UAVs, countries can enhance their capability to gather intelligence, surveillance, and reconnaissance information on U.S. forces and other assets. UAVs can allow countries to collect potentially harmful data on the location, strength, and movement of U.S. troops that can be used to more effectively plan or conduct attacks against U.S. interests. Available analysis also suggests that the use of UAVs by foreign parties to gather information on U.S. military activities has already taken place. We were informed that as more countries acquire UAVs, such intelligence, surveillance, and reconnaissance collection efforts are likely to increase. Hostile countries could also use UAVs to attack U.S. interests. 
While only a limited number of countries have fielded lethal or weaponized UAVs, this threat is anticipated to grow, given the number of countries pursuing the acquisition or development of such systems, including countries of concern. According to others’ analysis, as the number of countries with such capabilities increases, it will likely alter the nature of future conflicts because countries will be able to field a larger number of strike assets without risking their manned aircraft. Available analysis has also shown that terrorist organizations’ acquisition of UAVs to harm U.S. interests poses a risk for the United States. Certain terrorist organizations have acquired or are developing some form of UAV technology. For the most part, these organizations are currently limited to using smaller, more rudimentary UAVs, such as radio-controlled aircraft that are available worldwide from hobby shops or through the Internet. Hezbollah is one terrorist organization that has acquired and used UAV technology. Although no terrorist organization has successfully carried out an attack with a UAV to date, available analysis has found that there are likely some terrorist organizations interested in using UAVs to deliver both conventional and unconventional weapons. For example, in September 2011, the FBI arrested an individual in the United States on charges that he planned to crash radio-controlled unmanned airplanes loaded with explosives into the U.S. Capitol and the Pentagon. Available analysis has noted that there are likely advantages to using UAVs in terrorist attacks, but also factors that may limit the near-term risk. For instance, in certain situations, small UAVs could potentially be more precise in conducting terrorist attacks than other means, such as mortars or rockets. The impact of such attacks might be lessened, though, given the inability of small UAVs to carry large explosives.
However, if terrorists were able to equip UAVs with even a small quantity of chemical or biological weapons, an attack could potentially produce lethal results. Certain challenges were cited in acquiring the technology and expertise necessary to field a UAV sophisticated enough to carry out more destructive attacks with conventional weapons. Larger, more sophisticated systems would also potentially be harder to operate without detection. Although UAV proliferation poses risks, the U.S. government has determined that selected transfers of UAV technology can further national security objectives. The transfer of U.S. UAV systems to allies provides these countries with increased capabilities to contribute to U.S. efforts globally. It also helps ensure that allies’ military equipment is interoperable with that of U.S. forces. Allies have used UAVs acquired from the United States to support a variety of U.S. objectives. For instance, coalition partners have successfully deployed U.S. UAVs to assist in the wars in Afghanistan and Iraq. The U.S. Air Force reported that Italy effectively used Predators purchased from the United States to locate roadside bombs and weapons caches in Iraq, supporting coalition efforts to stabilize the country in advance of national elections. Italy and the United Kingdom also successfully deployed U.S. UAVs in Afghanistan to collect intelligence, surveillance, and reconnaissance data on Taliban activity. State officials said that allowing such sales improved Italy’s and the United Kingdom’s abilities to function with the United States in an interoperable manner and provided U.S. and NATO commanders with additional assets. Allies also used UAVs purchased from the United States in support of such U.S. security objectives as counternarcotics and counterterrorism operations. Additionally, DOD has noted the importance of allowing selected transfers of UAV technology in order to strengthen the U.S. industrial base for UAV production.
According to some U.S. government officials, the ability to sell American UAVs to foreign purchasers helps defray the U.S. government’s acquisition costs. U.S. government officials also noted that opening larger potential markets to American UAV producers provides additional incentives for producers to invest resources in the research and development of UAV systems, and helps the United States retain a technological lead over foreign UAV producers. According to private sector representatives, UAVs are one of the most important growth sectors in the defense industry and provide significant opportunities for economic benefits if U.S. companies can remain competitive in the global UAV market. The United States has used multilateral and bilateral diplomacy to address UAV technology advances and proliferation concerns. For instance, to address advances in UAV technology, the United States proposed several changes to the MTCR; however, MTCR members agreed to only one change. Moreover, nonmembers continue to acquire, develop, and export UAV technology. In addition to multilateral diplomacy, the United States used bilateral diplomacy in the form of demarches to foreign governments to address specific UAV proliferation concerns with countries. The United States proposed changes to address how the MTCR applies to UAVs, but MTCR members only reached a consensus to accept one of the changes. The United States principally focused these efforts through the MTCR because it addresses the potential use of UAVs to deliver weapons of mass destruction, according to State. According to documents provided by State and State officials, the United States proposed six UAV-related changes to the MTCR Annex and members accepted one. The five U.S.-sponsored UAV-related proposals that were not adopted were closely related. 
They were significant since they would have resulted in moving some UAVs currently categorized under MTCR Category I to Category II, according to State documents and State and DOD officials. However, MTCR members could not achieve a consensus to adopt the proposals. As we reported in 2004, both MTCR and Wassenaar use a consensus process that makes decision making difficult. MTCR last discussed these U.S. proposals in 2008 and removed them from its agenda the following year, pursuant to MTCR rules. MTCR members have adopted a total of 22 UAV-related technical changes during the 2005 to 2011 period, according to State. For instance, MTCR members adopted controls on turboprop systems used in Category I UAVs and inertial navigation systems in Category II UAVs, according to State officials. However, according to available analysis, only 7 percent of UAV systems are subject to MTCR’s strictest controls. The United States proposed three major changes to the Wassenaar control list, which members adopted. The first, adopted in 2005, added to the control list equipment and components specially designed to convert manned aircraft to UAVs, as well as equipment specially designed to control UAVs and guidance and control systems for integration into UAVs, among other things. The second, adopted in 2007, added to the control list engines designed or modified to power a UAV above 50,000 feet. The third, adopted in 2008, refined the control policy on navigation, altitude, and guidance and control systems for UAVs. While Wassenaar applies to the export of some military and dual-use systems used on UAVs, it does not apply to other dual-use enabling technologies, according to available analysis. Some of these dual-use technologies are critical to the development of UAV programs in certain countries of concern; however, they are difficult to control because they have other commercial applications.
Regime members agree to provide greater scrutiny to trade in technologies identified as sensitive by the regimes through their national laws and regulations. Regime members also share license application denial and other information. Our most recent work shows that some countries that produce and export UAVs do not belong to MTCR or Wassenaar. This fact raises concerns about the potential for nonmembers to undermine the regimes’ ability to limit UAV proliferation. In addition to employing multilateral diplomacy to address UAV proliferation concerns, the United States employed bilateral diplomacy, chiefly in the form of demarches, to address specific concerns with foreign governments. State provided to us approximately 70 cables containing UAV-related demarches issued to 20 foreign governments and a multilateral regime during the period from January 2005 to September 2011. Over 75 percent of the cables provided responded to efforts by a small number of countries of concern to obtain controlled and uncontrolled technologies for use in their UAV programs. While the regimes do not control the proliferation of all enabling technologies used by countries of concern to develop UAVs, the United States has issued demarches to foreign governments even for exports of certain uncontrolled technologies when these were clearly to be used for a military purpose. In addition, State cables show that several countries took actions in response to U.S. demarches. U.S. agencies coordinate in a variety of ways to control the spread of UAV technology, but could strengthen their processes for approving, monitoring, and enforcing export control requirements on UAVs. First, U.S. agencies have established procedures for coordinating the review and approval of UAV transfers, but limitations in information sharing hamper these efforts. 
Second, DOD, State, and Commerce each conduct end-use monitoring of some UAV technology, but differences in the agencies’ programs may result in similar items being subject to different levels of oversight. Third, U.S. agencies have coordinated UAV-related prosecutions and other enforcement actions, but the nature of UAV technology and general issues with export control investigations present enforcement challenges. Various U.S. government agencies, including Commerce, State, and DOD, play a role in the process to review and approve transfers of U.S. UAV technology to foreign purchasers. These agencies’ decisions are guided by regulatory controls that have been established to govern the transfer of both military and dual-use UAV technology. Controls on military UAV systems and related technology are outlined in the U.S. Munitions List, while controls on dual-use UAV systems and related technology are listed in the Commerce Control List. The Commerce Control List contains three Export Control Classification Numbers (ECCNs) exclusively dealing with UAV systems and related items: 9A012, 9A120, and 9B010. Additionally, we identified at least 29 other ECCNs that include controls on components or materials that can be used in UAVs. Unlike the Commerce Control List, the U.S. Munitions List does not include sections that outline controls for UAVs specifically. Rather, controls for military UAV technology fall under several more general U.S. Munitions List categories. For instance, applicable controls for complete UAV systems are contained in Category VIII of the U.S. Munitions List, which deals with aircraft and associated equipment more broadly. According to State and Commerce officials, U.S. controls on UAVs are primarily based upon the MTCR and Wassenaar control lists. In addition, U.S. law establishes unilateral controls that limit the transfer of various items, including UAV technology, to particular countries. For instance, State noted that the U.S. 
trade embargoes on countries such as Iran, North Korea, and Syria cover UAV technology, along with a wide array of other items. Additionally, the U.S. government has enacted laws that suspend the approval of any transfers of items on the U.S. Munitions List, including military UAV technology, to China. While State and Commerce are responsible for reviewing and approving export licenses for military and dual-use UAV technology, respectively, the U.S. government has established several mechanisms to coordinate these decisions with other relevant agencies. For instance, State and Commerce, as the lead licensing agencies, “staff” out license applications to other relevant agencies, including DOD’s Defense Technology Security Administration, for their review. State and Commerce officials noted that it is particularly important to provide licenses to DOD for review since DOD officials often have the technical expertise regarding particular items. Additionally, many UAV-related license applications are reviewed by the Missile Technology Export Control Group (MTEC). The MTEC is an interagency body chaired by State’s Bureau of International Security and Nonproliferation. It includes representatives from State’s DDTC, DOD, Commerce, NASA, and the Department of Energy. During the weekly MTEC meetings, participants can make recommendations to approve or deny licenses or propose conditions to be placed on these licenses. According to State, the MTEC assesses whether license applications are consistent with U.S. laws and regulations, nonproliferation policy, and international commitments. For instance, in one case, the MTEC and the Missile Annex Review Committee worked with a U.S. UAV producer to determine what modifications the company needed to make to one of its existing UAV systems to ensure that it was not inherently capable of delivering at least a 500 kilogram payload to a range of at least 300 kilometers.
The resulting design ensured that the UAV was classified as an MTCR Category II system and thus not subject to the “strong presumption of denial,” if the company sought to export the system. State and DOD also coordinate decisions regarding the transfer of military UAV technology through the FMS program. For instance, DOD procedures in its Security Assistance Management Manual specify that DSCA or State may initiate coordination to approve or disapprove a transfer within 5 days of receiving the information copy of the Letter of Request, which is a formal request from a country to purchase an item through FMS. DSCA consults with State on these requests in order to determine if there are any immediate objections to the proposed sale within the U.S. government. Further, State must approve any arms transfer through FMS. The U.S. government has authorized the export of a range of UAV technology, but database limitations impair its ability to oversee the release of such technology. The U.S. government approved the export or transfer of a range of complete military and dual-use UAV systems, as well as key UAV components, from fiscal years 2005 through 2010, but it has no comprehensive view of the volume of UAV technology it authorized for export. Specifically, State’s licensing database was not designed to produce complete data on the number, types, and value of UAV technology that State has licensed for export. Since State’s database organizes items by U.S. Munitions List category and subcategory, and the list has no dedicated category or subcategory for UAV technology, State lacks an effective means of querying the database to identify UAV-related licenses. In July 2009, State issued a request that exporters list in the “purpose” field of their export license application if an item was a “UAV-related license,” covered under certain subcategories within Category VIII of the U.S. Munitions List. 
State issued this request to assist it in routing license applications to the appropriate internal unit for review, rather than to facilitate monitoring of the volume of UAV technology authorized for export, according to State officials. Although State has issued the request to exporters, it does not have procedures to ensure that exporters comply with this request, and the request does not apply to UAV-related licenses involving items not covered by Category VIII. In announcing the request, State noted its intention to automate this process, but had not done so as of February 2012. In contrast, Commerce’s database does allow for identification of UAV-related items falling under the Commerce Control List’s three UAV-specific ECCNs. However, it has limitations in determining the extent to which certain UAV components have been authorized for export. The Commerce Control List contains at least 29 other ECCNs that control items that are used in UAVs, but can also be used for other purposes. For items controlled under these 29 ECCNs, Commerce’s database does not provide a means for easily determining which items authorized for export are to be used in UAVs and which are to be used for other purposes, such as in manned aircraft. DOD’s system for recording FMS cases is better able to provide a complete picture of UAV technology that has been transferred overseas via FMS. These limitations in the U.S. government’s licensing data impair the ability of U.S. agencies and Congress to oversee the release of sensitive UAV technology. As a result, U.S. agencies may face additional challenges in working to effectively counter UAV proliferation. For instance, U.S. officials may lack complete information on relevant past licensing decisions when determining whether or not to grant an export license for a particular UAV item. Additionally, these data issues reduce U.S. 
agencies’ ability to conduct analysis of denied UAV-related license applications to determine if there are particular trends in questionable parties’ attempts to acquire UAV technology, according to U.S. government officials. Despite these limitations, we analyzed State and Commerce licensing data, as well as FMS data, to estimate the extent to which the U.S. government authorized the export of UAV technology in fiscal years 2005 through 2010. In total, the U.S. government approved FMS transfers of complete UAV systems in 15 cases over the period. Additionally, we identified 1,278 UAV-related licenses that State processed over the period. Of these, State approved 90 percent, denied 3 percent, and returned 7 percent to the applicant without action. We could not accurately determine the number of approved licenses that were for complete UAV systems, given limitations in State’s database, but the data indicate that State authorized the export of several complete UAV systems, including the Desert Hawk, the ScanEagle, and the Raven. From fiscal years 2005 through 2010, we identified 134 licenses to export dual-use UAV technology that Commerce processed. It approved 74 percent of these applications, denied 2 percent, and returned 24 percent without action. Of the 99 licenses that Commerce approved, we identified at least 55 that appeared to involve complete dual-use UAV systems based upon the descriptions in Commerce’s data. In addition to complete UAV systems, the U.S. government authorized the export of an array of UAV components and subsystems. Table 1 shows a breakdown of the estimated number of UAV-related licenses for fiscal years 2005 through 2010. The U.S. government authorized the transfer of UAV systems to a variety of countries over fiscal years 2005 through 2010. 
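The disposition percentages above can be reproduced from the underlying counts. A minimal sketch using the Commerce figures from the text (99 approved, 3 denied, and 32 returned without action, out of 134 applications); the list layout is illustrative sample data, not Commerce's actual schema:

```python
from collections import Counter

# Sample data reflecting the 134 Commerce license applications cited above.
licenses = (
    ["approved"] * 99
    + ["denied"] * 3
    + ["returned without action"] * 32
)

counts = Counter(licenses)
total = len(licenses)

# Tally each disposition and its share of the total.
for disposition, n in sorted(counts.items()):
    print(f"{disposition}: {n} ({100 * n / total:.0f}%)")
```

Rounding 99/134, 3/134, and 32/134 yields the 74, 2, and 24 percent figures reported in the text.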
For instance, it authorized the transfer of military UAVs to NATO allies such as Denmark, Italy, Lithuania, and the United Kingdom, as well as other countries such as Australia, Colombia, Israel, and Singapore. In addition to the U.S. government’s limited ability to determine the volume of authorized UAV exports, U.S. licensing agencies have limited information sharing mechanisms with the intelligence community. Both State and Commerce officials stated that the intelligence community does not have a formal process in place to directly provide them timely and relevant intelligence to assist in the licensing process. For instance, intelligence agencies may be consulted by the MTEC on occasion, but they are not routinely represented at weekly meetings. Some intelligence agencies participate in the Missile Trade Analysis Group, a State-chaired interagency working group responsible for stopping specific shipments of missile and UAV proliferation concern worldwide. Although the group is not directly involved in licensing issues, State officials noted that representatives from State’s DDTC and Commerce’s BIS attend the group’s meetings to help ensure a strong working relationship with licensing agencies. Moreover, State officials stated that having both the MTEC and the Missile Trade Analysis Group chaired by State’s Bureau of International Security and Nonproliferation helps ensure coordination and information sharing on issues affecting both groups. According to State and Commerce officials, certain intelligence agencies previously had a more formalized role in the licensing process, but chose to remove themselves from it in 2008. For instance, State officials stated that certain intelligence agencies had previously participated in the MTEC and helped validate the bona fides of foreign parties in license transactions. 
Additionally, Commerce officials reported that one intelligence agency had previously hired contractors to screen foreign parties in Commerce export license applications against intelligence reporting. According to Commerce officials, this agency decided to end its formalized support for the licensing process due to budget cuts and other priorities. State officials said that, since 2008, State has struggled to get timely and relevant intelligence information to assist in licensing decisions. Additionally, Commerce officials stated that they did not believe they were getting access to all pertinent intelligence information as part of their license review process. Some DOD officials also expressed concern with the lack of official mechanisms for the intelligence and licensing agencies to coordinate and noted that some derogatory information available to them on parties listed on license applications may not be getting factored into licensing decisions. According to U.S. government officials, the administration is currently discussing how the intelligence community can provide better support to the licensing agencies. Additionally, Commerce noted that it has received funding to establish its own intelligence center, known as the Strategic Intelligence Liaison Center, within BIS, to fill the gaps caused when the intelligence community stopped reviewing Commerce export licenses. The center will, among other things, check the names of parties in license applications against intelligence systems, as was previously done by the intelligence community. While the focus of the center will be on Commerce export licenses, Commerce officials stated that they are working with other relevant agencies to ensure that the information the center generates is available to them, as appropriate. Commerce stated that the center was established as of the end of 2011. State, Commerce, and DOD each conducts end-use monitoring on some UAV-related exports and transfers. 
Since our previous report on UAV proliferation, all three agencies have taken some steps to increase their end-use monitoring of UAVs and related items. In 2004, the director of the Office of Enforcement Analysis within Commerce’s BIS issued a memo to his staff that highlighted the need to focus greater attention on conducting end-use monitoring of UAV exports. The memo identified certain types of items that should have priority for end-use monitoring, given their utility in developing UAVs. Unlike Commerce, State issued no specific guidance on how to target its end-use monitoring of military UAV technology. Although State has not issued UAV-specific end-use monitoring guidance, State has identified UAVs as an example of a sensitive commodity that might trigger a Blue Lantern check, given the negative impact on national security if the item were to be diverted or illicitly retransferred. State officials said that they consider a variety of factors when making a determination as to whether end-use checks on sensitive items, including UAV technology, are warranted. For instance, State may be more likely to do a Blue Lantern check if the end-user has no established history with controlled items, if the number of items ordered by the end-user is more than would reasonably be needed, if the shipment involves an illogical routing, or if the purchaser is paying in cash or at above-market rates. Shortly after our 2004 report, DOD took steps to strengthen its end-use monitoring of UAV technology transferred via FMS. In March 2004, DOD announced that MTCR Category I UAVs would be among those items subject to enhanced end-use monitoring under the Golden Sentry program. 
For those items subject to enhanced end-use monitoring, DOD officials stationed in the host country are required to conduct inventories of transferred items following delivery and at regular intervals thereafter to verify that the items are accounted for and being used in accordance with the terms and conditions of the transfer. DOD can also require enhanced end-use monitoring on non-Category I UAVs, if the transfer is deemed to pose significant enough risk to warrant such a step. DOD officials reported that, as of February 2012, there had only been one instance where DOD required enhanced end-use monitoring for a non-Category I UAV. Items not requiring enhanced end-use monitoring are subject to routine end-use monitoring under the Golden Sentry program. Routine end-use monitoring is conducted in conjunction with other required security-related duties. For example, U.S. officials might observe how a host country’s military is using U.S. equipment when visiting a military installation on other business. Given the large volume of defense articles transferred through FMS, DSCA officials have instructed DOD personnel to concentrate routine end-use monitoring efforts on a “watch list” of specific categories of items. DOD has included UAVs among the items on the watch list. However, some DOD officials whom we interviewed, as well as officials interviewed by other GAO teams in 2011, noted that there was not clear guidance on the activities that constitute routine end-use monitoring and how to document these efforts. The majority of end-use monitoring done for UAV-related items has had favorable results, but agencies found problems in some cases. From fiscal years 2005 through 2010, State identified 45 UAV-related Blue Lantern checks that it conducted and Commerce identified 201 UAV-related end-use checks that it conducted. 
Of the checks State identified as being UAV-related, 66 percent resulted in favorable findings, 16 percent in unfavorable findings, and another 18 percent were inconclusive. Of the checks Commerce identified as being UAV-related, 58 percent were favorable, 6 percent were unfavorable, and the remaining 36 percent had limited or inconclusive results. Of the checks that were unfavorable, some identified significant concerns related to unauthorized end-users or end-uses. For instance, State conducted a Blue Lantern check as part of a request to amend a license application to allow for the provision of additional services to one country in support of a U.S. UAV it had already purchased. State found that the country was basing and operating the UAV in a manner that violated the U.S. government’s prohibition against using U.S. Munitions List items in internationally disputed territory. Thus, the check was deemed unfavorable. All three agencies have conducted end-use monitoring on UAV technology, but differences in their respective end-use monitoring programs may result in similar types of items being subject to different levels of oversight. Further details of these differences in U.S. agencies’ end-use monitoring programs for UAVs are addressed in the classified version of the report. U.S. agencies may also have differing levels of access to facilities and equipment when conducting end-use monitoring, contributing to differences in the level of oversight of exported items. Although DOD requires that countries agree to permit inventories and physical inspections as a condition of FMS transfers, State sometimes lacks this type of agreement from countries for items exported through DCS. In fact, U.S. government officials noted that some bilateral agreements prohibit U.S. officials from directly conducting end-use monitoring on State-licensed items. 
Even when State does have such authority, it inconsistently visits end-users to verify compliance with license conditions, in at least some countries. For instance, we reported in November 2011 that State infrequently visited end-users in Persian Gulf countries when conducting Blue Lantern post-shipment checks on night vision devices. U.S. agencies coordinated their UAV enforcement actions through several mechanisms, including the National Export Enforcement Coordination Network, and the Exodus Command Center, but officials acknowledged limitations with each. We have previously reported on challenges in enforcing export control laws and regulations more generally. Among other things, we found enforcement agencies have had difficulty coordinating cases and agreeing on how to proceed on investigations. The National Export Enforcement Coordination Network (NEECN) was designed to be a hub for coordination on export control investigations. Among other things, NEECN assisted law enforcement agencies in apprising each other of investigative leads, disseminating investigative leads to law enforcement field offices, providing support to ongoing investigations, and identifying proliferation trends. As of November 2011, NEECN was replaced by the new Export Enforcement Coordination Center, as part of the administration’s export control reform initiative. To help ensure greater coordination, the administration has required key agencies to partner in this effort in contrast with NEECN, which was a voluntary effort and at times suffered from a lack of agency participation, according to some law enforcement officials. Another key coordination mechanism is the ICE-led Exodus Command Center. Enforcement agencies, including ICE and CBP, submit license determination requests through the center to confirm with State or Commerce whether a particular item requires a license, and if so, whether the required license has been obtained. 
During fiscal years 2005 through 2010, law enforcement officials used the Exodus Command Center for license determination requests involving UAV-related technology; however, details of these requests are designated as sensitive but unclassified and are not reported here. Law enforcement officials noted that while the Exodus Command Center is a key tool, license determination requests can take a significant amount of time, thus impacting their ability to move forward on investigations or other enforcement actions. In March 2012, we issued a report that explores in more detail the challenges that law enforcement agencies face in investigating illicit transshipments, including license determination delays. U.S. agencies have worked together to take certain enforcement actions against violators of export control laws and regulations on UAV technology. Based on our analysis of DOJ reporting on export control enforcement prosecutions from October 2006 through June 2011, we identified at least seven prosecutions involving attempts to illegally export UAV-related technology. For instance, in 2009, a District of Columbia couple pleaded guilty to making false statements regarding the export of autopilots for mini UAVs to China. According to U.S. enforcement officials, they encountered certain difficulties enforcing export laws and regulations on UAVs that are common across all export control investigations. For instance, of the 34 closed investigations that ICE identified for us as being UAV-related in fiscal years 2005 through 2010, none of the cases resulted in a criminal prosecution. In the majority of the 34 cases, the investigations were closed as a result of investigators losing touch with the suspects outside of the country. We previously reported that many suspects in export control violation cases are located outside of the country and foreign governments may not always choose to cooperate with U.S. law enforcement officials. 
Law enforcement officials also identified two issues that make UAV cases particularly difficult to pursue. DOJ officials noted that it can be difficult to prosecute cases involving export control violations, particularly those involving dual-use technologies, because proving that a violation took place typically involves showing that the commodity in question was specifically designed for use in a technology or application requiring an export control license. For instance, in the 2009 case discussed previously, DOJ ultimately prosecuted the District of Columbia couple for making false statements and not for illegally exporting the autopilots to China. DOJ did this because prosecutors could not prove that the autopilots were specially designed for use in military UAVs, despite evidence that this was their intended use, according to DOJ officials. As part of the administration’s efforts to move items on the U.S. Munitions List to the Commerce Control List, Commerce issued a proposed rule in the Federal Register in July 2011 defining what is meant by “specially designed” and requesting public comment on the proposed definition. The comment period for this proposed rule closed on September 13, 2011. After reviewing the comments submitted and further reviewing the issue, Commerce issued another proposed rule further revising the definition of “specially designed” in the Federal Register in June 2012. The comment period for this proposed rule will close on August 3, 2012. In addition, ICE, CBP, and Commerce officials noted that it is often difficult for law enforcement officials to determine whether violations are occurring because many law enforcement officials lack the technical skills to differentiate controlled UAV components from similar components used in model aircraft or ultralights, which are not subject to export control restrictions. 
Commerce officials also noted that the rapidly evolving nature of the technologies for use in UAVs could make it more difficult for law enforcement to readily identify these technologies in the future. According to ICE officials, to provide law enforcement officials with the technical skills to identify UAV-related technologies, ICE has provided UAV training to its agents in multiple locations throughout the country. Commerce has also provided technical training to law enforcement officials; however, this training did not specifically focus on UAV technologies, according to Commerce officials. Multiple factors highlight past and likely persistent limitations of U.S. efforts to control the proliferation of UAV technology through the export control process. First, the key trends in the acquisition, development, and applications of UAV technology globally show enormous growth in demand for military uses of UAVs, including for lethal applications, and an increasing ability of countries to acquire or develop their own systems. While only a few countries will have a near-term ability to develop and field the most sophisticated systems, many are expected to have sufficiently capable UAVs that could threaten U.S. forces and interests. Second, the U.S. government recognizes the risks related to the proliferation of UAV technology, but faces difficulties setting controls on systems and components that countries of concern are interested in obtaining. Third, the U.S. government has made extensive use of multilateral and bilateral mechanisms to restrict the proliferation of UAV technology, but as we reported in 2004, the nonbinding and consensual nature of multilateral export control regimes can challenge the U.S. government’s ability to achieve its objectives in these forums. While technological advances and the consensual nature of the multilateral export control regimes complicate the task of avoiding widespread proliferation to U.S. adversaries, the U.S. 
government can take steps to better coordinate its efforts to address national security considerations through its controls on the transfer and export of UAV technology. For instance, some agencies have routine and formal roles in reviewing licenses, but others have no formal mechanism to share significant information with each other. In fact, the role of some agencies with potentially important information to provide has diminished in recent years. Furthermore, U.S. government efforts to provide reasonable assurance that UAV exports and transfers are used as intended are marked by differing levels of protection through State and DOD end-use monitoring activities. As we previously reported on a similar situation involving night vision devices for Persian Gulf countries, major differences in the two agencies’ monitoring programs need to be harmonized. Finally, certain information that would be useful to executive branch and congressional decision-making is unavailable because State’s licensing database cannot readily identify all licenses authorizing military UAV exports. Thus, the U.S. government cannot readily identify the full range of UAVs it has authorized for export to foreign countries. We are making three recommendations: As part of the Administration’s export control reform initiative, we recommend that the Secretary of State establish a mechanism in the licensing database to better enable the identification of licenses authorizing the export of UAVs and related components and technologies. We recommend that all U.S. agencies with information relevant to the export licensing process seek to improve mechanisms for information sharing. To close gaps in the implementation of UAV end-use monitoring programs that may limit the ability of DOD and State to adequately safeguard defense articles upon their arrival and basing, we recommend that the Secretaries of State and Defense take steps to harmonize their approaches to end-use monitoring. 
Such steps might include developing a plan for how and when each agency’s end-use monitoring approaches would be harmonized. We provided a draft of our February 2012 classified report to State, DOD, Commerce, DHS, DOJ, and the CIA for their review and comment. State, DOD, Commerce, and DHS provided written comments. We have reprinted DHS’ written comments in appendix IV. State’s, DOD’s, and Commerce’s comments discussed classified information and cannot be publicly released; however, we have included an unclassified summary of their comments, as well as those of DHS. State, Commerce, and DOD also provided technical comments, as did the CIA, which we incorporated in the report as appropriate. State agreed with our recommendation to establish a mechanism in the licensing database to better identify licenses authorizing the export of UAVs and related components and technologies. According to State, the U.S. Munitions List is being rewritten to redefine its controls on UAVs and better differentiate them from controls on other military aircraft. State noted that these changes to the U.S. Munitions List, along with the introduction of USXports as the U.S. government’s export control licensing case management system, will provide an opportunity to improve database collection and facilitate the identification of UAV licenses. State, DOD, and Commerce agreed with our recommendation to take additional steps to establish better interagency information sharing. According to State, the administration is currently trying to address such concerns as part of its export control reform initiative. Both DOD and Commerce noted that as part of the administration’s export control reform initiative, a new unit, known as the Information Triage Unit, is being established to facilitate information sharing among various U.S. agencies. To begin implementing the functions of the Information Triage Unit, Commerce noted that it has established a Strategic Intelligence Liaison Center. 
DHS and Commerce noted the role of the Export Enforcement Coordination Center with respect to the exchange of export control-related information among certain U.S. agencies. State and DOD also agreed with our recommendation to harmonize their approaches to the end-use monitoring of UAVs. State said that it has made and will continue to make improvements in its end-use monitoring program. State also said that the report lacks some critical perspective on the number and scope of transfers involving the most sophisticated UAVs. We acknowledge that the United States has to date transferred only a limited number of more sophisticated UAVs, but this does not lessen the importance of ensuring that UAVs the United States transfers to foreign recipients are well protected. Additionally, we note that U.S. government officials we met with anticipate that the number of such UAVs transferred will increase in the future. Thus, the importance of effective U.S. end-use monitoring of UAVs will likely continue to increase over time. DOD stated that it welcomes the opportunity to work with State on the end-use monitoring issues raised in our recommendation. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the secretaries and agency heads of the departments addressed in this report, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-9601 or at melitot@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. 
To assess the global trends in the development, acquisition, and application of unmanned aerial vehicle (UAV) technology worldwide since 2005, we obtained, analyzed, and corroborated private sector open source and U.S. government reporting on U.S. and foreign UAV activities from various sources and spoke to representatives of U.S. private sector associations representing companies that manufacture UAVs. For this report, we defined the term “acquisition” to mean those countries that have obtained complete UAV systems, as well as the countries from which they acquired these UAVs. We defined the term “development” to mean those countries producing and supplying UAVs and the systems they are building. The term “applications” addressed the tasks that UAVs perform and the limitations in their capacity to achieve these tasks. Private sector associations we met with included the Association for Unmanned Vehicle Systems International and the Aerospace Industries Association. We also interviewed representatives of UAV manufacturers in the United States, as well as various analysts within the U.S. government who track UAV issues. We also obtained copies of their briefs as well as some of their reports. In addition, to get a better understanding of the regulatory and technological limitations that affect UAV development, we met with officials from the Federal Aviation Administration. As our trend assessment dealt with global trends, we also met with industry, trade association, and foreign government officials in three countries—Israel, Italy, and the United Kingdom—and obtained reports on their UAV programs. We selected these countries based on analyses of open source reporting and Department of Defense (DOD) and Department of State (State) data showing that these countries either have extensive experience operating U.S.-made UAV systems or are important producers of UAVs and related components. 
We traveled to Naval Air Station Patuxent River in Patuxent River, Maryland, to gain a firsthand understanding of the current state of UAV technology, observing the U.S. Navy’s Broad Area Maritime Surveillance-Demonstrator system and the Shadow 200. The Broad Area Maritime Surveillance-Demonstrator UAV is based on the Global Hawk platform, a strategic UAV, while the Shadow 200 is a tactical UAV in use by the U.S. Army that is currently undergoing modification for use by the U.S. Marine Corps. To assess the national security considerations associated with the proliferation of UAV technology, we met with private sector and U.S. government analysts knowledgeable about UAVs. We also obtained and analyzed a range of private sector reports, as well as unclassified and classified intelligence community reports and briefings, discussing the threats associated with the spread of UAV technology to countries of concern and terrorist organizations. Additionally, we interviewed officials from State, the Department of Commerce (Commerce), and DOD to gather information on the key risks and benefits associated with the spread of UAV technology. To better understand the security considerations associated with transfers of U.S. UAV technology to U.S. allies, we met with foreign government and U.S. embassy officials in Italy to document the Italian Ministry of Defense’s experience purchasing and operating U.S.-made systems. While in London, we were unable to meet with United Kingdom military officials knowledgeable about their experience purchasing and operating U.S. systems, but we obtained written responses to questions from the United Kingdom’s Ministry of Defense and met with U.S. embassy officials familiar with the United Kingdom’s experience. To assess the extent to which the U.S. 
government used the multilateral regimes and bilateral demarches to foreign countries to address UAV technology proliferation, we obtained and analyzed classified State reporting cables documenting the results of the 2005 through 2009 Missile Technology Control Regime (MTCR) plenaries and other meetings. We also reviewed various MTCR and Wassenaar Arrangement (Wassenaar) documents, including the two regimes’ control lists and the various U.S. proposals submitted to the MTCR and Wassenaar. Additionally, we interviewed State, Commerce, and DOD officials to gather information on the steps that the U.S. government has taken through the regimes to work with other participants to control UAV proliferation. We also met with officials of the Wassenaar Arrangement Secretariat. We attempted to meet with MTCR officials, but were not able to due to scheduling limitations. To better understand the limitations of the multilateral regimes, we met with officials from State, Commerce, DOD, and other agencies. To assess the extent to which the United States used bilateral diplomacy to address UAV proliferation concerns, we obtained and analyzed approximately 70 demarches presented to foreign countries during the January 2005 to September 2011 timeframe that State provided to us. We also interviewed State officials knowledgeable about the demarches. We did not conduct an independent assessment to determine whether our sample contained all the UAV-related demarches that State presented to foreign countries during this timeframe. To assess the extent to which the U.S. government has coordinated its export control efforts to limit the spread of UAV technology, we obtained and analyzed fiscal years 2005 through 2010 export licensing and end-use monitoring data from Commerce and State. We also obtained DOD fiscal years 2005 through 2010 Foreign Military Sales program and end-use monitoring data. 
To assess the reliability of these various data sets, we conducted interviews with relevant agency officials, reviewed agency documentation, reviewed past GAO assessments of the databases used to produce this data, and conducted our own reviews of the data provided by the agencies. We determined that the data were sufficiently reliable for our use; however, we identified certain limitations, including with State’s licensing database in particular, which are discussed further below. We also reviewed Commerce, State, and DOD documents and reports and met with officials in Washington, D.C., involved in licensing, transfer, and end-use monitoring activities from these three agencies. We also met with agency officials from Commerce, Immigration and Customs Enforcement, Customs and Border Protection, the Federal Bureau of Investigation, and the Department of Justice responsible for enforcing export control laws and regulations. To analyze Commerce’s UAV-related export control licensing data, we identified the 3 principal export control classification numbers (ECCNs) that exclusively control UAV systems and technology, as well as 29 additional ECCNs that include technology that could be used in UAVs, but can also be used for other purposes. To identify these ECCNs, we first conducted a search of the Commerce Control List to determine which ECCNs contained the terms: “unmanned aerial vehicle,” “UAV,” “unmanned aerial system,” and “UAS.” We also reviewed Commerce documentation discussing UAV-related ECCNs. Finally, we validated the choice of these ECCNs with officials from Commerce and the Defense Technology Security Administration and made modifications to our list based upon their input. 
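The control-list keyword search described above can be sketched as a simple text scan. This is an illustrative sketch only: the sample ECCN entries and the exact matching pattern below are assumptions for demonstration, not actual Commerce Control List text or the precise method GAO used.

```python
import re

# Pattern covering the report's search terms: "unmanned aerial vehicle",
# "UAV", "unmanned aerial system", and "UAS". Word boundaries around the
# short acronyms avoid false hits inside unrelated words.
UAV_PATTERN = re.compile(
    r"unmanned aerial (vehicle|system)s?|\buavs?\b|\buas\b",
    re.IGNORECASE,
)

def find_matching_eccns(control_list):
    """Return the ECCNs whose entry text contains a UAV-related term."""
    return sorted(
        eccn for eccn, text in control_list.items() if UAV_PATTERN.search(text)
    )

sample_list = {  # hypothetical entries for illustration only
    "9A012": "Unmanned aerial vehicles (UAVs) and related equipment",
    "7A103": "Instrumentation and flight control systems usable in missiles",
}
matches = find_matching_eccns(sample_list)  # → ["9A012"]
```

A scan like this identifies candidate ECCNs; as the report notes, the resulting list still had to be validated with Commerce and the Defense Technology Security Administration, since keyword matching alone cannot distinguish UAV-specific entries from dual-purpose ones.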
We validated our list of ECCNs with Commerce and the Defense Technology Security Administration because Commerce manages the database used to track dual-use license applications and the Defense Technology Security Administration is the main agency that Commerce uses for technical assistance in conducting license reviews. We then analyzed Commerce export licensing data and quantified the number of license applications associated with each of these ECCNs during fiscal years 2005 through 2010. However, in the final report, we chose to limit our discussion to only those licenses involving the three ECCNs that are UAV-specific. We chose to do so because, through our own analysis and interviews with Commerce and Defense Technology Security Administration officials, we determined there was not a reliable way of identifying which of the more than 7,000 license applications involving the other 29 ECCNs included items that were to be used in UAVs, versus those licenses that included items to be used for other purposes, such as in manned aircraft. Because our final analysis does not include any license applications involving these 29 ECCNs, our results may not have captured some UAV-related licenses; however, we believe the results are sufficiently reliable to provide a reasonable estimate of the number of UAV-related licenses submitted to Commerce in fiscal years 2005 through 2010. State’s licensing database is organized according to U.S. Munitions List category and subcategory, and there is no specific category or subcategory for UAVs and related technology. Thus, to analyze State’s UAV-related licensing data, we obtained data for more than 7,000 license applications that State had submitted to the Missile Technology Export Control Group (MTEC) during fiscal years 2005 through 2010. While the majority of UAV licenses go before the MTEC, certain UAV-related licenses may not be captured within the data State provided, according to State officials. 
For instance, certain sensors or other types of payloads used in UAVs, but also used in other types of aircraft, might not be reviewed by the MTEC because they are not considered missile technology controlled by the MTCR, according to State officials. Additionally, the data provided by State included a significant number of licenses that were not UAV-related and instead pertained to other types of missile technology. To better identify the UAV-related licenses, we identified 34 key terms to use in filtering the data. These terms included both general terms that are commonly used to describe UAVs, such as “unmanned aerial system” and “UAS,” and also specific terms that are the names of key UAV systems that are produced in the United States and abroad, such as “Predator” and “ScanEagle.” We validated the choice of these terms with State and the Defense Technology Security Administration because State manages the database used to track U.S. Munitions List-related license applications and the Defense Technology Security Administration is the main agency that State uses for technical assistance in conducting license reviews. We used these terms to assist in separating out those licenses that were UAV-related from those that were not. Nonetheless, we found that Direct Commercial Sales (DCS) data could not identify with certainty all licenses authorizing UAVs and related components without a manual review of tens of thousands of licenses. As a consequence, we could not accurately report the magnitude of DCS arms transfer authorizations for UAVs; however, we believe the results are sufficiently reliable to provide a general estimate of the number of UAV-related licenses submitted to State in fiscal years 2005 through 2010. 
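The term-based filtering described above—combining generic UAV terms with the names of known UAV systems—can be sketched as follows. The terms and commodity descriptions shown are an invented illustrative subset (the report's actual list contained 34 terms), not GAO's data or code:

```python
import re

# Generic UAV terms, with word boundaries around short acronyms so that
# "UAS" does not match inside unrelated words.
GENERIC_PATTERN = re.compile(
    r"unmanned aerial (vehicle|system)s?|\buavs?\b|\buas\b", re.IGNORECASE
)
# Names of known UAV systems; illustrative subset only.
SYSTEM_NAMES = ["predator", "scaneagle", "global hawk"]

def is_uav_related(description):
    """Flag a license record whose commodity description mentions either
    a generic UAV term or the name of a known UAV system."""
    text = description.lower()
    return any(name in text for name in SYSTEM_NAMES) or bool(
        GENERIC_PATTERN.search(text)
    )

records = [  # hypothetical commodity descriptions
    "Spare parts for Predator B ground control station",
    "Night-vision goggles for infantry use",
    "Data link for unmanned aerial system payload",
]
uav_related = [r for r in records if is_uav_related(r)]  # flags records 1 and 3
```

As the report's caveats make clear, a filter of this kind is inherently approximate: records that describe a UAV component in other words slip through, which is why a manual review would be needed to identify all UAV-related licenses with certainty.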
To analyze State and Commerce end-use monitoring of UAV-related exports, we obtained end-use monitoring data from both agencies identifying the number, location, and type of UAV-related end-use monitoring checks conducted in fiscal years 2005 through 2010. Both State’s and Commerce’s end-use data have limitations because the agencies’ databases are not designed to provide a means of automatically identifying end-use checks that are UAV-related. As a result, both agencies developed queries using terms such as “UAV” and “unmanned aerial vehicle.” Based upon our discussions with agency officials, we believe that these queries identified the majority of relevant end-use check records, but some UAV-related checks may not have been captured in the queries. However, we determined that the agencies’ end-use monitoring data are sufficiently reliable to provide a reasonable estimate of the number and types of checks performed by the two agencies. To analyze DOD’s transfers of UAV technology via the Foreign Military Sales (FMS) program, we obtained from the Defense Security Cooperation Agency (DSCA) a breakdown of the number, country, and type of UAV technology transferred during fiscal years 2005 through 2010. To produce this data, DSCA developed a query of its 1200 system to identify relevant FMS transfers involving UAV technology. We also obtained Golden Sentry UAV-related end-use monitoring data from DSCA for the same period. To ensure the accuracy of the information contained in appendix III, we provided a copy of this appendix to Israeli government officials, who provided technical comments. We have incorporated their comments as appropriate. DOD, State, the Department of Homeland Security, the Federal Bureau of Investigation, and the Central Intelligence Agency deemed some of the information in our February 2012 report to be classified; classified information must be protected from public disclosure. 
Therefore, this report omits sensitive information about efforts by countries of concern and terrorists to obtain and use sensitive UAV technology, as well as details about the U.S. proposals that the multilateral regimes did not adopt. This report also omits sensitive information about U.S. uses of bilateral diplomacy to address UAV proliferation concerns, U.S. efforts to coordinate and use certain sensitive information as part of the licensing process, and U.S. government efforts to coordinate the enforcement of export controls on UAVs. We conducted this performance audit from October 2010 to July 2012 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Brazil is a member of MTCR, but not Wassenaar. Croatia, Estonia, Latvia, Lithuania, Mexico, Romania, Slovakia, and Slovenia are members of Wassenaar, but not MTCR. According to Israeli officials we spoke with, the changes that have occurred in Israel’s export control system since 2006 were significant because they elevated the importance of export controls. For this reason, this appendix provides additional information about these changes. According to Israeli officials we spoke with, in general, the changes are designed to encourage more interagency coordination and to facilitate enhanced enforcement of export control laws. In July 2006, Israel established a single export control agency within the Ministry of Defense, named the Defense Export Control Agency (DECA). DECA is responsible for reviewing and approving or denying applications for licenses that involve items, technologies, know-how, and services that fall under the definition of defense exports. 
According to Israeli officials and documents, in cases where a license application involves purely military items or dual-use items that are destined for a military end-user, DECA bears full responsibility, although it is required to consult with the Ministry of Foreign Affairs. In these instances, the licensing process is a two-stage process, with a marketing license preceding the export license. In cases of applications involving dual-use items destined for a civilian end-user, the Israeli Ministry of Industry, Trade, and Labor bears the responsibility, while consulting with DECA. In these cases, the licensing mechanism is a one-stage process, as it includes the issuance and granting of the export license alone. According to Israeli officials and documents, for license applications in which DECA bears full responsibility, a mechanism was established within the Ministry of Defense to coordinate the review of these licenses. This includes the establishment of advisory committees. In addition, a technical committee called the “MTCR Committee” reviews license applications involving possible technologies controlled by MTCR. That committee’s task is to determine whether an item is contained within an MTCR control list and, if so, in what category. According to Israeli officials and documents, by law, DECA is solely responsible for enforcing export control directives and regulations. Within the framework of that responsibility, DECA is often assisted by Israeli Customs, which in practice enforces most of the directives and regulations. In addition, DECA is responsible for conducting outreach to companies that export military and dual-use items, technologies, know-how, or services. According to Israeli officials that we spoke with, Israel also adopted export control legislation to control the export of both military and dual-use items, technologies, know-how, and services. 
According to Israeli documents, in July 2007, the Israeli Parliament enacted a new Defense Export Control Law, which entered into force in December of that year. This law elevated the importance of export controls in several ways, according to Israeli officials. For instance, Israeli officials stated that the law established a requirement for Israeli exporters to register before applying for any export control license and to create a new position within the company—director of export control. According to Israeli officials, the law also established periodic reporting, record-keeping, and inspection requirements; provided for new administrative penalties such as fines, suspensions, and revocations of licenses; and strengthened criminal penalties for those found violating the law. Moreover, according to Israeli documents, the Defense Export Control Law led the Ministry of Defense to establish separate lists of controlled technologies—one based on the MTCR Annex, two based on the Wassenaar munitions and dual-use lists, and a fourth dual-use list for transfers to the Palestinian Authority. The lists of controlled technologies are updated annually in two ways, according to Israeli officials. First, DECA meets with MTCR and Wassenaar Arrangement officials in outreach sessions conducted by the two regimes. The outreach sessions are designed in part to inform key countries that are not MTCR or Wassenaar Arrangement members about control list changes agreed to by member countries, according to Wassenaar Arrangement officials. In addition, DECA meets with export control counterparts from the United States, United Kingdom, Germany, and other countries, according to Israeli officials. With respect to license application approvals, the Israeli government typically imposes certain conditions, according to Israeli government and industry officials. 
For instance, DECA requires manufacturers to obtain re-export approval for all controlled components not made in Israel from the country of origin as a pre-condition for considering a license application. In addition, DECA typically imposes certain license conditions, for instance, requiring end-users to sign an end-use or end-user certificate. According to Israeli government officials, approved licenses often state that technology cannot be transferred to a third party without authorization from DECA. According to Israeli government officials and documents, with respect to license applications involving UAV technology, the Israeli government typically imposes additional licensing conditions as well. For instance, license applications must specify under which MTCR category the UAV falls, if any. According to Israeli government and industry officials, for MTCR Category I UAVs such as the Heron TP, the Israeli government has adopted a “presumption of denial” standard. In instances where authorization is eventually given to export a Category I UAV, it is limited to MTCR member countries only. In cases where authorization is granted to export an MTCR Category II UAV, these may be marketed or sold to MTCR nonmember countries only as long as they provide a declaration that they fully adhere to MTCR controls. In addition to the contact named above, the following staff made key contributions to this report: Joseph A. Christoff, Director (ret.); Jeff Phillips, Assistant Director; Lynn Cothern; Martin De Alteriis; Elias Lewine; Grace Lui; José M. Peña; and Ryan Vaughan. Mitch Karpman provided technical assistance in statistics and data analysis, Jena Sinkfield provided graphics support, and Sarah McGrath provided editorial assistance. Burns Chamberlain, Gifford Howland, Mason Calhoun, Drew Lindsey, Rachel Dunsmoor, Judith Williams, and Juan P. Avila provided additional technical assistance. 
The global use of UAVs has increased significantly over time, raising concerns about their proliferation. MTCR and Wassenaar are the multilateral regimes that address UAV proliferation. MTCR seeks to limit the proliferation of weapons of mass destruction delivery systems, while Wassenaar seeks to limit the spread of certain conventional weapons and sensitive technologies with both civilian and military uses. This report is an unclassified version of a classified report issued in February 2012. GAO was asked to address (1) global trends in the use of UAV technology, (2) U.S. national security considerations concerning UAV proliferation, (3) multilateral and bilateral tools to control UAV proliferation, and (4) coordination of U.S. efforts to limit the spread of UAV technology. To conduct this review, GAO analyzed intelligence, licensing, and end-use monitoring data, and interviewed U.S. and foreign officials. Since 2005, the number of countries that acquired an unmanned aerial vehicle (UAV) system nearly doubled from about 40 to more than 75. In addition, countries of proliferation concern developed and fielded increasingly sophisticated systems. Recent trends in new UAV capabilities, including armed and miniature UAVs, increased the number of military applications for this technology. A number of new civilian and commercial applications, such as law enforcement and environmental monitoring, are available for UAVs, but these applications are limited by regulatory restrictions on civilian airspace. The United States likely faces increasing risks as countries of concern and terrorist organizations seek to acquire UAV technology. Foreign countries’ and terrorists’ acquisition of UAVs could provide them with increased abilities to gather intelligence on and conduct attacks against U.S. interests. For instance, some foreign countries likely have already used UAVs to gather information on U.S. military activities overseas. Alternatively, the U.S. 
government has determined that selected transfers of UAV technology support its national security interests by providing allies with key capabilities and by helping retain a strong industrial base for UAV production. For instance, the United Kingdom and Italy have used UAVs purchased from the United States to collect data on Taliban activity in Afghanistan. The United States has engaged in multilateral and bilateral diplomacy to address UAV proliferation concerns. The United States principally engaged the Missile Technology Control Regime (MTCR) to address multilateral UAV proliferation concerns. Since 2005, the United States proposed certain significant changes to address how MTCR controls UAVs, but members could not reach a consensus for these changes. Also, while the Wassenaar Arrangement (Wassenaar) controls the export of some key dual-use UAV components, it does not control other dual-use technologies that are commonly used in UAVs. The Department of State (State) has also used diplomatic cables to address the proliferation of UAV-related technologies bilaterally. State provided to GAO about 70 cables that it sent from January 2005 to September 2011 addressing UAV-related concerns to about 20 governments and the MTCR. Over 75 percent of these cables focused on efforts by a small number of countries of concern to obtain UAV technology. U.S. agencies coordinate in several ways to control the spread of UAV technology, but could improve their UAV-related information sharing. For instance, an interagency group reviews many license applications to export UAV technology. However, there is not a formal mechanism to ensure that licensing agencies have relevant and timely intelligence information when making licensing decisions. Also, State’s licensing database cannot provide aggregate data on military UAV exports State has authorized, which may impair the U.S. government’s ability to oversee the release of sensitive UAV technology. 
The Department of Defense (DOD) and State each conduct end-use monitoring of some UAV exports, but differences in the agencies’ programs may result in similar types of items being subject to different levels of oversight. GAO recommends that State improve its export licensing database to better identify authorized UAV exports, that relevant agencies improve mechanisms for sharing information relevant to the export licensing process, and that State and DOD harmonize their UAV end-use monitoring approaches. The agencies generally agreed with the recommendations.
Since passage of the Elementary and Secondary Education Act of 1965 (ESEA) more than 40 years ago, the Congress has sought to improve student learning through several initiatives. Current legislation, the No Child Left Behind Act (NCLBA), builds upon previous legislation—the Improving America’s Schools Act of 1994 (IASA)—by adding provisions meant to strengthen accountability requirements for school districts and schools. For example, both IASA and NCLBA required states to measure the performance of students in reading and math. NCLBA built upon this requirement by requiring annual testing in these subjects in each of grades 3 to 8 and added requirements that children’s performance in science also be assessed. Under NCLBA’s accountability provisions, states are required to develop plans that include academic standards and establish performance goals for schools’ making adequate yearly progress (AYP) that would lead to 100 percent of their students being proficient in reading, mathematics, and science by 2014. To measure their progress, states were required to establish academic proficiency goals for making AYP and to administer an annual assessment to students in most grade levels. In addition, each school’s assessment data must be disaggregated in order to compare the achievement levels of students within certain designated groups, including low-income and minority students, students with disabilities, and those with limited English proficiency, with the state’s proficiency targets. Each of these groups must make AYP in order for the school to make AYP. In addition to proficiency targets on state assessments, states must use another academic indicator to determine AYP. For high schools, the indicator must be graduation rates. States may choose what the other academic indicator will be for elementary and middle schools. 
Title I of the ESEA, as amended and reauthorized by NCLBA, authorizes federal funds to help elementary and secondary schools establish and maintain programs that will improve the educational opportunities of economically disadvantaged children. For schools receiving Title I funds that do not achieve proficiency, a time line is required for implementing specific interventions based on the number of years the school missed AYP. If a school fails to meet AYP in reading, mathematics, or science for 2 consecutive years, districts must offer students in these schools the opportunity to transfer to a higher performing school in the district, and after the third year they must offer both school choice and supplemental educational services (SES), such as tutoring. Prior legislation—IASA—required districts to take corrective action as a final intervention for schools that repeatedly missed AYP. While IASA allowed states to determine the appropriate corrective action for their districts and schools, NCLBA is more prescriptive in defining the corrective actions districts and schools must implement. In addition, a new intervention to change the governance of schools—school restructuring—was introduced for schools that miss AYP for 5 or more years. (See table 1.) Districts are responsible for selecting and implementing the corrective actions and restructuring options for these schools contained in the law. Schools exit improvement status if they make AYP for 2 consecutive years. In prior work on implementation of NCLBA, GAO reported that the Title I schools in corrective action and restructuring status during school year 2005-2006 were more frequently located in urban school districts and a few states and served higher percentages of low-income, minority, and middle school students than other Title I schools. In its last two reauthorizations of the ESEA, the Congress has recognized the importance of arts education in public schools. 
Although the NCLBA does not include proficiency requirements for the arts, it does authorize Education to make grants for arts education. The purpose of these programs as set out in NCLBA includes helping students meet state academic achievement standards in the arts and supporting “the national effort to enable all students to demonstrate competence in the arts.” In addition, arts education is identified by NCLBA as a core academic subject. Similarly, the Congress stated in IASA that the arts express “forms of understanding and ways of knowing that are fundamentally important to education.” This finding incorporates the two prevailing perspectives on the role that arts education can play in public schools. One perspective sees arts education as having intrinsic value because of the insights into self and others that experiencing the arts can yield. A second perspective focuses on the association between arts education and development of cognitive, affective, and creative skills, including improved achievement in academic subjects such as reading and math. While NCLBA does not attempt to address these perspectives, it does affirm that arts education has a role in public schools. Education administers a number of specific programs related to arts education. Two arts education grant programs authorized by NCLBA—the Model Development and Dissemination grants program and the Professional Development for Arts Educators program—are competitive grant programs: the first provides funding for arts education research projects that integrate arts disciplines into public school curricula, strengthen arts instruction, and improve students’ academic performance, and the second provides funding for art teachers’ professional development. Total funding for these two programs was $21.1 million in fiscal year 2006, $21 million in fiscal year 2007, and $20.7 million in fiscal year 2008. 
Prior to passage of NCLBA, the National Endowment for the Arts twice collaborated with Education to determine the extent to which public schools offer arts education in the four major art forms: visual arts, music, theater, and dance. Through surveys of school principals and teachers that Education conducted in school years 1993-1994 and 1999-2000, Education found that visual arts and music were offered by 80 to 90 percent of public elementary and secondary schools, while theater and dance were offered by a smaller fraction—fewer than half. Education plans to conduct another such survey in school year 2009-2010. Education sponsored the National Assessment of Educational Progress (NAEP) arts assessment of students in the eighth grade during school year 1996-1997, which reported the frequency of arts offerings by art form and how well public school students could respond to, create, and perform works of visual art, music, and theater. Known as the NAEP 1997 Arts Report Card, the study report was issued in November 1998. The assessment found that a high percentage of eighth grade students were offered music and visual arts in the schools they attended, but that instruction in theater and dance was more limited. Students’ performance ranged from 78 percent who sang the song “America” rhythmically to 1 percent who created expressive collages. Two other studies focused primarily on NCLBA implementation but also included analyses of changes in instruction time for all subjects, including arts education. One study, reported in Choices, Changes, and Challenges: Curriculum and Instruction in the NCLB Era, sponsored by the Center on Education Policy (CEP) and issued in July 2007, asked school district officials in school year 2006-2007 whether instruction time for individual subjects, including arts education, had changed since school year 2001-2002, when NCLBA was enacted. 
The CEP study reported that 30 percent of school districts reported that instruction time for arts education in elementary schools had decreased since NCLBA was enacted. The NLS-NCLB, also sponsored by Education, collected data in school years 2004-2005 and 2006-2007 to describe major patterns in state, district, and school implementation of NCLBA’s central accountability provisions, including changes in instruction time. To address study question 1 in our report concerning changes in students’ access to arts education, if any, we analyzed the data on changes in instruction time and other school characteristics collected from elementary school teachers and principals during school year 2006-2007 by the NLS-NCLB. Education plans to undertake a new study, which is expected to build on previous research, including the NLS-NCLB study, to continue to examine NCLBA implementation issues. Among a broad range of topics the planned study likely will explore are the uses of instruction time for all academic subjects. Education expects to award a contract for the study in September 2009 and begin data collection in the 2011-2012 school year. Most elementary school teachers—90 percent—reported that instruction time for arts education stayed the same between the 2004-2005 and 2006-2007 school years. The percentage of teachers that reported that instruction time had stayed the same was similarly high across a range of school characteristics, irrespective of the schools’ percentage of low-income or minority students or of students with limited English proficiency, or the schools’ improvement status under NCLBA. However, 7 percent of the teachers reported a reduction in the time spent on arts education. Moreover, when we looked at teacher responses across a range of school characteristics, we found some significant differences in the percentages of teachers reporting that the time spent on arts education had decreased and in the average amount of time that instruction had been reduced. 
In contrast, among teachers reporting increases in instruction time for the arts, we found no differences across different types of schools. Because Education’s survey did not include questions for teachers to indicate why instruction time decreased at their school, in our analysis of Education’s data, we were unable to identify factors that might help explain some of the apparent disparities in instruction time suggested by our findings. According to Education’s data, the vast majority of elementary school teachers surveyed reported that the amount of weekly instruction time spent across all subjects, including arts education, stayed the same in the 2006-2007 school year compared with the 2004-2005 school year. Table 2 shows that about 89.8 percent of elementary school teachers reported that instruction time spent on arts education did not change between these school years, while about 3.7 percent reported the time had increased compared with about 6.6 percent that reported it had decreased. The percentage of teachers that reported increases in instruction time was higher for reading/language arts and mathematics than for other subjects, which is understandable since these were the two subjects for which the NCLBA held schools accountable for demonstrating student proficiency at that time. In contrast, the percentage of teachers that reported decreases in instruction time was higher for social studies and science than for other subjects, including arts education, even though the NCLBA required schools to begin testing student proficiency in science in the 2007-2008 school year. When we looked at teacher responses across a range of school characteristics—including percentage of low-income and minority students and students with limited English proficiency, as well as improvement status, as indicated in table 3—we found no differences across characteristics in the percentages of teachers reporting that the time spent on arts education had increased. 
However, there were some significant differences across characteristics in the percentages of teachers reporting that the time spent on arts education had decreased, as shown in table 3. Elementary school teachers at schools identified as needing improvement, those at schools with higher percentages of minority students, and those at schools with higher percentages of students with limited English speaking skills were significantly more likely to report a decrease in the amount of time spent on arts education compared with teachers at other schools. Nonetheless, the vast majority of teachers reported that instruction time stayed the same, irrespective of their schools’ percentage of low-income or minority students or students with limited English proficiency, or the schools’ improvement status under NCLBA. When we looked at the average amount of change in instruction time among teachers that reported either an increase or decrease, we found significant differences among teachers that reported a decrease. Among teachers that reported a decrease, teachers at schools with higher percentages of low-income or minority students reported significantly larger average decreases in time spent on arts education compared with teachers at other schools. (See table 4.) For example, among teachers reporting a decrease, teachers at schools with a higher percentage of low-income students reported an average decrease of 49 minutes per week in the time spent on arts education compared with an average decrease of 31 minutes reported by teachers at schools with a low percentage of these students. While these data might suggest that students at these types of schools are receiving less instruction time in arts education during the school day compared with students at other schools, we could not determine how this might affect their overall access to arts education without information on other opportunities, such as after-school programs in arts education. 
Interestingly, while teachers at elementary schools identified for improvement and those with high percentages of limited English-proficient students were more likely to report a decrease in arts education, as shown in table 3, the data on the amount of change, shown in table 4, indicate that, on average, they reported about the same amount of change in instruction time as teachers from nonidentified schools and those with lower percentages of limited English-proficient students, respectively; that is, the differences were not statistically significant. Without a more advanced analysis, it was difficult to determine which school characteristic had a stronger effect on changes in arts education instruction time. Education’s NLS-NCLB survey did not include questions for respondents to identify the reasons instruction time may have changed, which might help explain some of the apparent disparities in instruction time suggested by our analysis of Education’s data. Although Education’s survey asked questions regarding whether schools had implemented any of a variety of NCLBA-defined interventions, such as extending the school day or adopting a new curriculum program, it did not specifically ask respondents to identify the reasons for any change in the amount of instruction time they reported for the respective subjects. According to our survey of state arts officials, since passage of NCLBA, basic state requirements for arts education in schools, such as the number of hours a week that the arts must be taught, have remained virtually unchanged, and more states have established funding for some type of arts education, such as providing grants to schools to promote arts education. However, while some states have increased funding, other states have reduced funding since NCLBA’s passage. Arts officials attributed changes in funding to state budget changes to a greater extent than to NCLBA or other factors.
By school year 2001-2002, the year NCLBA was enacted, most states had taken steps to establish arts education in their public school systems by developing basic arts education requirements, such as the number of hours a week that the arts must be taught or the number of courses that must be taken. As shown in table 5, of the 45 states that responded to our survey, 34 states had established the basic requirement that arts education be taught, and 28 states had included arts education as a high school graduation requirement by that school year. By school year 2006-2007, as shown in the third column of table 5, most of these states had retained these requirements. In addition, 3 more states had established basic arts education requirements, and 5 more states had included arts education as a high school graduation requirement by that school year. As table 5 also shows, a number of states did not have any requirements for arts education in place by the time NCLBA was passed. Specifically, 7 states had no basic requirement that arts education be taught, and 11 states had not included arts education as a high school graduation requirement by school year 2001-2002. State-by-state breakouts are provided in appendix III. Many states had also provided funding to promote arts education in public schools and, as shown in the third column of table 6, most of the funding was still in place 5 years later, in school year 2006-2007. In addition, the number of states with arts education grants, training funding, and state-established schools for the arts increased in school year 2006-2007. State arts officials identified multiple sources of funding for arts education, including the state education agency, the state cultural agency, private foundations, the federal government, and other organizations, as shown in table 7.
Of the 45 arts officials who responded to the survey, more identified the state cultural agency as a funding source than any other organization, including the state education agency. While the number of states that had basic requirements for arts education remained nearly unchanged and most states maintained their arts education funding, levels of funding changed, with some states reporting decreases and others reporting increases. For example, of the 32 states that awarded arts education grants in both years, funding decreased in 12 states, increased in 5 states, and stayed about the same in 8 states, as shown in table 8. According to our survey, state arts officials attributed changes in funding for state arts education to state budget changes to a greater extent than to NCLBA or other factors. For example, of the states that provided arts education grants in both school years 2001-2002 and 2006-2007, 11 arts officials attributed changes in funding to state budget changes, and 18 reported that shifting funds to meet NCLBA needs had little or nothing to do with the funding changes. Table 9 shows the extent to which the arts officials attributed changes in funding to state budget changes, state policy changes, shifting funds to meet NCLBA needs, and other factors for each of the four types of state arts education funding. District officials and school principals have used several strategies to provide arts education, including varying when the arts are offered, seeking funding and collaborative arrangements in the arts community, and integrating the arts into other subjects; however, some struggled with decreased budgets and competing demands on instruction time, according to officials we interviewed. Faced with decreased funding or increased demands on instruction time, some principals told us that they had to make trade-offs. School principals we met with had found several ways to maintain arts education, including varying when the arts are offered.
More than half of the 19 schools we visited offered some form of arts education outside of the regular school day. In a few schools, after school classes were the only arts education opportunity available to students. At one middle school in Boston that had not met AYP in school year 2006-2007, the principal had eliminated arts education classes during the school day and purchased an after-school arts program in drama and music production from an outside organization. The program is open to all students, but participation in the program is offered on a first-come-first-served basis. In contrast, one New York City middle school, which was not meeting AYP in English and language arts in school year 2007-2008, changed when other classes were offered, rather than changing when arts education was offered. This school extended the school day for students who required additional help by adding a period to the school schedule four times a week. The principal told us that this allowed all students to attend art class held during the regular school day. While many schools experienced changes to their arts programs, several of the schools we visited reported no changes in their arts education offerings. For example, the principal of the high school we visited in the Waltham school district, near Boston, which met AYP, said that the school had experienced a stable budget for the past 10 years and had made no changes to its arts education policies. The principal of a large high school in Chicago, which has not met AYP for 4 years, also said that the school had not changed its arts education policies. He explained that because the school’s budget is determined by the enrollment level, his school had the resources to offer students arts education opportunities that smaller Chicago schools could not. 
Several of the schools we visited also reported receiving grants and private funding and establishing collaborative relationships with organizations in the arts community that supplemented the arts education classes funded by general revenues. For example, one elementary school in Boston has developed partnerships with several companies, including a bank, that fund the school’s instrumental music program. This elementary school also has obtained a grant from a television station to pay for instruments and participates in a city-funded program that sends seven selected students to the Boston Ballet once a week for lessons. A Chicago high school received a private grant that supported a student art project to do a mosaic on the walls outside the music rooms at the school. The principal of this high school also said that he has informal arrangements with local artists to bring special projects to the school, such as the group that visited the school to teach a belly dancing class. A high school in Miami set up internships for its students at local music stores and solicited a donation of used equipment from the local news station when it moved to a new facility. The drama teacher also solicits donations of costumes for school dramatic productions. In Broward County, Florida, the school district provides funds each year to pay for the cost of transporting the school district’s students to performances at the Broward Center for the Performing Arts (Center). A New York City junior high school receives support for students to attend plays from a private program and sends the school’s theater group to perform at Lincoln Center every year. A senior high school in the city has arranged music programs with Carnegie Hall, a local orchestra, and the Juilliard School of Music. The Museum of Modern Art and the Metropolitan Museum of Art also cover the students’ cost of admission for exhibits and performances. 
Arts organization officials in Chicago, Miami, and Broward County, Florida, described the arts integration model of arts education as a strategy for maintaining the arts in school curricula and provided examples of arts integration programs in schools we did not visit. In Chicago, the Chicago Arts Partnerships in Education, a nonprofit arts education advocacy organization, is participating as a partner in a project that supports arts integration in the 55 fine and performing arts schools operated under Chicago Public Schools’ (CPS) magnet school cluster program. The project, funded by Education’s Model Development and Dissemination grant program, funds teaching artists who work with art teachers and regular classroom teachers to incorporate the arts into teaching academic subjects. In Miami, Arts for Learning, a nonprofit that promotes arts integration through in-school and after-school programs, operates “GET smART,” a yearlong professional development program that provides interdisciplinary training to teachers on how to effectively create and implement arts integration projects in the core academic subjects. About 18 Miami-Dade schools participated in this program in school year 2007-2008. Arts for Learning also offers “Early GET smART,” a program that works with preschoolers aged 2 to 6 to provide an arts-based learning approach to literacy and school readiness. The Broward County Cultural Division, a publicly funded agency established by the Board of County Commissioners, promotes arts integration in the local schools. One initiative provides a block grant to the school board to implement artist-in-residence programs and arts integration workshops in individual schools. Officials representing the division said that schools are increasing use of the arts to teach lessons in academic subject areas.
For example, as his class learned about a particular country, a social studies teacher would play music from that country to expose the students to different musical styles from around the world. The teacher was also working with an artist to develop a visual presentation that could be incorporated into the lesson. In addition, the Ft. Lauderdale Children’s Theater goes into schools and performs dramatic readings of plays with the children acting out the roles as part of their classroom reading lessons. Officials we met with told us that the main challenges to providing arts education have been decreased state or local funding and competing demands on instruction time due to requirements established by the state education agency or school district to meet NCLBA proficiency standards, such as doubling the amount of time low-performing students spend on reading and math. District officials and school principals in the Boston, Chicago, Miami- Dade, and New York City school districts all reported that state or local budget cuts created a challenge for arts education in the schools. The Boston school district expects an $11 million budget shortfall for the upcoming school year, a result of a declining population base. School district officials expect this shortfall to lead to a loss of 10 arts teachers across the school district. District officials and school principals in Chicago attributed funding shortages for arts education to the school district’s arts personnel funding policy. The Chicago school district funds personnel positions on the basis of student enrollment and supports one half-time position for an arts teacher in primary schools with fewer than 750 students. To employ a full-time arts teacher on the staff, a school principal must supplement the arts teacher’s salary from discretionary funds. Officials in both Florida school districts we visited reported budget pressures due to a state budget shortfall, but the consequences for arts education differed. 
Miami-Dade school district officials reported cuts in the district’s arts education budget of as much as 70 percent, resulting in staff cuts. In Broward County, while acknowledging budget pressures, school district officials reported that the arts have not been cut. They said that the district had taken steps several years ago to prepare for this possible economic downturn. However, if cuts in content area programs are necessary, the district makes an across-the-board percentage cut in the budget allocated to each school rather than targeting individual subjects for reduction. New York City school district officials reported that a line item in the school district budget that provided schools a per capita allotment solely to support arts education was eliminated in 2007, and funds were incorporated into the school’s general fund. This change allowed school principals to allocate the funds to the arts or other subjects. In addition to state and local budget cuts, district officials and school principals in the Boston, Chicago, Miami-Dade, and New York City school districts also agreed that competing demands on instruction time were a major challenge for providing arts education in their schools. These officials also identified NCLBA’s proficiency standards—as well as requirements established by the state and school district to meet NCLBA proficiency standards—as a key source of the time pressure. Boston school district officials said that it is difficult to convince principals of the importance of continuing to provide arts education when it is not a tested subject. They said that the arts curriculum takes a back seat because school success is based on student performance on their state tests as required under NCLBA. 
Although they tried to avoid pulling students out of arts education classes for remedial work, one elementary and one high school principal interviewed in Boston, whose schools were not meeting AYP, agreed that NCLBA’s testing requirements had increased the demands on instruction time for tested subjects and reduced time available for the arts, at least for students not meeting proficiency requirements. A Waltham school district official said that to meet the state and federal proficiency standards, the district added workshops in math, reading, and science, which led to cuts in arts staff and even the elimination of arts field trips because they reduce the amount of available class time. She added that, 2 years ago, the district added a two-block period twice a week to keep up with state proficiency standards. This resulted in the loss of one full-time equivalent (FTE) arts teacher. A Chicago school district official affirmed that the priorities principals set for meeting AYP in reading and math affect the time available for the arts. In Florida, where the state requires that students who perform at the lowest two of five levels on the state NCLBA proficiency tests be placed in intensive classes for language arts and math, district officials agreed that time for arts education might be affected. In Broward County, officials said that the district follows the state policy that requires mandatory pull-out sessions for students performing at reading levels 1 and 2 on the state performance assessments. In some cases, the district will require some students to be pulled out for additional intensive instruction in math. These “pull-out” students receive double periods of reading or other intensive instruction that reduces the number of periods they have available to take elective classes, such as art or music.
A New York City school district official acknowledged that schools not meeting AYP faced challenges in providing arts education but said that the responsibility for meeting instructional requirements was the school principal’s. Principals in the elementary and middle schools we visited in New York, two of which were not meeting AYP, said they had taken steps to meet the time demands of NCLBA’s testing requirements. The high school principal said that students not meeting proficiency requirements could attend their remedial classes and still meet the arts course requirement for graduation, but that they may not have an opportunity to take courses above the minimum credit requirement. This high school was not meeting AYP in school year 2007-2008. District officials and school principals told us that when they faced decreased budgets or increased demands on instruction time, trade-offs had to be made, and school principals made the decision. Principals’ decisions differed, however. Some principals chose not to spend their limited discretionary funds on arts education, while other principals, even when their school had been identified as needing improvement several times, maintained their arts offerings. For example, one school principal in a Chicago elementary school chose to spend discretionary budget funds on special reading and math programs needed to improve students’ performance rather than supplement half the salary of a full-time arts teacher. On the other hand, one Miami-Dade high school principal had allocated Title I funds to help retain and rebuild the school’s arts education program as part of its NCLBA restructuring plan. New York City officials said that a new accountability system the school district had developed in part because of NCLBA, but also to evaluate progress toward meeting city instructional requirements, increased the discretionary authority vested in school principals.
The district also developed an accountability initiative called ArtsCount. For this initiative, district arts officials developed measures to be incorporated in the district’s evaluation of school performance and the quality of arts offerings. This information will be used to influence the scores that are incorporated into each school principal’s report card. For middle and high schools, the results are incorporated into the measure of graduation requirements. Under the accountability system and this initiative, school principals are given greater authority to make trade-offs, such as the discretion to allocate funds formerly restricted to expenditures for the arts to other subjects, but the school district monitors the results of their decisions. While some studies that have examined the association between arts education and students’ academic achievement have found a small positive association with student outcomes, others have found none. One meta-analysis that combined the results of several studies found small positive relationships. This study included two separate analyses: one that looked at the association between music instruction and math scores, and another that looked at the association between listening to music and math scores. The first analysis of six studies found that learning to play music had a small positive relationship with both standardized and researcher-designed achievement test scores in mathematics, regardless of whether or not the child learned to read music. Music instruction in these studies included both instrumental and vocal performance for durations of at least 4 months and up to 2 years, and included children at the preschool through elementary level. The second analysis, which included 15 studies, determined that there was a small positive relationship with math test scores when children listened to certain types of music while attempting to solve math problems. 
In contrast, another meta-analysis found no association with students’ achievement. This analysis, which looked at 24 studies examining reading outcomes and 15 studies examining math outcomes, found no association between arts education and standardized reading or math test scores, regardless of the child’s background or academic history. The students included in the studies had a wide range of academic abilities and came from a wide range of backgrounds. For example, some of the studies included academically at- risk students and students from lower-income families, while some of the studies included “academically gifted” students and students from higher- income families. The studies also included children of a variety of ages and several different types of arts instruction, including music, visual arts, drama, and dance. Moreover, some research has focused on special populations, such as students from low-income families; however, most of these studies did not meet GAO’s criteria for methodological quality, and their findings are questionable. Similarly, studies that examined the association between arts education and abilities associated with academic performance also were mixed. For example, two of the three analyses from one meta-analysis looking at the association between music education and certain spatial abilities found a positive relationship. One analysis, which was made up of 15 studies, and another that analyzed 8 studies, found that music education was associated with student performance on a wide range of spatial tasks. However, the third analysis, which included 5 studies, found no association between music education and one measure of spatial performance. In these studies, enhanced spatial performance referred to the ability to mentally recognize and manipulate patterns that fall into a certain logical order and are usually used in subjects such as music, geometry, and engineering. 
An example of spatial ability in a music course would be the ability to produce a piece of music based on memory alone, mentally anticipating the changes needed to play a certain piece of music. A complete list of the studies assessed is included in appendix IV. Amid concerns about possible elimination of arts education, the national picture indicates that the vast majority of schools have found a way to preserve their arts education programs. However, a somewhat different story emerges for some schools identified as needing improvement under NCLBA, which include higher percentages of low-income and minority students. Among teachers reporting a decrease in instruction time for arts education, our study found that those at schools identified as needing improvement and those with higher percentages of minority students were more likely to report such reductions. While school officials in our site visit states told us that requirements established by the state and school district to meet NCLBA proficiency standards placed competing demands on instruction time for arts education, the reasons for the differences in instruction time our statistical analysis identified are difficult to establish nationally, given current limitations in Education’s NLS-NCLB longitudinal data. Having national-level information about the reasons for these differences could add to the current body of research on arts education and help guide school decisions with respect to arts education. To help identify factors that may contribute to changes in access to arts education for certain student subgroups, we recommend that the Secretary of Education require that the department’s planned study of NCLBA implementation include questions in its surveys asking survey respondents to describe the reasons for any changes in instruction time they report.
Once the information has been collected and analyzed, Education could disseminate it to school districts and schools to help them identify and develop strategies to address any disparities in access. We provided a draft of the report to the Department of Education for review and comment. Education generally agreed with our findings and stated that it is cause for concern that, among the small percentage of teachers reporting a decrease in arts education instruction time, teachers in schools identified for improvement and those with high percentages of minority students were more likely to report reductions in time for arts education. Regarding our recommendation, Education agreed that further study would be useful to help explain why arts education instruction time decreased for some students. Education said that it will carefully consider our recommendation that the department’s planned study of NCLBA implementation include questions in its surveys asking respondents to describe the reasons for any changes in instruction time they report. Education also provided technical comments, which have been incorporated in the report as appropriate. Education’s comments appear in appendix V. We are sending copies of this report to the Secretary of Education, relevant congressional committees, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or ashbyc@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.
This appendix discusses in more detail our methodology for examining any changes in students’ access to arts education in public elementary and secondary schools that may have taken place since passage of the No Child Left Behind Act (NCLBA) and what is known about the effect of arts education on student academic performance. The study was framed around four questions: (1) has the amount of instruction time for arts education changed and, if so, have certain groups been more affected than others, (2) to what extent have state education agencies’ requirements and funding for arts education changed since NCLBA, (3) what are school officials in selected districts doing to provide arts education since NCLBA and what challenges do they face in doing so, and (4) what is known about the effect of arts education in improving student outcomes? As the Department of Education (Education), working in collaboration with the National Endowment for the Arts, determined first in school year 1993-1994 and again in school year 1999-2000, arts education in some form is provided in the vast majority of public schools nationwide. Questions about changes in access thus need to be considered for the national population of public schools. However, because we recognized that states’ and school districts’ roles in school governance, funding, and implementation of NCLBA introduce variation in time devoted to individual subjects, including arts education, we determined that an in-depth look at state, district, and school policies and practices also was needed to help understand any systematic changes in instruction time for arts education that a national-level analysis might identify.
Therefore, to examine any changes in students’ access to arts education in public elementary and secondary schools that may have taken place since passage of NCLBA, we focused on time devoted to instruction in arts education and other subjects and any changes that occurred in a nationally representative sample of elementary schools. We also reviewed state arts education requirements and funding related to students’ access to arts education and steps that school districts and schools in selected states had taken to provide arts education in the post-NCLBA environment. To determine what is known about the effect of arts education on student academic achievement and other outcomes, we reviewed and methodologically assessed existing research on arts education. We used separate sources of data for each study question, including nationally representative survey data collected by the Department of Education’s (Education) National Longitudinal Study of No Child Left Behind (NLS-NCLB), which collected data on changes in instruction time by subject; a GAO survey of state arts education officials; on-site interviews with school district, school, and arts organization officials in selected states; and existing studies of the effect of arts education on student outcomes that met GAO’s methodological criteria. Before deciding to use the NLS-NCLB data, we conducted a data reliability assessment. We discuss our assessment procedures and steps we took to mitigate any data limitations below, as part of the methodology for analyzing changes in instruction time. We provided specifications to Education for descriptive analyses of the NLS-NCLB data, and we conducted a descriptive analysis of our state survey data, a synthesis of our site visit data, and a methodological assessment of existing research on arts education. 
Because we were not able to obtain raw data files from Education to do a comprehensive analysis of the data ourselves, we asked Education to provide us with summary information from the Survey of Teachers component of the school year 2006-2007 NLS-NCLB. These data are from a nationally representative survey of teachers, as well as of schools and school districts. We requested tables that showed (1) the average (mean) amount of time that teachers reported devoting to arts education each week in 2006-2007; (2) the percentage of teachers that reported that the amount of time spent on arts education had increased, decreased, and remained the same over the past 2 years; and (3) for those teachers who reported a change, the average increase or decrease (in minutes per week) that was devoted to arts education. We obtained these estimates from Education for teachers in all schools, and separately for teachers in different categories of schools, defined by the percentages of students in the schools that were (1) minorities, (2) African-Americans, (3) Hispanics, (4) eligible for free/reduced lunches, and (5) in individualized education programs. We also compared the reports from teachers in schools that were (6) urban or rural and (7) identified or not identified as being in need of improvement. We obtained from Education the standard errors associated with the estimates from the different types of schools and thus were able to test the statistical significance of the differences between what teachers from different types of schools reported. Before deciding to use the data, we reviewed guidance on the variable definitions and measures provided, documentation of the survey and sampling methodology used, and the data collection and analysis efforts conducted. We also interviewed Education officials about the measures they and their contractors took to ensure data reliability.
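Testing whether two group estimates differ, given each estimate and its standard error, is conventionally done with a two-sample z-test. As an illustrative sketch only (not part of GAO's analysis): the 49- and 31-minute average decreases echo the figures reported earlier in this report, but the standard errors below are hypothetical.

```python
import math

def z_test_difference(mean_a, se_a, mean_b, se_b):
    """Two-sample z-test for the difference between two survey estimates,
    given each estimate's standard error (independent groups assumed)."""
    diff = mean_a - mean_b
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)  # SE of the difference
    z = diff / se_diff
    # Two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return diff, z, p

# Hypothetical example: average weekly decrease in arts instruction minutes
# reported by teachers at high- vs. low-poverty schools (SEs are invented)
diff, z, p = z_test_difference(49, 5.0, 31, 4.0)
print(f"difference = {diff} minutes, z = {z:.2f}, p = {p:.4f}")
```

If the resulting p-value falls below the chosen significance level (commonly 0.05), the difference between the two groups' reports is deemed statistically significant.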
We assessed the reliability of the NLS-NCLB data by (1) reviewing existing information and documentation about the data and the system that produced them and (2) interviewing agency officials knowledgeable about the data. On the basis of our efforts to determine the reliability of the estimates for which supporting information was provided, which included verifying calculations, we believe that they are sufficiently reliable for the purposes of this report. We designed and implemented a Web-based survey to gather information on states’ role in shaping the provision of arts education in public schools and changes that may have occurred since NCLBA. Our survey population consisted of state arts officials in 49 states and the District of Columbia. We identified these arts officials through searches of the Arts Education Partnership Web site and verified the contact information provided through e-mails and phone contacts. To develop survey questions, we reviewed existing studies on arts education and the state arts education policy databases on the Web sites of the Education Commission of the States and the Arts Education Partnership. We also conducted interviews with representatives of these organizations. In addition, we interviewed the Arts Education Director and Research Director of the National Endowment for the Arts (NEA) to develop an understanding of federal and state roles in arts education in public schools and of the alternative funding sources for arts education that are available to schools. Finally, we conducted pretests of various drafts of our questionnaire with arts education officials in seven states to ensure that the questions were clear, the terms used were precise, the questions were unbiased, and that the questionnaire could be completed in a reasonable amount of time. We modified the questionnaire to incorporate findings from the pretests. The survey was conducted using self-administered electronic questionnaires posted on the World Wide Web.
In the questionnaire, we asked the state arts official to be the lead survey respondent and, if necessary, to confer with other representatives of state departments of education, state arts commissions, and state cultural agencies to answer questions requiring more detailed knowledge. We sent e-mail notifications to these officials beginning on April 22, 2008. To encourage them to respond, we sent two follow-up e-mails over a period of about 3 weeks. For those who still did not respond, GAO staff made phone calls to encourage the state officials to complete our questionnaire. We closed the survey on July 2, 2008. Forty-five state officials completed the survey. Because this was not a sample survey, there are no sampling errors; however, the practical difficulties of conducting any survey may introduce errors. For example, difficulties in how a particular question is interpreted, in the sources of information available to respondents, or in how the data are entered into the database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of this questionnaire, in the data collection, and in the data analysis to minimize such error. For example, a social science survey specialist designed the questionnaires in collaboration with GAO staff with subject matter expertise. Then, as noted earlier, the draft questionnaire was pretested in seven states to ensure that questions were relevant, clearly stated, and easy to comprehend. The questionnaire was also reviewed by an additional GAO survey specialist. Data analysis was conducted by a GAO data analyst working directly with the GAO staff with subject matter expertise. When the data were analyzed, a second independent data analyst checked all computer programs for accuracy. Since this was a Web-based survey, respondents entered their answers directly into the electronic questionnaires.
This eliminated the need to have the data keyed into databases, thus removing an additional source of error. To obtain information about what school officials are doing to provide arts education since NCLBA and the challenges, if any, they face in doing so, we visited school districts and schools in four states—Illinois, Massachusetts, Florida, and New York. Because other studies of NCLBA implementation had shown that schools not meeting AYP were difficult to recruit for site visits, we identified states for our visits with large numbers of schools that were not meeting AYP in school year 2006-2007 to ensure that a sufficient number of schools would be selected. Within each state, we selected school districts and schools that represented variation in income level of the school district, schools’ performance under NCLBA, and schools’ location as indicated in table 10. Within each state, we visited two school districts and 4 to 6 schools, for a total of eight school districts and 19 schools. We interviewed officials responsible for the arts education curriculum in each school district and school principals and, at the principal’s discretion, art teachers in elementary, middle, and high schools. We also visited and interviewed officials representing local arts organizations that had undertaken arts education initiatives in the public schools. Recruiting low-income school districts and schools for this study was especially challenging. For example, one district we initially selected to include in our study was in California, the state with the largest number of schools identified as needing improvement in school year 2006-2007. Officials representing that school district said that the district had placed a moratorium on all research in the district’s schools. In other California school districts, we experienced long delays in receiving a response from both district and school officials to requests for initial or follow-up interviews.
We ultimately decided to recruit school districts and schools in other states. For the site visits, we developed structured interviews with a standard set of questions for school district and school officials, including the following topics: art forms included in the schools’ arts education classes; daily or weekly schedule for all subjects, including arts education; changes in instruction time for all subjects, including arts education, occurring in the past school year and recent years; changes in students’ access to arts education in the schools; challenges faced in providing arts education in the schools; and funding sources for arts education and how budget cuts are implemented when resource reductions occur. Our questions for arts organization officials asked them to describe their arts education initiatives in the local schools, what resources they contributed, if any, to arts education in the schools, and their perception of public school students’ access to arts education and the challenges school districts and schools face in providing arts education. To analyze the site visit data, we created matrices to summarize key findings from interviews with school district, school, and arts organization officials on changes in instruction time, changes in students’ access to arts education, challenges faced, and experience with changes in funding. To determine what existing research says about the effects of arts education on student outcomes, we used several search strategies. To identify existing studies, we conducted searches of several automated databases, including the Education Resources Information Center (ERIC), ProQuest, and Nexis. We also interviewed individuals familiar with available research, including the Research Director of the NEA and the former Director of the Arts Education Partnership (AEP). From these sources, we identified over 1,000 studies that were screened for relevance for our study.
Using information about these studies that was readily available, we screened them using the following criteria: published during or after 1998, based on research subjects within the United States, published in a peer-reviewed journal, and employing an experimental or quasi-experimental design. We selected the studies for our review based on their methodological strength and not on the generalizability of the results. Although the findings of the studies we identified are not representative of the findings of all studies of arts education programs, the studies consist of those published studies we could identify that used the strongest designs—experimental or quasi-experimental—to assess the effects of arts education. At the end of this screening process, 32 studies on the effects of arts education on student outcomes remained. We performed our searches for research and research evaluations between August 2007 and April 2008. To assess the methodological quality of the 32 selected studies, we developed a data collection instrument to obtain information systematically about each study being evaluated and about the features of the evaluation methodology. We based our data collection and assessments on generally accepted social science standards. We examined such factors as whether evaluation data were collected before and after arts education implementation; how arts education effects were isolated, including the use of nonarts participant comparison groups or statistical controls; and the appropriateness of sampling, outcome measures, statistical analyses, and any reported results. A senior social scientist with training and experience in evaluation research and methodology read and coded the documentation for each evaluation. A second senior social scientist reviewed each completed data collection instrument and the relevant documentation for the outcome evaluation to verify the accuracy of every coded item.
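The screening step described above amounts to applying a few yes/no criteria to each candidate study. A minimal sketch of that filter follows; the field names and sample records are hypothetical illustrations, not GAO's actual data collection instrument:

```python
# Hypothetical sketch of the study-screening criteria described above.
# Field names and the sample records are illustrative assumptions.
def passes_screen(study: dict) -> bool:
    """Apply the four screening criteria used to narrow the candidate pool."""
    return (
        study["year"] >= 1998                       # published during or after 1998
        and study["us_subjects"]                    # subjects within the United States
        and study["peer_reviewed"]                  # published in a peer-reviewed journal
        and study["design"] in ("experimental", "quasi-experimental")
    )

candidates = [
    {"year": 2001, "us_subjects": True,  "peer_reviewed": True,  "design": "experimental"},
    {"year": 1995, "us_subjects": True,  "peer_reviewed": True,  "design": "experimental"},
    {"year": 2004, "us_subjects": True,  "peer_reviewed": False, "design": "quasi-experimental"},
    {"year": 2003, "us_subjects": False, "peer_reviewed": True,  "design": "quasi-experimental"},
]
selected = [s for s in candidates if passes_screen(s)]  # only the first record passes
```

The sketch simply makes explicit that a study had to satisfy all four criteria jointly to survive the screen, which reduced the pool of over 1,000 identified studies to 32.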
This review identified 7 of the 32 selected studies that met GAO’s criteria for methodological quality.

Appendix II: Average Amount of Instruction Time Elementary School Teachers Reported Spending per Week (in hours)

(Blank cell = either “don’t know” or “no response”)

The findings of the reviewed meta-analyses and studies were as follows:

- Two meta-analyses: analysis 1 found no support for a causal relationship between arts study and verbal creativity; the second analysis found some equivocal support for a causal relationship between arts study and figural creativity.
- Results varied and showed an extremely small positive overall association between the study of music and reading/verbal scores.
- Three meta-analyses: two of the analyses showed a positive relationship between music instruction and spatial-temporal tasks; the third analysis showed no relationship between music and a nonspatial task.
- Two meta-analyses: analysis 1 found a significant and robust relationship between listening to music and performance on all types of spatial tasks; analysis 2 also found a significant, robust effect of music listening on spatial-temporal tasks.
- Quasi-experimental studies showed that background music has a very minimal effect on math scores; experimental instruction showed a small association between music instruction and math skills.
- Analysis 1 did not demonstrate a reliable relationship between arts instruction and reading improvement; analysis 2 found a positive, moderately sized relationship between reading improvement and an integrated arts-reading form of instruction.
- One study showed no evidence for any educationally significant impact of arts on achievement (both verbal and math outcomes).

Sherri Doughty, Assistant Director; Sara Edmondson, Analyst-in-Charge; Michael Meleady; Michael Morris; Douglas Sloane; Luann Moy; Stuart Kaufman; Justin Fisher; Rebecca Rose; Michele Fejfar; Amanda Miller; Susannah Compton; and James Rebbe made significant contributions to this report.
Under the No Child Left Behind Act (NCLBA), districts and schools must demonstrate adequate yearly progress (AYP) for all students. Because schools may spend more time improving students' academic skills to meet NCLBA's requirements, some are concerned that arts education might be cut back. To determine how, if at all, student access to arts education has changed since NCLBA, the Congress asked: (1) has the amount of instruction time for arts education changed and, if so, have certain groups been more affected than others, (2) to what extent have state education agencies' requirements and funding for arts education changed since NCLBA, (3) what are school officials in selected districts doing to provide arts education since NCLBA and what challenges do they face in doing so, and (4) what is known about the effect of arts education in improving student outcomes? GAO analyzed data from the U.S. Department of Education (Education), surveyed 50 state arts officials, interviewed officials in 8 school districts and 19 schools, and reviewed existing research. According to data from Education's national survey, most elementary school teachers--about 90 percent--reported that instruction time for arts education stayed the same between school years 2004-2005 and 2006-2007. The percentage of teachers that reported that instruction time had stayed the same was similarly high across a range of school characteristics, irrespective of the schools' percentage of low-income or minority students or of students with limited English proficiency, or the schools' improvement under NCLBA. Moreover, about 4 percent of teachers reported an increase. However, about 7 percent reported a decrease, and GAO identified statistically significant differences across school characteristics in the percentage of teachers reporting that the time spent on arts education had decreased.
Teachers at schools identified as needing improvement and those with higher percentages of minority students were more likely to report a reduction in time spent on the arts. Because Education's survey did not include questions about why instruction time changed, GAO was not able to determine the reasons for the disparities its analysis identified. A new study of NCLBA implementation that Education plans to undertake may collect information on the uses of instruction time, among other topics. However, Education has not yet determined if it will collect information on the reasons instruction time changed for certain groups. While basic state requirements for arts education in schools have remained unchanged in most states, state funding levels for arts education increased in some states and decreased in others, according to GAO's survey of state arts officials. Arts education officials attributed the funding changes to state budget changes to a greater extent than they did to NCLBA or other factors. School principals have used several strategies to provide arts education; however, some struggled with decreased budgets and competing demands on instruction time, according to those GAO interviewed. Strategies for maintaining arts education include seeking funding and collaborative arrangements in the arts community. Competing demands on instruction time were due to state education agency or school district actions taken to meet NCLBA proficiency standards. Overall, research on the effect of arts education on student outcomes is inconclusive. Some studies that examined the effect of arts education on students' reading and math achievement found a small positive effect, but others found none.
During the past several years, we have visited over 75 VA hospitals and outpatient clinics to assess operating policies, procedures, and practices. These efforts have resulted in a wide range of recommended actions to improve the efficiency and effectiveness of the VA system. Some of these actions involve ways to restructure existing delivery processes to lower costs; others identify ways to recover more of the costs of health care provided to veterans and others. This report is based primarily on the results of these efforts as well as studies by the Veterans Health Administration (VHA), VA’s Office of Inspector General (IG), and others. We initially presented the results of this work in testimony before your Subcommittee on March 8, 1996. The VA health care system was established in 1930, primarily to provide for the rehabilitation and continuing care of veterans injured during wartime service. VA developed its health care system as a direct delivery system in which the government owned and operated its own health care facilities. It grew into the nation’s largest direct delivery system. Veterans’ health care benefits include medically necessary hospital and nursing home care and some outpatient care. Certain veterans, however, have a higher priority for receiving care and are eligible for a wider range of services. Such veterans are generally referred to as Category A, or mandatory care category, veterans. More specifically, VA must provide hospital care, and, if space and resources are available, may provide nursing home care to certain veterans with injuries related to their service or whose incomes are below specified levels. 
These mandatory care veterans include those who have service-connected disabilities; were discharged from the military for disabilities that were incurred or aggravated in the line of duty; are former prisoners of war; were exposed to certain toxic substances or ionizing radiation; served during the Mexican Border Period or World War I; receive disability compensation; receive nonservice-connected disability pension benefits; or have incomes below the means test threshold (as of January 1995, $20,469 for a single veteran or $24,565 for a veteran with one dependent, plus $1,368 for each additional dependent). For veterans with higher incomes who do not qualify under these conditions—called discretionary care category veterans—VA may provide hospital care if space and resources are available. These veterans, however, must pay a part of the cost of the care they receive. VA also provides three basic levels of outpatient care benefits: comprehensive care, which includes all services needed to treat any condition; service-connected care, which is limited to treating conditions related to a service-connected disability; and hospital-related care, which provides only the outpatient services needed to (1) prepare for a hospital admission, (2) obviate the need for a hospital admission, or (3) complete treatment begun during a hospital stay. Separate mandatory and discretionary care categories apply to outpatient care. Only veterans with service-connected disabilities rated at 50 percent or higher (about 465,000 veterans) are in the mandatory care category for comprehensive outpatient care. All veterans with service-connected disabilities are in the mandatory care category for treatments related to their disabilities; they are also eligible for hospital-related care of nonservice-connected conditions, but, with the exception of veterans with disabilities rated at 30 or 40 percent, they are in the discretionary care category.
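The means-test figures cited above imply a simple income threshold that varies with the number of dependents. A sketch using the January 1995 dollar amounts from the report follows; the function itself is an illustrative assumption, not VA's actual eligibility logic:

```python
# Illustrative sketch of the January 1995 means-test threshold described
# above; the function name and structure are assumptions, not VA code.
def means_test_threshold(dependents: int) -> int:
    """Income threshold (in dollars) below which a veteran with no other
    qualifying condition fell into the mandatory care category."""
    if dependents == 0:
        return 20_469                            # single veteran
    # $24,565 with one dependent, plus $1,368 for each additional dependent
    return 24_565 + 1_368 * (dependents - 1)

threshold = means_test_threshold(3)              # veteran with three dependents: $27,301
```

For example, a veteran with a spouse and two children (three dependents) would have fallen into the mandatory care category with an income below $27,301.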
Most veterans with no service-connected disabilities are eligible only for hospital-related outpatient care and, with few exceptions, are in the discretionary care category. From its roots as a system to treat war injuries, VA health care has increasingly shifted toward a system focused on treating low-income veterans with medical conditions unrelated to military service. In fiscal year 1995, only about 12 percent of the patients treated in VA hospitals received treatment for service-connected disabilities. By contrast, about 59 percent of the patients treated had no service-connected disabilities. About 28 percent of VA hospital patients had service-connected disabilities but were treated for conditions not related to those disabilities. (See fig. 1.) Between fiscal years 1980 and 1995, VA facilities underwent some fundamental changes in workload. The days of hospital care provided fell from 26 million in 1980 to 14.7 million in 1995, the number of outpatient visits increased from 15.8 million to 26.5 million, and the average number of veterans receiving nursing home care in VA-owned facilities increased from 7,933 to 13,569. (See fig. 2.) During this same time period, VA’s medical care budget authority grew from about $5.8 billion to $16.2 billion. (See fig. 3.) For fiscal year 1996, VA sought medical care budget authority of about $17.0 billion, an increase of $747 million over its fiscal year 1995 authority. VA expects its facilities to provide (1) about 14.1 million days of hospital care, (2) nursing home care to an average of 14,885 patients, and (3) about 25.3 million outpatient visits. VA is also seeking budget authority of about $17.0 billion for fiscal year 1997. On July 29, 1995, the Congress adopted a budget resolution providing VA medical care budget authority of $16.2 billion annually for 7 years (fiscal years 1996-2002). The budget resolution would essentially freeze VA spending at the fiscal year 1995 level. 
VA estimated that such a freeze would result in a cumulative shortfall of almost $24 billion in the funds it would need to maintain current services to the veteran population through 2002. As used by VA, current services encompass maintaining the currently funded workload, including services to veterans in both the mandatory and discretionary care categories and services to nonveterans. The resources VA facilities will need in the next 7 to 10 years to provide hospital and certain outpatient care to veterans in the mandatory care categories for hospital and outpatient care are overstated for the following reasons:

- VA did not adequately consider the impact of the declining veteran population on future demand for inpatient hospital care.
- A significant portion of VA resources is used to provide services to veterans in the discretionary care category who are eligible for care only to the extent that space and resources are available.
- Considerable resources are spent on services not covered under veterans’ VA benefits.
- Medical centers tend to overstate their workloads and therefore their resource needs.
- VA included resources for facility and program activations in estimating the resources it would need to maintain current services even though such activations would expand current services.
- Services provided to nonveterans through sharing agreements are included in VA’s justifications of future resource needs even though the provision of services through sharing agreements is to be limited to sales of excess capacity.

In estimating the resources it will need to maintain current services over the next 7 fiscal years, VA assumed that the number of hospital patients it treats will remain constant. The number of hospital patients VA treats, however, actually dropped by 56 percent over the past 25 years and should continue to decline.
In addition, because of the declining demand for inpatient care in the past 25 years, the number of operating beds in the VA health care system declined by about 50 percent between 1969 and 1994. About 50,000 VA hospital beds were closed or converted to other uses. The decline in psychiatric beds was most pronounced: from about 50,000 beds in 1969 to 17,300 beds in 1994. (See fig. 4.) Further declines in operating beds are likely in the next 7 to 10 years as the veteran population continues to decline. If veterans continue to use VA hospital care at the same rate that they did in 1994—that is, if VA continues services at current levels—days of care provided in VA hospitals should decline from 15.4 million in 1994 to about 13.7 million by 2010. (See fig. 5.) Our projections are adjusted to reflect older veterans’ higher usage of hospital care. VA has underestimated the extent to which its health care resources are spent on services for veterans in the discretionary care categories. Specifically, about 15 percent of the veterans with no service-connected disabilities who use VA medical centers have incomes that place them in the discretionary care category (that is, care may be provided to the extent that space and resources permit) for both inpatient and outpatient care by inpatient eligibility standards. In addition, VA incorrectly reported outpatient workload using inpatient eligibility categories, overestimating the amount of outpatient care subject to the availability of space and resources. VA does not, however, differentiate between services provided to veterans in the mandatory and discretionary care categories in justifying its budget request. As a result, the Congress has little basis for determining which portion of VA’s discretionary workload to fund. A portion of VA’s workload involves treating higher income veterans with no service-connected disabilities. 
In fiscal year 1991, about 10.7 percent of the 555,000 veterans receiving hospital care in VA facilities were veterans with no service-connected disabilities with incomes of $20,000 or more. Of those using VA medical centers in 1991 for both inpatient and outpatient care, about 11 percent (91,520) of the single veterans with no service-connected disabilities (832,000) and 57 percent (227,430) of the married veterans with no service-connected disabilities (399,000) had incomes of $20,000 or more. Among married veterans with no service-connected disabilities who used VA medical centers, 15 percent (59,850) had incomes of $40,000 or more. In March 1992, VA’s IG estimated, on the basis of work at one typical VA outpatient clinic, that about half of the patients and about one-third of the visits veterans made to VA outpatient clinics should have been categorized as discretionary rather than mandatory care. This occurred because VA was reporting its outpatient workload using inpatient eligibility categories. While VA must provide needed hospital treatment to the 9 million to 11 million veterans in the mandatory care category, over 90 percent of those veterans are in the discretionary care category for outpatient care for services other than those related to treating a service-connected disability. The VA IG further reported that about 56 percent of discretionary care outpatient visits provided services that were not covered under the veterans’ VA benefits. Most veterans’ outpatient benefits are limited to hospital-related care. An estimated $321 million to $831 million of the approximately $3.7 billion VA spent on outpatient care in fiscal year 1992 may have been for treatments provided to veterans in the discretionary care category that were not covered under VA health care benefits. VA medical centers frequently overstate the number of inpatients and outpatients treated and therefore the centers’ resource needs.
VA has long had a problem with veterans failing to keep scheduled appointments. Once an outpatient visit is scheduled, however, medical center staff enter it into VA’s computerized records, and it is counted as an actual visit unless staff delete the record. VA’s IG identified problems in the reporting of both inpatient care and outpatient visits at several medical centers. For example, the IG found that 9 percent of the visits at the Milwaukee VA medical center and 7 percent of the visits at the Murfreesboro medical center were not countable in the workload because the appointments were not kept. Similarly, a 1994 VA IG report found that actual surgical workload at the Sepulveda VA medical center was 37 percent lower than reported. According to VHA, it acted in October 1992 to eliminate false workload credits. Facilities must now physically “check in” each patient to receive workload credit. A September 1995 VA IG report, however, found that VA outpatient workload data are still overstated. In a nationwide review, the IG found that one out of three reported visits represented overreporting of workload data. Specifically, 6 percent of the reported visits either did not or appeared not to have occurred; 15 percent of the reported visits represented one or more clinic stops that either did not or appeared not to have occurred; and 14 percent of reported visits had inconsistencies in reporting of clinic stops. The resources VA believes it needs to maintain current services include resources to support new workload generated through activation of programs and facilities. Almost 25 percent of the budget shortfall VA estimated to occur in the next 7 fiscal years under the congressional budget resolution would result from the lack of funds for facility activations and planned workload expansions. Delaying or stopping activations is, however, a difficult political decision, particularly for those projects already under way.
In its analysis of the resources needed to maintain current services in the next 7 fiscal years, VA assumed that it will continue to incur additional costs, add staff, and attract new users through facility activations. For example, VA’s estimate that it will need $20.9 billion in the year 2000 to maintain current services includes increases of over $993 million and 10,000 full-time equivalent (FTE) employees for activations. In other words, the inclusion of activation costs overstates the resources VA will need in the year 2000 to maintain current services by almost $1 billion. In addition, the funds VA seeks for activations may be overstated because the activations planning process is not integrated with the resource planning and management (RPM) system workload forecasting process. VA sought about $108 million and 1,509 FTEs in its fiscal year 1996 budget submission to support a projected increase in the number of veterans seeking care. These estimates, based on workload forecasts developed through RPM, reflect historical trend data that could include workload increases resulting from prior years’ facility and program activations. In other words, the resources requested for workload increases projected using RPM likely include resources for some of the estimated workload to be generated through fiscal year 1996 activations. VA sought an additional $208 million for facility activations on the basis of the separate activations planning process. VA officials agree that some double counting may have occurred because of the separate planning processes but believe that the duplication is minimal. In commenting on a draft of this report, VHA said that it modified budgeting for activations requirements in 1997. The medical care request no longer includes “line item” requests for the activation of specific projects. The networks will activate projects from within the level of resources provided in their total 1997 medical care budget allocations. 
VA counts services provided to nonveterans through sharing agreements with military and private-sector hospitals and clinics in justifying the resources needed during the next fiscal year. In other words, VA essentially builds in excess resources to sell to the Department of Defense (DOD) and the private sector. VA also bills, and is allowed to retain, the costs of services provided through sharing agreements. Health resources sharing, which involves the buying, selling, or bartering of health care services, benefits both parties in the agreement and helps contain health care costs by better utilizing medical resources. For example, a hospital’s buying an infrequently used diagnostic test from another hospital is often cheaper than buying the needed equipment and providing the service directly. Similarly, a hospital that uses an expensive piece of equipment only 4 hours a day but has staff to operate the equipment for 8 hours can generate additional revenues by selling its excess capacity to other providers. To use federal agencies’ resources to maximum capacity and avoid unnecessary duplication and overlap of activities, VA is authorized to sell excess health care services to DOD. In addition, VA can share specialized medical resources with nonfederal hospitals, clinics, and medical schools. VA may sell medical resources to DOD and the private sector only if the sale does not adversely affect health care services to veterans. As an incentive to share excess health care resources, VA facilities providing services through sharing agreements may recover and retain the cost of the services from DOD or private-sector facilities. In fiscal year 1995, VA sold about $25.3 million in specialized medical resources to private-sector hospitals and about $33.0 million in health care services to the military health care system. 
Although VA facilities received separate reimbursement for the workload generated through these sharing agreements, the workload was nevertheless included in VA’s justification of its budget request. In commenting on a draft of this report, VHA said that VA provided care to about 45,000 unique sharing agreement patients in 1994. VHA said that even though its base workload counts do include sharing, the levels are small and its inclusion of sharing makes no material difference in VA’s workload presentations. VHA said that no appropriated funds are requested for the sharing workload because it is supported by reimbursements from DOD and other sharing partners. VHA also said that RPM excludes data for sharing patients in developing changes in both unique patients and cost per unique patient. The actual patient counts for the last year are straightlined in all RPM projections. In VA’s assessment of the possible budget shortfall it would face if its budget were frozen at fiscal year 1995 levels for 7 years, VA assumed that—beyond the unspecified savings of $335 million expected to occur in fiscal year 1996—no changes would occur in the efficiency with which it delivers health care services. VA should be able to further reduce its resource needs by billions of dollars over the 7-year period through improved efficiency and resource enhancements. In the past 5 to 10 years, VA’s IG, VHA, the Vice President’s National Performance Review, we, and others have identified many opportunities to use lower cost methods to deliver veterans’ health care services, consolidate underused or duplicate processes to increase efficiency, reduce nonacute admissions and days of care in VA hospitals, close underused VA hospitals, and enhance VA revenues from services sold to nonveterans and care provided to veterans. VA has actions planned or under way to take advantage of many of these opportunities. 
Such actions should reduce VA’s resource needs in the next 7 to 10 years by several billion dollars. Following are among the many opportunities to achieve savings through changes in the way VA delivers health care services to veterans, allowing VA facilities to provide services of equal or higher quality at a lower cost. Providing 90-day rather than 30-day supplies of low-cost maintenance prescriptions enabled VA pharmacies to save about $45 million in fiscal year 1995. The savings resulted because VA pharmacies handled over 15 million fewer prescriptions. Although VA encouraged its medical centers to implement multimonth dispensing in response to our January 1992 report, not all potential savings have occurred because medical centers have been slow to adopt multimonth dispensing. Purchasing services from community providers when they can provide the care at a lower cost could also produce savings. VA has encouraged its medical centers to establish “access points” to improve accessibility for veterans and encourage the shift to primary care. Access points can be established as VA-operated outpatient clinics as well as through contractual or sharing agreements. To date, only a few medical centers have established such access points, but many others are developing plans. Early indications are that access points established through contracts with community providers can often provide services at lower cost than VA outpatient clinics. The ultimate effect of access points on overall VA spending depends, however, on such issues as the extent to which the access points attract new users and to which current users increase their use of VA services in response to improved accessibility. VA should save over $225 million in 7 years by adopting Medicare fee schedules. VA’s IG compared the amount paid by VA under its fee-basis program with Medicare fee schedules and found that VA paid more than the Medicare rate in over half of the cases reviewed. 
VA plans to adopt Medicare fee schedules for both its outpatient fee-basis payments and for payment of inpatient physician and ancillary services at non-VA hospitals. VA expects to begin using Medicare fee schedules by July 1996. By establishing primary care teams, VA hospitals should be able to reduce veterans’ inappropriate use of more costly specialty clinics and achieve significant savings in staff costs. As we reported in October 1993, VA hospitals allow many veterans to receive general medical care in specialty care clinics after their conditions are stabilized. Transferring such veterans to primary care clinics in a timely manner would allow lower cost primary care staff to meet their medical needs rather than higher cost specialists. By purchasing specialized medical care services, such as positron-emission tomography scans and lithotripsy, from community providers rather than buying expensive, but seldom used, equipment, VA could reduce its cost of providing such services while it improves accessibility of such care for veterans. For example, although the Albuquerque VA medical center treated only 24 veterans for kidney stone removal in fiscal years 1990 through 1992, the hospital purchased a lithotripter, equipment that breaks up kidney stones so that they can be eliminated without surgery, at a cost of almost $1.2 million. During its first year of operation, 34 veterans received treatment. A private provider in the same city offered lithotripsy services for $2,920 a procedure. Thus, the hospital could have met the 34 veterans’ needs at a cost of about $100,000 compared with its expenditure of $1.2 million plus operating costs. Although the hospital sold lithotripsy services to more nonveterans than it provided to veterans, the hospital has used the equipment at less than one-fifth of its normal operating capacity. VA also expects to save costs by establishing a national drug formulary. 
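The lithotripsy trade-off above reduces to simple arithmetic. A minimal sketch using the figures from the report (VA operating and maintenance costs are excluded, so the gap understates the true difference):

```python
# Meeting 34 veterans' lithotripsy needs: in-house equipment vs. purchasing per procedure.
procedures = 34                # veterans treated in the lithotripter's first year
private_price = 2_920          # private provider's per-procedure price, in dollars
equipment_cost = 1_200_000     # purchase price of the lithotripter, in dollars

purchased_cost = procedures * private_price
print(purchased_cost)          # prints 99280 -- about $100,000, vs. $1.2 million plus operating costs
```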
Historically, each VA facility has established its own formulary—that is, a list of medications approved for use for treating patients. VA noted that establishing a national formulary should increase standardization, decrease inventory costs, heighten efficiency, and lower pharmaceutical costs through enhanced competition. VA has not estimated the possible savings, but it could save $100 million if using the national formulary could reduce the cost of purchasing medications by 10 percent. In commenting on a draft of this report, VHA said that $100 million probably overstates the possible savings. Savings realized through volume-committed contracting would, in VHA’s opinion, be offset by the costs of new therapies. (See VHA’s comment 17 in app. II.) VHA also identified several additional actions it has taken to improve the management of pharmaceuticals over the last 6 years. These include establishing a pharmacy benefit management function to reduce overall health care costs through appropriate use of pharmaceuticals. (See VHA’s comment 19 in app. II.) VA expects to save $168 million in 6 years by phasing out and closing its supply depots and establishing a just-in-time delivery system for medical care supplies and drugs as recommended by the Vice President’s National Performance Review. The depots were closed at the end of fiscal year 1994, and contracts for just-in-time delivery of drugs are in place. Actions to award just-in-time contracts for medical supplies and subsistence items are expected to be completed by July 1996. Following are examples of several nationwide initiatives that VA has under way to integrate, consolidate, or merge duplicate or underused services. Such actions should save additional costs over the next 7 years. By creating several bulk processing facilities to fill mail order prescriptions, VA will reduce its handling costs by two-thirds, saving about $26 million in fiscal year 1996. 
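The $100 million figure above is a sensitivity estimate: savings equal the assumed percentage reduction applied to annual drug purchases. A sketch, assuming the roughly $1 billion annual pharmaceutical outlay that the $100-million-at-10-percent figure implies (the baseline is an inference, not a number stated in the report):

```python
# Savings from a given percentage cut in annual pharmaceutical purchases.
# The ~$1 billion baseline is inferred from the report's figures, not stated directly.
annual_drug_spend = 1_000_000_000   # assumed annual VA drug purchases, in dollars
reduction = 0.10                    # 10 percent cost reduction from a national formulary

savings = annual_drug_spend * reduction
print(int(savings))                 # prints 100000000 -- the $100 million cited in the report
```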
As we reported in January 1992, VA was mailing prescriptions to veterans from over 200 locations, resulting in uneconomically small workloads and labor-intensive processes. As of March 1996, VA had four operating bulk processing facilities using newly designed automated equipment and processes; another three facilities were not yet operational. Prescription workload is being transferred systematically from VA hospitals to the new bulk processing centers. When fully operational, these facilities could save about $74 million a year. By consolidating 14 laundry facilities over a 3-year period, VA expects to achieve one-time equipment and renovation savings of about $38 million as well as recurring savings of about $600,000 per year. Under a management improvement initiative, VA identified facilities for integration that were scheduled for or had requested funding for new equipment or renovation. Five of the 14 consolidations were completed in 1995; the remaining 9 are scheduled to be completed in the next 2 years. An internal VA Management Improvement Task Force predicted in 1994 that VA could save up to $73 million in recurring personnel costs by integrating management of VA facilities. Among other things, the task force recommended that the administrative and clinical management of 60 facilities be integrated into 29 partnerships. The task force expected that these facility integrations could reduce service and staffing duplication, integrate clinical programs, achieve economies of scale, and free resources to invest in new services. As of March 1996, about one-third of the recommended integrations had been approved. VA allows the facilities, however, to reinvest the savings into providing more clinical programs. Examples of reinvestment include buying equipment, building expansions or renovations, opening access points, and increasing specialty and subspecialty clinics.
Our ongoing work for this Subcommittee will assess the extent to which these and other management improvement initiatives recommended by the task force have been implemented and are saving measurable costs. Establishing preadmission certification procedures for admissions and days of care similar to those used by private health insurers could save VA hundreds of millions of dollars by reducing nonacute admissions and days of care in VA hospitals. VA hospitals too often serve patients whose care could be more efficiently provided in alternative settings, such as outpatient clinics or nursing homes. In 1985, we reported that about 43 percent of the days of care that VA medical and surgical patients spent in the VA hospitals reviewed could have been avoided. Since then, several studies by VA researchers and the IG have found similar inefficiencies. For example, a 1991 VA-funded study of admissions to VA acute medical and surgical bed sections estimated that 43 percent (±3 percent) of admissions were nonacute. Nonacute admissions to the 50 randomly selected VA hospitals studied ranged from 25 to 72 percent. The study suggested several reasons for the higher rate of nonacute admissions to VA hospitals than to private-sector hospitals, including the following: VA facilities do not have financial incentives to make the transition to outpatient care; the VA system, unlike private-sector health care, does not have formal mechanisms to control nonacute admissions, such as mandatory preadmission review; and the VA system, unlike private-sector health care, has a significantly expanded social mission that may influence the use of resources for patients. A 1993 study by VA researchers reported similar findings. At the 24 VA hospitals studied, 47 percent of admissions and 45 percent of days of care in acute medical wards were nonacute; 64 percent of admissions and 34 percent of days of care in surgical wards were nonacute.
Reasons cited for nonacute admissions and days of care included nonavailability of outpatient care, conservative physician practices, delays in discharge planning, and social factors. Although the study cited VA eligibility as contributing to some inappropriate admissions and days of care, the study recommended only minor changes in VA eligibility provisions. Rather, it suggested that VA establish a systemwide utilization review program. VA, however, has neither established an internal utilization review program nor contracted for external reviews focusing on medical necessity. By contrast, all fee-for-service health plans participating in the Federal Employees Health Benefits Program are required to operate a preadmission certification program to help limit nonacute admissions and days of care. In commenting on a draft of this report, VA’s Under Secretary for Health said that VA is currently assessing the use of preadmission reviews systemwide as a way to encourage the most cost-effective, therapeutically appropriate care setting. He said that several facilities have adopted some form of preadmission review already and their programs are being reviewed. The Under Secretary also said that VHA is implementing a performance measurement and monitoring system that contains several measures for which all network directors and other leaders will be held accountable. Several of these measures, such as the percentage of surgeries done on an ambulatory basis at each facility and implementation of network-based utilization review policies and programs, will, he said, move the VA system toward efficient allocation and utilization of resources. If the actions discussed so far are taken to reduce the number of nonacute admissions and days of care provided by VA hospitals, the demand for care in some hospitals could fall to the point where keeping such hospitals open is no longer economically feasible. 
VA has taken over 50,000 beds out of service in the past 25 years but has not closed any hospitals because of declining utilization. Although closing wards clearly saves money by reducing staffing costs, the cost per patient treated rises because the fixed costs of facility operation are spread over fewer patients. At some point, closing a hospital and providing care either through another VA hospital or through contracts with community hospitals may become less costly. Closing hospitals and contracting for care, however, entail some risk. Allowing veterans to get free hospital care in community hospitals closer to their homes could result in increased demand for VA-supported hospital care, offsetting any savings achieved through contracting. The feasibility of closing underused hospitals was demonstrated when VA recently closed the Sepulveda VA medical center, which was damaged in an earthquake, and transferred the workload to the West Los Angeles medical center. VA’s IG found that the reported numbers of inpatients treated at both Sepulveda and West Los Angeles had declined significantly over the prior 4-year period and that the declining workload may have been even greater than VA reported because the facilities’ workload reports were overstated. VA does not plan to rebuild the Sepulveda hospital but plans to establish an expanded outpatient clinic at the site. The IG concluded that West Los Angeles had sufficient resources to care for the hospital needs of veterans formerly using the Sepulveda hospital. Savings from the closure have been limited, however, because Sepulveda staff were temporarily reassigned to the West Los Angeles medical center. The only other hospital VA has closed in the last 25 years is the Martinez VA medical center. Like Sepulveda, it was closed because of seismic deficiencies, and its workload was transferred to other VA medical centers.
Although VA did not rebuild Sepulveda, it plans to build a replacement hospital for Martinez as a joint venture with the Air Force at Travis Air Force Base. Funds for the construction, however, have not been appropriated. In addition to actions to improve operational efficiency, VA should generate millions in additional revenues by (1) setting more appropriate prices for services sold to private-sector providers and (2) determining whether to require veterans to contribute to the cost of their care. By establishing appropriate prices for services sold to nonveterans through sharing agreements, VA can generate revenues used to serve veterans. In response to our December 1994 report on recovering the full costs of lithotripsy services at the Albuquerque VA medical center, VA recently encouraged its facilities to ensure that they price services provided to nonveterans to fully recover all costs and to include a profit when appropriate. For example, the Albuquerque medical center increased its price for basic lithotripsy services to nonveterans by over 125 percent. The new price could generate over $300,000 a year in additional revenues for the hospital. By verifying veterans’ reported income, VA expects to generate about $46 million in copayment revenues between January 1, 1996, and June 30, 1997. In a September 1992 report, we found that VA had not taken advantage of the opportunity to verify veterans’ incomes through the use of tax records. Through our own review of tax records, we identified over 100,000 veterans who may have owed copayments. In 1994, VA began routinely using such data to determine veterans’ copayment status. Although costs can and are being saved, the VA health care system lacks overall incentives to further increase efficiency. Unlike private-sector hospitals and providers, VA facilities and providers bear little financial risk if they provide (1) medically inappropriate care or (2) services not covered under a veteran’s VA benefits. 
Unlike in the private health care system in which the insurance company bears most of the risk, in VA’s system, the veteran, not VA, bears most of the financial risk for health benefits. However, when VA facilities have an incentive, such as the desire to fund new programs, they appear to be able to identify opportunities to save costs through efficiency improvements. Private insurers increasingly require their policyholders to obtain prior authorization from an independent utilization review firm before the insurers will accept liability for hospital care. Frequently, this authorization also limits the number of days of care the insurer will cover without further authorization of the medical necessity of continued hospitalization. Because compliance with these requirements directly affects their revenues, private-sector hospitals pay close attention to them. Similarly, the Medicare program has, since 1982, paid hospitals a fixed fee based on a patient’s diagnosis. The fixed fee is based on the national average cost of treating the patient’s condition. If the hospital provides the care for less than the Medicare payment, it makes a profit. But if the hospital keeps the patient too long, is inefficient, or provides unnecessary treatments, then it will lose money. This creates a strong incentive in the private sector to discharge Medicare patients as soon as possible. These financial incentives to increase efficiency and provide care in the most cost-effective setting are largely absent in the VA system. Even in those cases in which a private health insurer’s preadmission certification requirement applies, the hospital’s revenues are not affected by failure to obtain such certification. A VA hospital that admits a patient who does not need a hospital level of care incurs no penalty. In fact, facility directors often indicated to us that VA’s methods of allocating resources to its medical centers favored inpatient care. 
VA’s current RPM system is attempting to remove the incentive to provide care in a hospital rather than an outpatient clinic and create incentives to provide care in the most cost-effective setting. As used during the last two budget cycles, however, the system has done little to create such incentives. Because VA chose to shift few funds between the highest and lowest cost facilities, facility efficiency incentives were minimal. For fiscal year 1995, VA reallocated $20 million from 32 high-cost to 27 low-cost facilities. VA officials told us that they plan to use RPM to reallocate more money in fiscal year 1996 and to provide VISN directors a “risk pool” of contingency funds to help facilities unable to work within their budgets. It is not yet clear how VISN directors plan to use these funds. Finally, unlike private-sector health care providers, VA has no external preadmission screening program or other utilization review program to provide incentives to ensure that only patients who need a hospital level of care are admitted and that patients are discharged as soon as medically possible. VA gives private-sector hospitals providing care to veterans under its contract hospitalization program incentives to limit patients’ lengths of stay by basing reimbursement on Medicare prospective payment rates. VA does not, however, give its own hospitals the same incentives by basing their payments on the Medicare rates. Unlike under private health insurance and Medicare, in the VA system, the veteran is at risk of being denied care, rather than VA being at risk of losing funds, if a VA facility runs out of resources. Because it bears little risk, the VA system lacks a strong incentive to operate efficiently. A private insurer or managed care plan guarantees payment for covered services in exchange for a fixed premium.
The insurer or managed care plan thus has a strong financial incentive to ensure that only medically necessary care is provided in the most cost-effective setting. Otherwise, the insurer may suffer a financial loss. Unlike private health providers, however, the VA system does not guarantee the availability of covered services. As a result, the ability of veterans to get covered services depends on resource availability. If a VA facility is inefficient and the resources allocated to the facility are not sufficient to meet anticipated workload, the VA facility is allowed to deny (that is, ration) services to eligible veterans. In 1993, we reported that 118 VA medical centers reported rationing some types of care to eligible veterans when the centers lacked enough resources. The ability of facilities to find ways to become more efficient when they want to fund a new program, such as establishing an access point clinic, indicates that when they are given an incentive to become more efficient, they do so. For example, VA’s Under Secretary for Health encouraged hospitals to take all steps within their means to improve the geographic accessibility of VA care. But he told the hospitals that they would have to use their own resources to do this. Over half of VA’s hospitals quickly developed plans to establish so-called access points. For example, the Amarillo VA medical center identified ways to save over $850,000 to pay for the establishment of access points: The medical center saved an estimated $250,000 a year by consolidating inpatient medical wards and reducing the number of surgical beds it staffed. Because of these consolidations, the center eliminated nine nursing positions, saving salaries and related benefits. Officials said that the consolidations coincided with declining workloads, attributable to lower admissions and lengths of stay, and as such would not affect the availability or quality of care the center provides. 
The medical center expects to save up to $150,000 by reviewing patients’ use of prescription medications. These reviews have led to a reduction in medications provided, saving the cost of procuring, storing, and dispensing the drugs. It expects to reduce future pharmacy costs by $250,000 by trying to change patients’ lifestyles to reduce their cholesterol. Center officials estimate that this has reduced the use of lipid-lowering drugs by half. The medical center established health education classes, which teach correct eating and exercise techniques. Before this, physicians had routinely prescribed lipid-reducing drugs to lower cholesterol levels. Officials are planning to establish similar health clinics for patients with high blood pressure and other common conditions that may be effectively treated without prescription drugs. The medical center expects to save $200,000 or more by using a managed care contract to purchase radiation therapy services. Radiation therapy involves a series of treatments, which the center has historically paid for on a fee-for-service basis. The hospital recently signed a contract with a private-sector hospital to provide each series of radiation treatments at a capitated rate based on Medicare’s reimbursement schedule. Officials are currently negotiating similar contracts for other medical services. In 1995, the Under Secretary for Health proposed criteria for potential service realignment that would facilitate the types of changes needed to achieve efficiency comparable with private-sector hospitals and clinics. For example, he encouraged VHA directors to identify opportunities to buy services from the private sector at lower costs, consolidate duplicate services, and reduce their fixed and variable costs of services directly provided to veterans. 
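The Amarillo medical center’s itemized savings can be totaled to confirm the “over $850,000” figure cited above (a sketch; the item labels are paraphrased from the report):

```python
# Amarillo VA medical center: itemized savings identified to fund access points (dollars/year).
amarillo_savings = {
    "consolidating inpatient wards and surgical beds": 250_000,
    "reviewing patients' use of prescription medications": 150_000,
    "lifestyle clinics cutting lipid-lowering drug use": 250_000,
    "managed care contract for radiation therapy": 200_000,
}
print(sum(amarillo_savings.values()))   # prints 850000 -- matching the "over $850,000" cited
```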
VA’s assessment of its resource needs over the next 7 to 10 years did not include any projected savings from the increased efficiencies that should result from establishing VISNs, which assess needs on a network rather than a facility basis and should improve facility planning. This will allow hospitals serving veterans in the same geographic area to pool their resources and reduce duplication. A planned move to capitation funding should create incentives for facilities to provide care in the most cost-effective setting. However, VA has much to do before it can set appropriate capitation rates. For example, while VA’s RPM data show a wide variation in operating costs among facilities VA considers comparable, VA has done little to determine the reasons for these variations. Without such an understanding, no assurance exists that capitation rates can be set at the level that promotes the most efficient operation. Understanding facility or VISN cost variations necessitates improving the information VA has on its hospitals’ operating costs. Although the automated Decision Support System (DSS) that VA is implementing has potential to be an effective management tool for improving the quality and cost-effectiveness of VHA operations, VA has not developed a way to verify the accuracy of the cost and utilization data going into DSS. Some of the data provided to DSS from other VA information systems are incomplete and inaccurate, limiting VA’s ability to rely on DSS-generated information to make sound business decisions. VA has recognized the need for accurate cost and utilization data for DSS and has a special project team developing ways to improve the system’s input data. Given VA’s overstatement of future resource needs, the system does not need as many resources as previously expected.
Moreover, because the possible magnitude of future efficiency savings was not factored into VA’s assessments of future resource needs, VA’s system may have more discretionary resources available than expected. This suggests that an operating goal of $16.2 billion a year may be achievable. In any event, it seems likely that the impact of such funding levels would not, by necessity, result in the budget shortfalls that VA estimated. Although actions to improve VA’s efficiency are planned or under way that could yield enough savings to enable VA to contribute billions of dollars toward deficit reduction in the next 7 years without affecting current services, VA provides little information to the Congress on those savings and how they are reinvested. Essentially, VA reinvests these savings in new programs and expanded services without giving the Congress the chance to use all or a part of the savings to apply to the deficit. Billions of dollars could be saved by establishing an independent external preadmission certification program similar to those used by most private health insurers. Similarly, by creating financial incentives for VA medical centers to discharge patients as soon as their medical conditions allow, VA could significantly reduce unnecessary days of hospital care. Although VA has changes under way that should help create financial incentives to provide care in the most cost-efficient setting, it will take time for the new VISN directors to achieve significant savings. The directors have been in their positions for only a few months so it is too early to tell how successful they will be in increasing efficiency. It is important that VA complete its implementation of clear mechanisms and useful management data by which to hold VISN directors accountable for workload, efficiency, and other performance targets. 
Without such mechanisms and improved data, the VISN structure holds some risk for further decentralizing VHA authority and responsibility for achieving efficiencies. We recommend that the Secretary of Veterans Affairs do the following: Establish an independent, external preadmission certification program for VA hospitals. Provide the Congress, through future budget submissions, data on the extent to which VA services were provided to veterans in the mandatory and discretionary care categories for both inpatient and outpatient care. Include in future budget submissions (1) information on costs saved through improved efficiency and (2) plans to either reinvest savings in new services or programs or use the savings to reduce the budget request. By letter dated May 10, 1996 (see app. I), the Under Secretary for Health said that VA appreciates our positive acknowledgment of its efforts to restructure the VA health care system but disagrees with many of our findings, conclusions, and recommendations. In VHA’s opinion, the report presents outdated information that does not accurately reflect the current direction of VA health care. VHA said that our analysis is particularly inadequate as a basis for projecting future resource requirements for VA medical care. Specifically, VHA said that our report does not adequately consider all factors that affect VA’s future resource needs, incorrectly states that VA does not adequately consider the declining veteran population in forecasting future resource needs, and unfairly bases comments about the extent to which VA resources are spent on discretionary care on work done by the VA IG at one facility. As discussed in the following paragraphs, we do not find VHA’s comments convincing. 
Our analysis, VHA said, places too much significance on the findings of our September 1995 review of VA’s response to a congressional request (a static assessment of different funding proposals and their effect over future years) in concluding that VHA’s forecasting of future resource needs is overstated. Our September 1995 analysis was, VHA said, a fragmented discussion of efficiencies that did not consider other factors. Resource needs are projected on the basis of assessment of inflation, current workload, new efficiencies, health care technology, and VA health care system deficiencies. Our analyses were, by necessity, limited to review of the estimates of future resource needs developed by VA. We tried to obtain the basis for the 5-year projections of resource needs in VA’s fiscal year 1996 budget submission, but VA officials, including the Under Secretary for Health, said that they had no part in developing the estimates. VHA offered no estimates of its future resource needs beyond those included in its fiscal year 1996 budget submission, either during our review or in its comments on this report. VHA said that it recognizes that further management efficiencies can and must be achieved in future budget years to continue to provide quality health care. Our report, VHA said, does not recognize the efficiencies included in VA’s fiscal year 1996 budget request. In this request, VA assumed that management efficiencies would save $335 million. The Congress increased this savings amount by an additional $397 million in administrative savings that have no impact on patient care. This results in $732 million in permanent administrative savings in fiscal year 1996. Our report does recognize that VA planned to achieve unspecified savings of $335 million in fiscal year 1996. In addition, we have added a discussion to reflect the final appropriation action approved after this report was sent to VA for comment. The reductions, however, will not necessarily be achieved without impacting patient care.
Because VA does not have a plan to achieve the needed savings, VA facilities may achieve these savings by reducing patient care. VHA said that VA’s model for projecting hospital workload explicitly considers not only the change in the size and age of the veteran population, but also changes in observed hospital use rates over time. VHA said that one of the more misunderstood variables relates to the change in the veteran population versus the number of veterans who use VA for their health care services. According to VHA, although the veteran population is declining, the number of veteran users is expected to increase. VHA said that although the number of hospital admissions declined by 19 percent between 1980 and 1995, the number of outpatient visits increased by 53 percent in the same period. Figure 2 shows the increases in demand for outpatient care from 1980 to 1995. Such data do not, however, adequately reflect changing resource needs. The savings from the decreased demand for inpatient hospital care should more than offset the costs of meeting the increased demand for outpatient care. Between fiscal years 1980 and 1995, the number of days of hospital care provided in VA facilities declined from 26.1 million to 14.7 million, a decrease of over 11 million days of care. During the same period, outpatient visits to VA clinics increased from 15.8 million to 26.5 million, an increase of 10.7 million visits. Because an outpatient visit is 2-1/2 to 3-1/2 times cheaper than a day of inpatient hospital care, savings from the declining inpatient workload should have more than offset the costs of the increased outpatient workload VA experienced over the 16-year period. The increase in demand for outpatient care is also consistent with what we have been saying about VA’s efforts to (1) improve accessibility of VA health care through access points and (2) expand outpatient eligibility. 
Expanding eligibility has historically resulted in increased demand for outpatient services, as happened in 1973 when outpatient eligibility was extended to include services that would obviate the need for hospital care. For example, in its fiscal year 1975 annual report, VA includes a figure showing the “relationship of workload to the progressive extension of legislation expanding the availability of outpatient services.” Similarly, in its comments, VA noted that the number of VA outpatient clinics grew by 72 percent between 1980 and 1995. In other words, the number of clinics was growing faster than the number of visits, which VA says grew by 53 percent in the same period. VHA said that our conclusions that a significant portion of VA resources go to discretionary care and that services provided are not covered under veterans’ VA benefits are based on two IG reports that reviewed the work of one satellite outpatient clinic and one VA medical center. Our conclusions are based both on our own work and on a series of IG studies. The IG’s report discussed problems at two facilities—the Allen Park VA medical center and the Columbus, Ohio, outpatient clinic. The Allen Park facility was, the IG report notes, “. . . selected as the review site in consultation with VHA program officials because it was considered to be a typical outpatient environment in an urban tertiary care facility.” Although our report cited only one IG report, the IG has found lax enforcement of eligibility provisions at many other medical centers. One of the recommendations in the IG’s report was that VHA conduct reviews of each facility’s outpatient workload to identify the proportion of visits properly classified as mandatory, discretionary, and ineligible using the definitions relevant to current law. As of May 1996, however, VHA has not conducted the recommended reviews. VHA also said that our estimate of the percentage of VA users in the discretionary care category was inaccurate.
According to VHA, only 3 percent of VA inpatients and less than 5 percent of both inpatient and outpatient users were discretionary in fiscal year 1995. Our estimate better reflects the extent to which care is provided to veterans in the discretionary care category. VA’s estimate is apparently based on unverified data provided by veterans when they apply for care; such data underestimate veterans’ incomes. We compared VA’s fiscal year 1990 treatment records with federal income tax records and found that about 15 percent of the veterans with no service-connected disabilities who used VA medical centers had incomes that placed them in the discretionary care category for both inpatient and outpatient care. Our review showed that VA may have incorrectly placed as many as 109,230 veterans in the mandatory care category in 1990. Tax records for these veterans showed they had incomes that should have placed them in the discretionary care category. We estimated that VA could have billed as much as $27 million for care provided to these veterans. Although data from our study are now 6 years old, data from VA’s own tax record reviews are yielding similar results. VA has now established its own income verification program. Its initial review found that about 18 percent of veterans with no service-connected conditions underreported their income. VA’s matching agreement with the Internal Revenue Service indicates that VA expects its comparison of fiscal year 1996 treatment records with tax data to generate about $30.5 million in copayment collections for care provided to veterans who were incorrectly classified as mandatory care category veterans. Accordingly, we believe our estimate—and VA’s own data—show that about 15 percent of veterans with no service-connected disabilities who use VA medical centers are in the discretionary care category for both inpatient and outpatient care. 
VHA also said that it does not believe that extrapolating data from a single facility to the VA system nationwide is appropriate. According to VHA, our report states that “systemwide, 56% of discretionary care outpatient visits did not meet eligibility criteria in 1992, and may have resulted in $321 million to $831 million being potentially used to provide outpatient care to veterans in the discretionary care category who may not have been entitled to that care.” What our report actually says is that the VA IG further reported that about 56 percent of discretionary care outpatient visits provided services that were not covered under the veterans’ VA benefits . . . . We state that an estimated $321 million to $831 million of the approximately $3.7 billion VA spent on outpatient care in fiscal year 1992 may have been for treatments provided to veterans in the discretionary care category that were not covered under VA health care benefits. Nowhere in the report do we suggest that the problem is one of “entitlement.” No veteran, whether in the mandatory or discretionary care category, is entitled to care from VA. The issue is one of eligibility. The IG report found that veterans in the discretionary care category for outpatient care received treatments that they were not eligible for regardless of whether VA had the space and resources to provide the services. In other words, these veterans received services that were not needed to prepare for, to follow up after, or to obviate the need for hospital care. According to VHA, this report and the IG reports demonstrate the need for eligibility reform. In VHA’s opinion, the law needs to be amended to enable VA to provide care so that veterans are treated in the most appropriate, most efficient, and most cost-effective setting.
VHA said that this is an instance where, despite statements to the contrary in this report, the law contributes to the system’s inefficiencies by perpetuating complicated outpatient eligibility criteria. VA needs the outpatient eligibility reform tool to achieve the best patient and system outcomes. Although we agree with VHA that eligibility reforms are needed, VA’s efforts to expand eligibility are not effectively targeted toward meeting the health care needs of veterans within available resources. Our concerns about current proposals to expand eligibility were expressed in our recent testimony before the Senate Committee on Veterans’ Affairs and will be explored more fully in a forthcoming report. VHA said that our report’s statement that medical centers frequently overstate the number of inpatients and outpatients treated is no longer true. VHA said that it improved its information systems and eliminated false workload credits in response to the IG reports. Before this, facilities could obtain automatic workload credit for all scheduled visits unless action was taken to indicate that a patient failed to appear. We revised the discussion in our report to reflect the actions taken in response to the IG’s reports. We also added a discussion of a September 1995 VA IG report showing continued problems in VA facilities’ reporting of outpatient workload. VHA agreed with our recommendation that it establish an independent, external preadmission certification program for VA hospitals. VHA said that policies and processes for preadmission review are being developed by a task force charged with reviewing and revising VHA’s existing utilization review policy. The preadmission review will, according to VHA, identify the appropriate level of care for both inpatient and outpatient care, appropriate alternatives to care, and a system of referral and arrangement of alternative care. 
Although we found VHA’s agreement to pursue establishment of an external preadmission certification program encouraging, we do not believe VHA’s action fully responds to our recommendation because it provides no time frames for completing development and implementation of the program. In addition, it does not indicate how compliance with the findings of the external reviews will be enforced. Because VA facilities currently incur no financial risk from providing inappropriate care, external preadmission certification requirements may not be effective unless coupled with a financial penalty for noncompliance with the review findings. Recommendations and VA promises to establish effective utilization review mechanisms to help prevent inappropriate days of hospital care date back over 10 years. Because of the hundreds of millions of dollars wasted from VA’s past failure to address this problem, we believe VA needs to develop and follow a specific timetable to implement an external preadmission certification program and develop plans to place VA facilities at financial risk if they admit patients not requiring a hospital level of care. VHA did not agree with our recommendation that it include (1) information on savings achieved through improved efficiency and (2) plans to either reinvest savings in new services or programs or use the savings to reduce the budget request. The recommendation is, VHA said, unrealistic. Although VHA is moving rapidly to implement several management initiatives, such as those discussed in this report, VHA said it cannot predict the extent of possible savings or accurately predict future costs. VHA said that VA will be better able to predict savings when the VISNs are fully operational but probably not to the level of detail that our recommendation seems to require. 
Providing the Congress information on factors, such as inflation and creation of new programs, that increase resource needs without providing information on changes that could reduce or offset those needs leaves the Congress with little basis for determining appropriate funding levels. Because VA facilities are essentially allowed to keep any funds they generate through efficiency improvements and seek additional funds to compensate for the effects of inflation, the true rate of increase in VA’s medical care appropriations is understated. Finally, VHA did not agree with our recommendation that it provide data to the Congress on the extent to which VA services are provided to veterans in the mandatory and discretionary care categories for both inpatient and outpatient care. According to VHA, VA does not have accounting systems that would allow VA to differentiate between mandatory and discretionary care. Developing accounting systems capable of such differentiation would, VHA said, be extremely difficult and may not be cost-effective given the complexities of outpatient eligibility. For example, one outpatient visit may comprise several clinic stops, across which outpatient eligibility may vary. These complexities, according to VHA, make it very difficult to efficiently and meaningfully track mandatory and discretionary care. Future data systems, such as the DSS and resource allocation systems, may, VHA said, improve the identification of patient care costs. The difficulties in identifying mandatory versus discretionary care categories will, according to VHA, remain until eligibility laws are amended. Without information on the extent to which VA resources are used to provide services to veterans in the priority categories established under VA law, the Congress lacks the basic information needed to guide decisions about what portion of VA’s discretionary care workload to fund. 
In addition, it lacks the basic information it needs to ensure that resources are equitably allocated to VISNs to ensure that veterans have reasonably equal access to VA benefits regardless of where they live. If VHA is applying the eligibility rules established under Public Law 100-322—as VHA maintained in its comments it has instructed its facilities to do—it should be relatively easy to develop a reporting system to capture the results of those decisions. VA has, for years, indicated that it may include data on mandatory and discretionary care in its resource allocation system in DSS and in other data systems but has never detailed any plans to accomplish this task. VA needs to promptly decide how to gather such data and set realistic milestones for implementing the changes needed to provide the Congress and VA managers the data they need to effectively assess VA medical care budget needs. By not developing such data, VA makes it exceedingly difficult for the Congress to consider reductions in its budget request because the Congress does not know whether its reduction would affect provision of services to veterans in the mandatory care category for inpatient care. According to VHA, in fiscal year 1995, less than 3 percent of VA inpatients and less than 5 percent of both inpatient and outpatient users were discretionary by inpatient eligibility standards. VHA said that any savings available from no longer treating any discretionary care category veterans defined by inpatient eligibility would be relatively very small. The data VA cites are apparently based on unverified information provided by veterans at the time of application. As discussed in this report, many veterans underreport their income to VA to qualify for free care. VA expects to recover about $30.5 million in copayments in fiscal year 1996 through its recently established income verification program. VHA provided additional comments in an attachment to its May 10, 1996, letter. 
Those comments are addressed in appendix II, and changes have been made in the body of the report as appropriate in response to the additional comments. We are sending copies of this report to the Chairmen and Ranking Minority Members, Subcommittee on VA, HUD, and Independent Agencies, House Committee on Appropriations; the House and Senate Committees on Veterans’ Affairs; the Secretary of Veterans Affairs; the Director, Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. This report was prepared under the direction of Jim Linz and Paul Reynolds, Assistant Directors, Health Care Delivery and Quality Issues. Please call Mr. Linz at (202) 512-7110 or Mr. Reynolds at (202) 512-7109 if you or your staff have any questions. Other evaluators who made contributions to this report include Katherine Iritani, Linda Bade, and Walt Gembacz. VHA’s additional comments noted on the following pages are copied from the enclosure that accompanied VHA’s May 10, 1996, letter to us. References to page numbers in our draft report have been changed to refer to the appropriate page numbers in our final report. Each VHA comment is followed by our evaluation. [This comment responds to GAO’s reporting on page 2 that facilities receive scant pressure to effect efficiencies but do so when they want to implement new services or expand existing ones.] Headquarters normally makes a commitment to support a facility’s budget before the fiscal year begins. However, 1996 is an exception in that the initial allocation of resources was delayed due to uncertainty about the outcome of congressional action for the fiscal year. This budget provides incentives for the facility to figure out how to operate during the year at a lower per-unit-of-service cost.
Any savings that the facility can make during the year after meeting savings targets can be put back into either enhancing the level of service in specific areas or into expanding services. In addition to a prospective budget, the Central Office, for the past three years, established facility budgets using per-capita prices for five different risk groups. While some of these groups (e.g., the extended care group) include bed-service care, the largest risk group, basic care, has no inpatient/outpatient designation. In this risk group, a facility receives budget credit based solely on the number of patients that it will care for times a single average price. A significant amount of information is provided to facilities and VISNs on their relative cost, casemix, and productivity. This peer comparison is structured to promote the treatment of patients with the most appropriate care in the most cost-effective manner. In developing the 1997 allocation prices, VHA will be developing incentives for shifting to ambulatory care. The Resource Allocation Methodology (RAM), which was used to make adjustments to medical centers’ budgets during fiscal years 1985-1990, provided more workload credit for inpatient care. Over the years, VHA has tried to remedy the situation. In 1995, under the Resource Planning and Management (RPM) system, VHA changed the structure of the workload classification system to promote primary and ambulatory care. As VHA is preparing for the FY 1997 budget allocation, the Capitation Advisory Panel will be making recommendations to provide incentives in the resource allocation system for ambulatory surgery. As VHA develops a capitation-based resource allocation system for FY 1998, it will continue its ongoing efforts to promote incentives for ambulatory care. We believe VHA has taken these remarks out of context. We reported that, historically, VA’s central office provided few incentives for facilities to become more efficient.
Furthermore, the report goes on to say that recent changes at VA are starting to create the types of efficiency incentives that have long existed in the private sector. The remainder of this section of the report discusses the kinds of changes, such as capitation funding and establishment of performance measures, VA is making to create efficiency incentives. [This comment responds to GAO’s reporting on page 10 that many veterans leave the VA system when they become eligible for Medicare.] This is misleading because, while a VA study of inpatients only (Feitz) reveals that some VA inpatients do leave VA upon reaching age 65, many do return in the following years, especially as outpatients (Hisnanick). A large proportion (46 percent) of VA unique patients across both inpatient and outpatient care are Medicare eligible. We did not mean to imply that all veterans leave the VA system or even that those who leave the system discontinue all use of VA services. We have revised the wording in the final report to state that many veterans reduce their use of the VA system when they become eligible for Medicare. When veterans have both Medicare and VA coverage, they overwhelmingly use Medicare. In 1990, for example, almost 62 percent of Medicare-eligible veterans used Medicare but no VA services during the year; 7 percent used VA but no Medicare services; and 8 percent used a combination of both Medicare and VA services. About 24 percent did not use services under either program. While most Medicare-eligible veterans rely primarily on private-sector providers participating in Medicare for their health care needs, Medicare-eligible veterans do, as VHA points out, and as we have pointed out in previous reports, account for about half of VA’s workload. Throughout the report, you continually shift your statements from inpatient care to outpatient care without adequately differentiating them. This is at times confusing or misleading.
Changes have been made in the final report to clarify discussions of inpatient and outpatient care as appropriate. Pages 10 & 11. Your discussion seems to hinge on comments made in a VA Inspector General report (report no. 2AB-A02-059, dated March 31, 1992), regarding outpatient provisions of Public Law 100-322. VHA finds it inappropriate to base these specific findings and conclusions upon findings in the Inspector General report, which were based on one tertiary care facility. This certainly cannot be deemed to represent the system as a whole, nor can it be assumed that the same concerns identified at this location, and a satellite outpatient clinic also cited in the Inspector General report, would necessarily be found at all other VA health care facilities. The IG’s report discussed problems at two facilities—the Allen Park VA medical center and the Columbus, Ohio, outpatient clinic. The Allen Park facility was, the IG report notes, “...selected as the review site in consultation with VHA program officials because it was considered to be a typical outpatient environment in an urban tertiary care facility.” It was selected as a typical tertiary care facility because VHA had previously expressed concern that the findings at the Columbus outpatient clinic did not represent conditions at a typical tertiary care outpatient clinic. One of the recommendations in the IG’s report was that VHA conduct reviews of each facility’s outpatient workload to identify the proportion of visits properly classified as mandatory, discretionary, and ineligible using the definitions relevant to current law. VHA, however, was unwilling to conduct such reviews, which might possibly have disproved the IG’s findings or shown the problems to be isolated to a few facilities. As of May 1996, VHA still has not conducted the recommended reviews. 
Although we focused on a single IG report in our testimony, the IG found lax enforcement of eligibility provisions at many other medical centers. In addition, our recent work on VA access points found no indication that VA requires access point contractors to establish veterans’ eligibility or priority for care or that contractors were making such determinations for each new condition. “...VA physicians generally practice with little real regard for the illogical eligibility rules. Indeed, it appears to me as if these rules are more of a hassle factor than anything else—bureaucratic barriers to be circumvented in one way or another in the interest of taking care of patients. In fact, over the last 7 years VHA has provided approximately 200 million outpatient visits and about 7 million hospital admissions, but there has not been one instance where an administrator or practitioner has been reprimanded for violating the eligibility rules, despite several GAO and IG reports finding ‘varying interpretations of the statutory outpatient eligibility criteria,’ incorrect coding of mandatory visits, physicians ‘not consistently involved in required clinical examination to determine eligibility status,’ and other such things.” Nowhere in the Inspector General audit does it state that “VA incorrectly applied inpatient eligibility categories to its outpatients,” which insinuates that, administratively, VHA field facilities used inpatient eligibility criteria to determine a veteran’s eligibility for outpatient care. It is repeatedly implied that veterans were provided outpatient care under the auspices of the “obviate-the-need for hospital care” criterion. In the Inspector General’s opinion, some of these veterans did not medically fit its definition of “obviate-the-need for hospital care,” because many of these individuals were treated for chronic conditions.
VA is mixing up the IG’s two distinct findings, one of which concerns an administrative determination of the veterans’ priority for care and the other of which deals with the medical determination of whether outpatient care was needed in preparation for, as a follow-up to, or to obviate the need for hospital care. With respect to the administrative determination of veterans’ priorities for care, the IG found that VA was not reporting outpatient workload according to the mandatory and discretionary care categories established under Public Law 100-322 and was instead reporting workload on the basis of the mandatory and discretionary care categories set in 1986 by Public Law 99-272 and still applicable to hospital care. We have, however, clarified the wording in the final report to indicate that VA was incorrectly reporting workload. The Inspector General report implies that VHA may be providing outpatient care to veterans who are otherwise eligible for discretionary care, but not for the outpatient care they are receiving. However, there is disagreement as to whether or not this statement is true, in that in some cases VA provides care that clinical staff deem to be mandatory under the “obviate-the-need” criterion, but which some do not see as clearly meeting the administrative definition of mandatory as defined in Public Law 100-322. This appears to be what you are referring to when stating that VA is incorrectly applying inpatient eligibility categories, although one has nothing to do with the other. We were careful in our report to distinguish between the IG’s two major findings. First, the IG reported that VHA’s budget plans do not accurately reflect statutory definitions for outpatient eligibility according to the mandatory and discretionary care categories defined in Public Law 100-322.
Second, the IG reported that VA has not adequately defined the conditions and circumstances under which outpatient treatment may be provided to obviate the need for hospitalization. When we discuss veterans obtaining care for which they were not eligible, we are not discussing differences between being in the mandatory and discretionary care category. These categories define priorities for care, not eligibility for care. What we are referring to is providing veterans eligible for only hospital-related care services that are not needed in preparation for, as a follow-up to, or to obviate the need for hospital care. Policy directives are in place to guide administrative staff on the eligibility provisions of Public Law 100-322. These directives spell out mandatory versus discretionary outpatient medical care from an administrative perspective. The Inspector General, and by extension GAO, through its use of the Inspector General report, are concerned with the fact that “obviate-the-need for hospital care” is not clearly defined. This leads, in their opinion, to providing inappropriate care to veterans, who are determined eligible, based on their need for care to obviate the need for hospital care. Eligibility under this criterion is a medical decision and not an administrative decision. We agree that interpreting the obviate-the-need criterion is a medical decision. That fact does not, in our opinion, preclude issuance of guidelines intended to bring greater consistency to those medical decisions or independent reviews to determine compliance with those guidelines. Medical decisions are questioned every day. For example, a primary purpose of utilization review is to examine the reasonableness of a physician’s medical decisions. Similarly, a preadmission certification program uses an independent party to evaluate the reasonableness of the medical decisions physicians make to admit their patients to hospitals. 
Similarly, practice guidelines are frequently issued setting expectations for how physicians will practice. For example, at our urging, VA issued guidelines defining what constitutes a complete physical examination for women veterans. Those guidelines set expectations that VA physicians will provide women veterans complete cancer screening examinations at recommended intervals. Similarly, as noted elsewhere in its comments, VA recently required its Veterans Integrated Service Network (VISN) directors to establish formularies of medications to guide VA physicians toward prescribing certain drugs. Page 13. You indicate that because of separate planning processes, there might be double counting, in that the projected new workload may well be associated with the activations. Budgeting for activations requirements in 1997 is modified as the medical care request does not include “line item” requests for the activation of specific projects. The networks will activate projects from within the level of resources provided in their total Medical Care 1997 budget allocations. We included the updated information in the final report. Page 13. Although VA facilities received separate reimbursement for the workload generated through those sharing agreements, the workload was nevertheless included in VA’s justification of its budget request. In 1994, VA provided care to about 45,000 unique sharing agreement patients. No appropriated funds are requested for this workload since it is supported by reimbursements from DOD and other sharing partners. Even though our base workload counts do include sharing, the levels are small and the inclusion of sharing makes no material difference in our workload presentations. The Resource Planning and Management (RPM) model excludes data for sharing patients in developing changes in both unique patients and cost per unique patient. The actual patient counts for the last actual year are straightlined in all RPM projections. 
In addition, some of the DOD sharing agreement earnings are for non-patient workload such as pounds of laundry processed, etc. We have expanded the discussion in our final report to include the information VA provided. Because the workload data VA reported to the Congress included services provided under sharing agreements, VA was, in the past, receiving appropriated funds to pay for services paid for through sharing agreements. Page 15. VA agrees that there are potential opportunities for savings which will allow VA to operate more effectively and efficiently. We are actively pursuing these opportunities, including most of those initiatives cited in the report. However, it is important to qualify your estimate of billions in savings. It is impossible to project savings for some of the cited initiatives. To do so establishes peg points for reducing VA resources without any real justification. We agree that estimating precise savings from all of the initiatives included in this report is impossible. That is why we conservatively estimated that VA could save billions through management improvements over a 7-year period. To the extent possible, we have cited estimates developed by VA program officials and the VA IG. We do not agree, however, that VA should not establish “peg points” for reducing VA’s budget request on the basis of planned savings and then monitor management initiatives to determine whether the savings were realized. Effectively determining future resource needs is impossible without tracking savings. In its fiscal year 1997 budget request, VA seeks an increase over the fiscal year 1996 appropriation to offset inflation. The actual increase, however, is really much higher because no offsetting decrease exists in the request to compensate for any management savings likely to occur during the year, such as efficiency improvements expected to occur through full implementation of VISNs. 
Assuming a 5 percent compound annual inflation requirement, VA medical care would need to be 40 percent more efficient in order to operate at Congress’ straightlined 1995 level of $16.2 billion through 2002. It is highly unlikely that VA could reach these dramatic savings without severely impacting the level of health care currently provided to veterans. These additional savings would be over and above the $10.5 billion in savings that have resulted from the VA medical care budget increasing at a rate less than the Medical Consumer Price Index over the period from 1980 through 1995. VHA’s comparison of increases in its budget with increases in the medical consumer price index is inappropriate. VA’s inpatient hospital workload—which accounts for over one-half of VA’s medical care budget—declined dramatically between 1980 and 1995, while less costly outpatient workload increased just as dramatically. Comparing the increase in the overall budget with the consumer price index is inappropriate without considering changes in workload over the time period. A more appropriate comparison would be to compare the increase in VA’s average cost of hospital, nursing home, and outpatient care with growth in the consumer price index. For example, while VA’s medical care budget increased by about 170 percent between 1980 and 1995 (from $6.0 billion to $16.1 billion), the cost of a day of care in a VA hospital increased by over 305 percent (from $154 to $625). Page 15. You often point out savings expected from reduced acute inpatient care, but seem to ignore large increases needed in outpatient and other non-institutional care programs for current levels of eligible veterans and the greater use of VA services by veterans despite their population reductions. We agree that savings from shifting nonacute inpatient care to other settings will be partially offset by increased costs under other programs.
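The percentages traded back and forth in this exchange follow from simple compounding. The sketch below (our illustration, using the dollar figures quoted above) reproduces VHA's roughly 40 percent efficiency figure and our budget and cost-per-day comparisons:

```python
# Arithmetic behind the figures quoted above.
# VHA's claim: 5 percent inflation compounded over the 7 years from 1995
# through 2002 implies costs grow by roughly 40 percent, the efficiency
# gain needed to operate at a straightlined $16.2 billion.
growth = 1.05 ** 7                              # compound growth factor
print(f"cost growth: {(growth - 1) * 100:.0f}%")  # about 41 percent

# Our comparison, FY 1980-1995: the budget grew far more slowly than the
# cost of a day of hospital care, because workload shifted to cheaper
# outpatient settings.
budget_increase = (16.1 / 6.0 - 1) * 100        # $6.0B to $16.1B
per_day_increase = (625 / 154 - 1) * 100        # $154 to $625 per day
print(f"budget: +{budget_increase:.0f}%, "
      f"cost per day: +{per_day_increase:.0f}%")  # about +168% and +306%
```

The computed values round to the figures in the text ("about 170 percent" and "over 305 percent"), and they illustrate why comparing the overall budget to a price index is misleading when the underlying workload mix is changing.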
We do not agree, however, that shifting nonacute care to outpatient settings will result in large increases in outpatient demand. Veterans who use VA for inpatient care already receive significant amounts of their care as outpatients. For example, in fiscal year 1995, veterans with no service-connected conditions who were hospitalized in a VA facility during the year received, on average, over 15 outpatient visits from VA clinics. We discuss the increased demand for outpatient care on page 5 of the report. Page 15. Since 1979, VA medical facilities have had the option to dispense multi-month quantities of locally determined medications to eligible veteran patients. For a number of reasons, including budgetary limitations, lack of automation to facilitate implementation, and patient care concerns, only a small number of VA medical facilities implemented such programs in the 1980s. Subsequent to an expressed interest in this program by the Congress and GAO in the early 1990s, additional guidance was distributed to all facilities regarding implementation of the program. Again, because of budgetary and patient care concerns (e.g., psychiatric patients), some facilities have been slow to adopt the program. Despite slow implementation by some facilities, there has been a dramatic increase in the use of this program in recent years. For example, in FY 1994, 9 million fewer prescriptions were dispensed by VA pharmacies due to multi-month dispensing. In FY 1995, this figure increased to 15 million fewer prescriptions, and VHA anticipates further efficiencies in FY 1996. We believe we have implemented this program in a prudent manner, balancing quality of care issues and budgeting issues. In addition, VHA issued analysis and guidance to medical facilities in FYs 1994 through 1996, and will continue to monitor the impact and implementation of the program.
Our report, in reflecting the estimated savings in fiscal year 1995, recognizes the progress VA has made since issuance of our 1992 report. It seems to us, however, that budgetary concerns, rather than slowing implementation of more cost-effective drug-dispensing methods, would encourage quicker implementation. This is particularly true because essentially no start-up costs are involved in going from a 30-day prescription to a 90-day prescription. Access points are not alternatives to VA outpatient clinics. They include VA-operated outpatient clinics as well as contractual or sharing agreements. The term “access points” was used to reorient managers’ attitudes toward outpatient care; i.e., not as pre- or post-hospital care but as a veteran’s principal contact with the system. The cost of outpatient treatment would generally be lower than the cost of inappropriate hospital days. The cost of privately provided outpatient care is not necessarily lower than the cost of VA-provided outpatient care. One would expect a facility to arrange for private provision of only those services that can be obtained at lower cost. We have clarified the wording of this part of the final report. Our recent review of access points suggests that VA medical centers that have established access points have generally found that contracting for services is less expensive when an access point serves a relatively small number of veterans. As the number of veterans served by an access point increases, the decision on whether to contract for services or provide them directly becomes more difficult. The issue of new users attracted to VA by the opening of new clinics is exaggerated. The purpose in establishing access points is to improve access to VA services by veterans, not to expand the system. We acknowledge that establishing access points may result in new users to the system.
The net effect does not appear substantial in that between 20 and 30 percent of individuals treated by VA in each year are new to the VA system. We believe the suggestion that making health care services more accessible does not substantially affect veterans’ use of VA services is naive. Elsewhere in its comments, VHA presented data showing that as the number of VA outpatient clinics increased 72 percent between 1980 and 1995, the number of outpatient visits increased by 53 percent. Although VA may be correct in stating that 20 to 30 percent of the veterans who use VA services each year did not use VA services the year before, we used a more conservative approach in estimating new users. In our analyses, we considered veterans to be “new” users only if they had not used VA services within the preceding 3 consecutive years. Veterans who had used VA within the 3-year period—about 4 to 5 million veterans nationwide—were considered current users. In other words, we considered as new users only those veterans attracted to the access points who had not sought VA care for over 3 years. Page 17. Your comment that VA could realize potentially $100 million in savings through use of a national formulary is most likely overstated. If the intention is that overall pharmaceutical expenditures be reduced by $100 million through implementation of a national formulary, this is misleading. In all probability, savings realized through volume-committed contracts will be offset by new therapies. For example, this year new agents for the treatment of HIV/AIDS have been granted accelerated approval by the Food and Drug Administration. It is estimated that VA expenditures will increase by over $50 million annually for the new HIV/AIDS therapies. It is also very difficult to predict how much lower VA can drive drug prices, keeping in mind that Public Law 102-585 established drug pricing for VA that is much more favorable than for other managed care organizations.
Because VHA could not provide an estimate of potential savings from establishing a national formulary, we included a “ballpark” estimate of what potential savings could be if the formulary allowed VA to save 10 percent of its pharmaceutical costs. We recognize that the estimate has no precision but note that VHA’s directive on establishing VISN formularies notes that the advantages of such formularies are decreased inventory, increased efficiency, and lower pharmaceutical prices. Although savings from establishing VISN formularies or a national formulary may be used to offset increased costs for new therapies or for other uses, we nevertheless believe that the savings, and how they are used, should be accounted for in VA budget submissions. VA has taken a number of additional actions to improve management of pharmaceuticals in the last six years. VISN network formularies have been established which will evolve into a national formulary, by approximately April 1997. More important is the approval by the Under Secretary for Health of the Pharmacy Benefit Management (PBM) function as part of the restructured VHA. Basically, the PBM will address (1) contracting for pharmaceuticals to ensure the most efficient and effective contract processes; (2) the most efficient and effective distribution systems for pharmaceuticals (e.g., consolidated mail outpatient pharmacies); and, (3) the appropriate utilization of pharmaceuticals through the issuance of evidence-based disease management protocols, treatment protocols and drug use protocols. VHA is also testing commercial software to compare pharmaceutical utilization against these established protocols and to measure outcomes achieved from drug therapy. In short, the goal of the PBM is to reduce overall health care costs through appropriate use of pharmaceuticals, not reduce the cost of individual pharmaceuticals. 
The final report has been revised to indicate that VHA has taken other actions to improve management of pharmaceuticals. Page 17. VHA had strategically planned to consolidate mail prescription processing through automated technology well before 1992. In fact, through research and development at the VA Medical Center Nashville, TN beginning in 1990, VHA essentially developed the automated prescription dispensing technology that is on the commercial market today. GAO’s 1992 report was not the determining factor prompting VA’s decision to implement consolidated mail outpatient pharmacies or the timing of their implementation. Timing of the implementation was actually influenced by the development of suitable technology associated with efficient human resources management. Because none of the existing mail prescription facilities is operating at full capacity, it is too early for either VA or GAO to estimate annual cost avoidance. Experience to date suggests that substantial savings will accrue. The amount of savings is also difficult to estimate because technology is continually evolving. Our report does not indicate that VA’s decision to establish consolidated mail service pharmacies was in response to our January 1992 report. Our report did, however, recommend that VA require pharmacies to maximize the use of 90-day supplies when dispensing maintenance drugs. It also contained recommendations on the location and operation of the bulk processing centers. We obtained our estimate of savings from VA pharmacy officials. Page 19. The 1991 study is based on FY 1986 data, now ten years old. The 1993 study is based on 1989 data. In both studies, trained reviewers were instructed to assume all levels of care were available at each VA medical center in the determination of the appropriateness of inpatient services. In the 1991 study, social factors were only considered if documented in the patient’s chart.
In the 1993 study, reviewers were explicitly instructed not to consider social factors in the determination of the appropriateness of inpatient care. This would, of course, have a bearing on the conclusions drawn in the current GAO report. We reviewed the two studies because VA, the National Performance Review, and the Independent Budget cite them as support for their views that eligibility reform would allow VA to shift 20 to 43 percent of nonacute admissions to outpatient settings. We agree with VA that the assessments of the “appropriateness of inpatient care” under both studies were based on application of medical necessity criteria, not on whether extenuating circumstances, such as nonavailability of an ambulatory surgery program, long travel distance, and eligibility restrictions, might lead to nonacute admissions. A secondary goal of the studies, however, was to provide some insights into the reasons for nonacute care. Our comments are based on the reasons for nonacute admissions identified by the researchers. For example, the 1993 study notes that “hospital reviewers were asked to prioritize up to three reasons for each nonacute admission and day of care.” The reviewers, in identifying reasons for nonacute admissions, looked both at the availability of other care settings and social factors. For example, the study notes that “[l]ack of an ambulatory care alternative was the most important reason for nonacute admissions to surgery.” You fault VA for nonacute inpatient admissions. Yet in order to shift much of this nonacute care of mandatory VA inpatients to cost-effective outpatient alternatives when outpatient eligibility is discretionary or limited, VA needs the outpatient eligibility reform tool. VA gets blamed for both the problem and the solution when much of the problem stems from the complexity of, or lack of, outpatient eligibility needed to achieve the best patient and system outcomes.
As discussed in our March 20, 1996, testimony before the Senate Committee on Veterans’ Affairs, we see little basis for linking nonacute admissions to VA hospitals to eligibility restrictions. Rather, nonacute admissions are most often caused by the VA system’s inefficiencies, VA’s resource allocation systems that have historically rewarded VA medical centers for choosing inpatient over outpatient care, and the system’s slowness in developing ambulatory care facilities. VA continues to emphasize expanding hospital capacity over outpatient capacity in its fiscal year 1997 budget submission. VA proposes to spend over $383 million, including about $75 million in fiscal year 1997, to build major hospital capacity in two markets that already have a surplus of private-sector beds. “Practitioner reasons such as conservative practice for admissions and delays in discharge planning for nonacute days of care accounted for 32% of nonacute admissions and 43% of nonacute days of care for medical service. Lack of availability of an ambulatory program for surgery and invasive medical procedures explained 36% of nonacute admissions to surgery and 18% to medicine. Other important reasons for nonacute admissions included social and environmental reasons such as homelessness, and long travel distances to the hospital. Administrative reasons included admissions to permit placement in nursing homes, payment for travel or for disability evaluations.” For the following reasons, we believe the above quotation supports our position that the study did not attribute most nonacute admissions to eligibility problems.
“Conservative practice was,” the study notes, “generally interpreted by reviewers to mean both that no other social, VA system, or regulation reason was identifiable, and the decision of the practitioner to admit the patient to the acute hospital service was an example of conservative medical practice.” “Delays in discharge planning” would contribute to nonacute days of care, not to nonacute admissions. Nor were those nonacute days of care the result of eligibility restrictions. Under current law, all veterans are eligible for posthospital outpatient treatment. The quotation cites the lack of an ambulatory “program” for surgery and invasive medical procedures, not the lack of patient eligibility for such services as the cause of nonacute admissions. Social and environmental reasons such as homelessness and travel distance are unrelated to eligibility restrictions. Two of the three administrative reasons cited (admissions to pay travel reimbursement and admissions to perform disability examinations) are not related to eligibility for health care services. The requirement that veterans with no service-connected disabilities be admitted to VA hospitals before they can be placed in community nursing homes is an eligibility-related limitation. The study found that this limitation accounted for 2.5 percent of the nonacute admissions to acute medical wards. [This comment responds to GAO’s reporting that the Smith study recommended only minor changes in VA eligibility provisions, specifically, that VA establish a systemwide utilization program and that VA has not established such a review function.] The final report of the Smith study, 1993, did not make “minor” recommendations related to outpatient eligibility as you suggest. Of the three recommendations, which follow, two are related to limited outpatient eligibility and its impact upon the development and availability of such care: A. 
VA should establish a system-wide program for using the ISD criteria for utilization review with emphasis on identifying the local and systemic reasons for nonacute admissions and days of care and for monitoring the effectiveness of changes in policy. B. VA physicians need to be encouraged to make greater use of ambulatory care alternatives and to be more effective and timely in planning for patient discharges. C. VA needs to facilitate the shift of care from the inpatient to the outpatient setting. This should include incentives in the reimbursement methodology for providing ambulatory care, changes in eligibility regulations that promote rather than prohibit ambulatory care, prioritization of construction funds and seed funds for new programs to support the shift to ambulatory care. VA does not need eligibility reform to implement either of the first two recommendations. VHA agreed with the recommendation made in our report that it establish an independent preadmission certification program to reduce inappropriate admissions to VA hospitals. In addition, VA has, through its emphasis on primary care, encouraged the shift to ambulatory care. Nor does VA need eligibility reform to change its reimbursement methodology to promote ambulatory care (such a change is under way through RPM) or to prioritize construction funds to facilitate the shift toward ambulatory care (VA continues to seek construction funds primarily for hospital construction rather than ambulatory care programs). Concerning the recommendation to change eligibility “regulations,” the detailed section of the Smith report recommended that legislation be enacted to (1) allow veterans with nonservice-connected disabilities to be placed in VA-supported community nursing homes without first being admitted to a VA hospital and (2) remove limitations on eligibility for outpatient compared with inpatient services such as dental services and provision of needed prosthetic devices. 
The eligibility reform proposal developed by VA would allow direct admission of nonservice-connected veterans to community nursing homes and the provision of prosthetic devices on an outpatient basis for treating nonservice-connected conditions. The VA proposal would not remove the limitations on provision of dental services on an outpatient basis. Trying to link the studies discussed here to broader VA eligibility reform is inappropriate because the studies did not contain the types of data needed to make such a link. In other words, the studies did not determine whether the patients inappropriately admitted to VA hospitals had service-connected or nonservice-connected disabilities, the degree of any service-connected disability, whether they were in the mandatory or discretionary care category for outpatient care, or whether they would have been eligible to receive the services they needed on an outpatient basis. Had such information been included in the studies, it would be possible to determine whether a higher incidence of nonacute admissions occurred for veterans eligible for only hospital-related outpatient services than for those eligible for comprehensive outpatient services. “The most important reason for nonacute admissions to surgical services in previous VA studies and in this study was the lack of an available ambulatory care alternative. This was also an important reason for nonacute admissions to medical services. These findings support the need to facilitate the shift of care from an inpatient to an outpatient setting.” Elsewhere in its comments, VHA maintains that the reviewers conducting the study were expressly told to assume that all care settings were available. It seems to us to be inconsistent to now cite the study’s finding that the most important reason for nonacute admissions to surgical services was the lack of an ambulatory care alternative. 
We agree, however, that VA’s slowness in developing ambulatory care capabilities is a primary reason for nonacute admissions to VA hospitals. We applaud VHA’s recent efforts to expand such capabilities. “The eligibility regulations need to be adjusted to encourage outpatient rather than inpatient care. Legislation will be needed to allow contract nursing homes to be reimbursed by VA for patients admitted directly from outpatient status to nursing home care. Limitations need to be removed on eligibility for outpatient as compared to inpatient services such as dental services and provision of needed prosthetic devices.” We cited this recommendation in our report, and we believe we correctly characterize it as suggesting only minor changes in VA eligibility provisions. Rather than recommending a significant expansion of VA eligibility, it recommends three specific changes affecting a relatively small portion of VA benefits—nursing home care, dental care, and prosthetics. Contrary to the statement in the report, VHA has had a systemwide utilization review (UR) program since October 1993. In planning for this program, VHA’s Office of Quality Management initiated a utilization management (UM) pilot study in 1992. The UM pilot study had a twofold purpose: first, to provide guidance for development of a national policy and data base to assist managers at all levels in VHA to assess the appropriateness and efficiency of resource utilization; and second, to determine the reliability and validity of an appropriateness measure that facilities could use to determine the extent and causes of these allegedly inappropriate admissions and days of care. The UM pilot study concluded in November 1992. A UR national training program was conducted in the summer of 1993, prior to implementation in October 1993. In addition to the internal UR program, VHA has also actively pursued the potential of external utilization review for national data collection to address system issues.
We have clarified the wording in the final report to indicate that VA does not have a utilization review program focusing on medical necessity. VA’s current utilization review program focuses almost exclusively on quality of care. VHA is currently assessing the use of pre-admission reviews systemwide as a method to encourage the most cost-effective, therapeutically appropriate care setting. A number of facilities have adopted some form of pre-admission reviews already, and their models are being reviewed. In addition, VHA is implementing a performance measurement and monitoring system which contains a number of measures for which all network directors and other leaders will be held accountable. Several of these measures, such as percent of ambulatory surgery done at each facility and implementation of network-based utilization review policies and programs, will move the VA system towards efficient allocation and utilization of resources. We have added a discussion of VHA’s current efforts to the final report. Page 25. With VHA restructuring, resources are allocated to the network director. VISN directors now have both the responsibility and incentive to examine cost variations among facilities within their network. Network directors are at the cutting edge, assessing the current configuration of VA health services and costs in order to make decisions on redirecting resources to achieve a more efficient and patient-centered health care system. We agree that the VISN restructuring and the planned move to capitation funding should lead to an increased emphasis on efficiency, as discussed in the final report. The National Cost Containment Center (NCCC) was established with the goal of analyzing costs across the system to identify opportunities for improvement. It has published numerous analyses. In addition, VHA clinical technical advisory groups (e.g., the Chronic Mental Illness group) also analyze costs on a programmatic level.
We recognize that VA has taken some steps, through the NCCC and Technical Advisory Groups, to analyze particular cost variations across the system to identify potential efficiencies. These efforts are a step in the right direction, but VA needs more comprehensive evaluations of unit cost variations, their link to facility performance, and the need for changes to supporting data systems to improve comparisons. Such evaluations and improved data systems will be necessary to ensure a successful transition to a capitation system and provide for the needed accountability in the system for workload, efficiency, and other performance targets.
VA Health Care: Approaches for Developing Budget-Neutral Eligibility Reform (GAO/T-HEHS-96-107, Mar. 20, 1996).
VA Health Care: Opportunities to Increase Efficiency and Reduce Resource Needs (GAO/T-HEHS-96-99, Mar. 8, 1996).
VA Health Care: Issues Affecting Eligibility Reform (GAO/T-HEHS-95-213, July 19, 1995).
VA Health Care: Challenges and Options for the Future (GAO/T-HEHS-95-147, May 9, 1995).
VA Health Care: Retargeting Needed to Better Meet Veterans’ Changing Needs (GAO/HEHS-95-39, Apr. 21, 1995).
VA Health Care: Barriers to VA Managed Care (GAO/HEHS-95-84R, Apr. 20, 1995).
Veterans’ Health Care: Veterans’ Perceptions of VA Services and VA’s Role in Health Reform (GAO/HEHS-95-14, Dec. 23, 1994).
Veterans’ Health Care: Use of VA Services by Medicare-Eligible Veterans (GAO/HEHS-95-13, Oct. 24, 1994).
Veterans’ Health Care: Implications of Other Countries’ Reforms for the United States (GAO/HEHS-94-210BR, Sept. 27, 1994).
Veterans’ Health Care: Efforts to Make VA Competitive May Create Significant Risks (GAO/T-HEHS-94-197, June 29, 1994).
Veterans’ Health Care: Most Care Provided Through Non-VA Programs (GAO/HEHS-94-104BR, Apr. 25, 1994).
VA Health Care: A Profile of Veterans Using VA Medical Centers in 1991 (GAO/HEHS-94-113FS, Mar. 29, 1994).
VA Health Care: Restructuring Ambulatory Care System Would Improve Service to Veterans (GAO/HRD-94-4, Oct. 15, 1993).
VA Health Care: Comparison of VA Benefits With Other Public and Private Programs (GAO/HRD-93-94, July 29, 1993).
Pursuant to a congressional request, GAO provided information on the Department of Veterans Affairs’ (VA) health care system, focusing on ways that VA could: (1) operate more efficiently; (2) reduce the resources needed to meet veterans’ health care needs; and (3) reorganize its health care system and create efficiency incentives.
GAO found that: (1) the VA health care system should be able to respond to deficit reduction within the next seven years; (2) VA has overstated the level of resources that it would need to satisfy veterans’ health care requirements in the next seven to ten years; (3) VA did not adequately consider the impact of the declining veteran population on the future demand for inpatient hospital care; (4) a significant portion of VA resources is used to provide services to veterans in the discretionary care category; (5) VA could significantly reduce its operating costs over the next seven years by completing actions on a wide range of efficiency improvements; (6) the success of these efforts depends on how VA health care facilities spend appropriated funds; (7) VA managers often find ways to operate more efficiently when they need resources to implement new services or expand existing services; (8) VA is holding network directors accountable for the Veterans Integrated Service Network’s (VISN) performance; (9) the Under Secretary for Health distributed criteria to help VISN directors develop efficiency initiatives and gave VISN and facility directors authority to realign VA medical centers to achieve efficiencies; (10) VA plans to develop a capitation funding process that provides greater efficiency incentives for VA facilities; and (11) VA must implement clear mechanisms and verify management data to achieve its workload, efficiency, and other performance targets.
Federal agencies and our nation’s critical infrastructures—such as energy, transportation systems, communications, and financial services—are dependent on computerized (cyber) information systems and electronic data to carry out operations and to process, maintain, and report essential information. Federal and nonfederal operations are largely supported by computer systems and electronic data, and organizations would find it difficult, if not impossible, to carry out their missions, deliver services to the public, and account for their resources without these cyber assets. Information security is, thus, especially important for federal and nonfederal entities to ensure the confidentiality, integrity, and availability of their systems and data. Conversely, ineffective information security controls can result in significant risk to a broad array of operations and assets, as the following examples illustrate:
Computer resources could be used for unauthorized purposes or to launch attacks on other computer systems.
Sensitive information, such as personally identifiable information, intellectual property, and proprietary business information, could be inappropriately disclosed, browsed, or copied for purposes of identity theft, espionage, or other crimes.
Critical operations, such as those supporting critical infrastructure, national defense, and emergency services, could be disrupted.
Data could be added, modified, or deleted for purposes of fraud, subterfuge, or disruption.
Threats to systems are evolving and growing. Cyber threats can be unintentional or intentional. Unintentional or nonadversarial threat sources include failures in equipment or software due to aging, resource depletion, or other circumstances that exceed expected operating parameters, as well as errors made by end users. They also include natural disasters and failures of critical infrastructure on which the organization depends, but that are outside of the control of the organization.
Intentional or adversarial threats include individuals, groups, entities, or nations that seek to leverage the organization’s dependence on cyber resources (i.e., information in electronic form, information and communications technologies, and the communications and information-handling capabilities provided by those technologies). Threats can come from a wide array of sources, including corrupt employees, criminal groups, and terrorists. These threat adversaries vary in terms of their capabilities, their willingness to act, and their motives, which can include seeking monetary gain or seeking an economic, political, or military advantage. Cyber threat adversaries make use of various techniques, tactics, and practices, or exploits, to adversely affect an organization’s computers, software, or networks, or to intercept or steal valuable or sensitive information. These exploits are carried out through various conduits, including websites, e-mails, wireless and cellular communications, Internet protocols, portable media, and social media. Further, adversaries can leverage computer software programs as a means by which to deliver a threat by embedding exploits within software files that can be activated when a user opens a file within its corresponding program. Reports of successfully executed cyber exploits illustrate the debilitating effects they can have on the nation’s security and economy, and on public health and safety. Federal and nonfederal entities have experienced security breaches in their networks, potentially allowing sensitive information to be compromised, and systems, operations, and services to be disrupted. The examples that follow illustrate that a broad array of personal information and critical infrastructures are at risk:
In October 2016, the International Atomic Energy Agency reported that a cyber attack had caused a disruption to the operations of a power plant.
The agency did not disclose details about the information gathered or what specific operations were disrupted.
In September 2016, Yahoo Incorporated, a multinational company, confirmed that 500 million user accounts were compromised. Yahoo company officials reported that the account information may have included names, e-mail addresses, telephone numbers, and dates of birth.
In August 2015, the Internal Revenue Service reported that approximately 390,000 tax accounts were potentially affected by unauthorized third parties gaining access to taxpayer information from the agency’s “Get Transcript” application. According to testimony from the Commissioner of Internal Revenue in June 2015, criminals had used taxpayer-specific data acquired from nonagency sources to gain unauthorized access to information, although at that time, the commissioner reported that approximately 100,000 tax accounts had been affected. The data included Social Security information, dates of birth, and street addresses.
In July 2015, the Office of Personnel Management reported that an intrusion into its systems had compromised the background investigation files of 21.5 million individuals. This was in addition to a separate but related incident that had affected the personnel records of about 4 million current and former federal employees, which the agency announced in June 2015.
In April 2015, the Department of Veterans Affairs’ Office of Inspector General reported that two contractors had improperly accessed the agency’s network from foreign countries using personally owned equipment.
In 2009, DHS developed the National Cybersecurity and Communications Integration Center (NCCIC) to provide a central place for federal and private-sector organizations to coordinate efforts to address cyber threats and respond to cyber attacks. The center’s stated mission is to reduce the likelihood and severity of incidents that may significantly compromise the security and resilience of the nation’s critical information technology and communications networks.
The National Cybersecurity Protection Act of 2014 statutorily established the center’s role within DHS to act as a federal civilian interface for sharing information related to cybersecurity risks, incidents, analysis, and warnings with federal and nonfederal entities, and to provide shared situational awareness to enable real-time actions to address cybersecurity risks and incidents to federal and nonfederal entities. The Cybersecurity Act of 2015 added roles for NCCIC and required DHS to create and issue several related policies and procedures. Table 1 describes the 11 cybersecurity functions that NCCIC is to carry out, as prescribed by the National Cybersecurity Protection Act of 2014 and the Cybersecurity Act of 2015. Further, the National Cybersecurity Protection Act of 2014 states that the center shall ensure that it carries out these functions, to the extent practicable, in accordance with the following nine principles: 1. Ensure that timely, actionable, and relevant information related to risks, incidents, and analysis is shared. 2. Ensure that when appropriate, information related to risks, incidents, and analysis is integrated with other information and tailored to a sector. 3. Ensure that the activities are prioritized and conducted based on the level of risk. 4. Ensure that industry sector-specific, academic, and national laboratory expertise is sought and receives appropriate consideration. 5. Ensure that continuous, collaborative, and inclusive coordination occurs across sectors, with sector coordination councils, information sharing and analysis organizations, and other nonfederal partners. 6. Ensure that, as appropriate, the center works to develop and use mechanisms for sharing information related to cybersecurity risks and incidents that are technology-neutral, interoperable, real-time, cost-effective, and resilient. 7.
Ensure that the center works with other agencies to reduce unnecessarily duplicative sharing of information related to cybersecurity risks and incidents. 8. Ensure that information related to cybersecurity risks and incidents is appropriately safeguarded against unauthorized access. 9. Ensure that activities conducted comply with all policies, regulations, and laws that protect the privacy and civil liberties of United States persons. To perform its functions, NCCIC is organized into four branches: United States Computer Emergency Readiness Team (US-CERT) is responsible for leading efforts to improve the nation’s cybersecurity posture, coordinate cyber information sharing, and proactively manage cyber risks to the government and private sector. Industrial Control Systems (ICS) Cyber Emergency Response Team (ICS-CERT) is responsible for taking steps to reduce risk to the nation’s critical infrastructure by strengthening control systems security and resilience through public-private partnerships. In executing its mission, ICS-CERT is to serve its partners as the preeminent federal government resource for industrial control systems security. National Coordinating Center for Communications (NCC) is responsible for helping government, private industry, and international partners to share and analyze threat information about, assess the operating status of, and understand the risk posture of the communications infrastructure. In addition, it is to coordinate efforts to prepare for, prevent, protect against, mitigate, respond to, and recover from significant communications disruptions. NCCIC Operations & Integration (NO&I) is responsible for engaging in planning, coordination, and integration capabilities to synchronize analysis, information sharing, and incident response efforts across the center’s branches and activities. Figure 1 shows the organizational structure of NCCIC. 
According to DHS policy, the cyber situational awareness, incident response, and management efforts of NCCIC’s four branches are to occur on a 24-hour-a-day, 7-day-a-week basis at an integrated operations center known as the Watch Floor, as shown in figure 2. According to DHS policy, the center is to collaborate with federal departments and agencies most responsible for securing the government’s cyber and communications systems. DHS policy and law also state that it is to engage with critical infrastructure owners and operators; other private sector entities; state, local, tribal, and territorial governments; and international partners. These federal and nonfederal entities, as well as individual citizens, represent NCCIC’s customers that are recipients of its products, such as documents about threats, malware or digital media analyses, or software vulnerabilities; and services, which can include support for the testing of emergency communications, incident response, or vulnerability assessments. According to DHS policy, NCCIC is to coordinate with its federal partners that focus on securing the federal information infrastructure, with the intent of integrating cyber center information to provide cross-domain situational awareness, analysis, and reporting on the composite state of U.S. cyber networks and communication infrastructure. In addition, the Comprehensive National Cybersecurity Initiative requires the center to foster partnerships with other key federal cybersecurity and communications centers in order to collaborate and improve cybersecurity and communications infrastructure across the federal government. The federal centers include: Defense Cyber Crime Center sets standards for digital evidence processing, analysis, and diagnostics for Department of Defense investigations that require computer forensic support to detect, enhance, or recover digital media, including audio and video.
Intelligence Community Security Coordination Center provides attack sensing and warning capabilities to characterize cyber threats and attributions of attacks, and anticipates future incidents. National Cyber Investigative Joint Task Force, organized by the Federal Bureau of Investigation, serves as a focal point for all government agencies to coordinate, integrate, and share information related to domestic cyber threat investigations. National Security Agency/Central Security Service Threat Operations Center establishes real-time network awareness and threat characterization capabilities to forecast, alert, and attribute malicious activity. United States Cyber Command Joint Operations Center establishes and maintains situational awareness and directs the operations and defense of the “.mil” networks. National Infrastructure Coordinating Center is the coordination and information sharing operations center that maintains situational awareness of the nation’s critical infrastructure for the federal government. DHS policy also states that each of these federal cybersecurity and communications centers is to provide complementary capabilities and resources that collectively form the threat characterization, vulnerability analysis, information sharing, detection and response, investigation, and defense of civilian federal cyber networks and communication infrastructures. NCCIC works with the private sector, which owns and operates most of the nation’s critical infrastructure, such as banking and financial institutions, telecommunications networks, and energy production and transmission facilities. Infrastructure owners and operators are to integrate (both physically and virtually) into the center’s operations so that, during an incident, information can be aggregated and communicated between government and appropriate private sector partners in an efficient manner.
As of August 2016, 174 private sector companies had as-needed access to NCCIC through their participation in the Cyber Information Sharing and Collaboration Program (CISCP). As part of this effort, NCCIC is to coordinate on an ongoing basis with various private sector partners, including information sharing and analysis centers (ISAC) and technology vendors. ISACs have been formed for a number of sectors. According to the National Council of ISACs, these include (1) automotive; (2) aviation; (3) defense industrial base; (4) emergency services; (5) electricity; (6) financial services; (7) healthcare; (8) information technology; (9) maritime security; (10) communications; (11) multistate; (12) national health; (13) oil and gas; (14) public transit; (15) real estate; (16) retail; (17) research and education; (18) supply chain; (19) surface transportation; and (20) water. As of October 2016, five nonfederal entities maintained a permanent presence on the NCCIC Watch Floor (the ISACs of the financial, national health, aviation, and energy sectors, as well as the Multi-State ISAC). NCCIC maintains partnerships with state, local, tribal, and territorial governments to support their protection of each respective community. These governments are responsible for the security and integrity of their own cyber networks, along with associated preparedness, mitigation, and response efforts. The center is to facilitate overarching situational awareness and the sharing of technical information and best practices with these nonfederal government partners to help ensure a strengthened national cyber risk posture. As part of this effort, NCCIC is to coordinate on an ongoing basis with the Multi-State Information Sharing and Analysis Center (MS-ISAC), and an MS-ISAC representative is physically located on the NCCIC Watch Floor.
As part of its mission, NCCIC also engages and collaborates with international partners, including governments that are members of the North Atlantic Treaty Organization (NATO), to disseminate bulletins and perform services. The center works with international partners and their respective cyber centers while conducting cyber-related exercises. For example, as part of a March 2016 exercise, representatives from Australia, Canada, Denmark, Finland, Germany, Hungary, Japan, Netherlands, New Zealand, Sweden, Switzerland, and the United Kingdom participated. NCCIC also can assist international partners with responding to cyber incidents. For example, during the cyber attack against the Ukrainian power infrastructure in December 2015, the center collaborated with the Ukrainian government to determine the methods of the cyber attack. NCCIC reported that it spent about $480 million on cybersecurity-related activities during fiscal years 2014 through 2016. According to the center’s officials, this included spending for 262, 268, and 301 full-time employees for each of the three fiscal years, respectively. Figure 3 depicts the reported expenditures per year by each of the four branches of NCCIC. NCCIC has taken steps to perform each of its 11 statutorily required cybersecurity functions. It has developed a variety of products and services in support of these functions, including those related to analyzing and sharing cyber information, facilitating coordination among federal and nonfederal partners, and conducting technical assistance and exercises. However, the extent to which NCCIC carried out these functions in accordance with the nine principles specified in the National Cybersecurity Protection Act of 2014 is unclear because the center has not consistently evaluated its performance against the principles. In addition, a number of factors impede NCCIC’s ability to more efficiently perform several of its cybersecurity functions.
Although the results are not generalizable to any larger population, recipients of its products and services who responded to our survey expressed generally favorable views of its activities. Nevertheless, NCCIC has limited assurance that it is fully meeting statutory requirements and efficiently performing its cybersecurity functions because it has not completely evaluated its performance against the principles or addressed the impediments to performing its cybersecurity functions. NCCIC has developed 43 types of products and services in support of its 11 statutorily required functions. Descriptions of these products and services as well as the total numbers of each provided to NCCIC’s customers during fiscal years 2015 and 2016 are discussed in greater detail in appendix II. The center manages several programs that provide data used in developing the products and performing the services related to its cybersecurity functions. These programs include: The National Cybersecurity Protection System, operationally known as EINSTEIN, monitors network traffic entering or exiting networks of federal agencies and provides intrusion detection and intrusion prevention services. NCCIC analysts use data logged by EINSTEIN to notify federal and nonfederal partners of potential breaches of information security. The Advanced Malware Analysis Center is a set of capabilities intended to provide a segregated, closed computer network system that is used to analyze computer network vulnerabilities and threats. According to NCCIC officials, information transmitted to NCCIC through the Advanced Malware Analysis Center may include malicious code, computer viruses, worms, spyware, bots, and Trojan horses. Once received, analysts use the malware analysis capabilities to analyze the code or images in order to discover how to secure or defend computer systems against the threat. The corrective action information is then published in products such as vulnerability reports or alerts or malware reports.
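The triage step of such a malware analysis workflow, checking a submitted sample's cryptographic hash against hashes of previously analyzed malware before committing analyst time, can be sketched as follows. This is an illustrative simplification, not NCCIC's actual tooling, and the sample payloads are hypothetical:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a sample's contents."""
    return hashlib.sha256(data).hexdigest()

# In practice this set would hold hashes of previously analyzed samples;
# here it is seeded with one stand-in "malicious" payload for illustration.
KNOWN_MALICIOUS = {sha256_of(b"stand-in malicious payload")}

def triage(sample: bytes) -> str:
    """Classify a submitted sample as known malware or queue it for analysis."""
    if sha256_of(sample) in KNOWN_MALICIOUS:
        return "known-malicious"
    return "queued-for-analysis"

print(triage(b"stand-in malicious payload"))  # known-malicious
print(triage(b"unseen sample"))               # queued-for-analysis
```

In practice the hash set would be populated from prior analyses and threat feeds, and a match would trigger dissemination of an existing malware report rather than a fresh analysis.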
The Automated Indicator Sharing program was created to provide real-time sharing of cyber threat indicators and defensive measures by enabling NCCIC to (1) receive cyber threat indicators and defensive measures submitted by its nonfederal participants and federal entities; (2) remove personally identifiable information and other sensitive information that is not directly related to a cybersecurity threat; and (3) disseminate the cyber threat indicators and defensive measures to its nonfederal participants and federal entities, as appropriate. The Automated Indicator Sharing program uses the DHS-developed Structured Threat Information Expression/Trusted Automated Exchange of Indicator Information formats, a mechanism for sharing cyber threat information in a common manner. NCCIC uses this program to send out machine-readable cyber threat indicators in near-real time and is now onboarding participants across the public and private sectors. NCCIC officials stated that the Automated Indicator Sharing program was first disseminated to the five cyber centers. Since then, the program has become accessible to additional entities. According to the officials, as of August 2016, 32 private sector entities representing 6 critical infrastructure sectors and 7 federal agencies were connected to the program. NCCIC officials stated that DHS is in the process of expanding the service to all 24 Chief Financial Officers Act agencies in response to guidance from the Office of Management and Budget from October 2015. The following summarizes NCCIC’s products and services that support its 11 statutorily required functions. Details on how all 43 products and services support each of the cybersecurity functions are in appendix III. Function 1: Be a federal civilian interface for the multidirectional and cross-sector sharing of information related to cyber threat indicators, defensive measures, cybersecurity risks, incidents, analysis and warnings for federal and nonfederal entities.
NCCIC has nine products and services that support this function. Among these, it provides products such as Cyber Information Sharing and Collaboration Program (CISCP) Indicator Bulletins and US-CERT Indicator Bulletins, which can include cyber threat indicators, defensive measures, cybersecurity risks, incidents, analysis, and warnings. For fiscal year 2016, the center developed and disseminated 151 US-CERT Bulletins. For example, one US-CERT Bulletin identified Internet protocol addresses that had conducted unauthorized scans of networks of partner entities. NCCIC also provides services to interface with federal and nonfederal entities. Through the center’s Information Sharing and Liaison Services, representatives from federal and nonfederal sectors are able to reside permanently or temporarily on the Watch Floor alongside NCCIC officials, to better ensure multidirectional, cross-sector sharing of information. According to the officials, there are seven seats available on the Watch Floor that its partners can reserve as a temporary residence. As of August 2016, the center reported agreements with 118 entities that could elect to reside temporarily on the Watch Floor. We observed additional cross-sector information sharing through the presence on the NCCIC Watch Floor of liaison officers from the other five cyber centers; members from the Multi-State, Communications, and Financial Services ISACs; and the intelligence community working in conjunction with NCCIC analysts. Function 2: Provide shared situational awareness to enable real-time, integrated, and operational actions across the federal government and nonfederal entities to address cybersecurity risks and incidents to federal and nonfederal entities. NCCIC supports this function with the use of 12 products and services. Among these products, it provides situational awareness to its customers to enable real-time, integrated, operational actions. 
Through one such product, Watch Floor Situation Reports, the center provides awareness of incidents and recommendations on remediation. For example, one report disseminated to its partners identified current events related to ransomware incidents directed toward hospitals. As part of the report, the center identified immediate and future actions in support of resolving the incidents. In addition, in providing notifications, such as National Coordinating Center for Communications (NCC) Watch Train Derailment and GPS Testing Notices, the center shared information to support operational actions on behalf of the partner entities. Further, NCCIC provided situational awareness of potentially malicious Internet protocol addresses through Victim/Abuse notifications. Function 3: Coordinate the sharing of information related to cyber threat indicators, defensive measures, cybersecurity risks and incidents across the federal government. NCCIC has nine products and services that support this function. Among the products are CISCP Bulletins, US-CERT Bulletins, Joint Analysis Reports, and Joint Indicator Bulletins that can contain information related to cyber threat indicators, defensive measures, cybersecurity risks and incidents. For example, the center issued eight Joint Analysis Reports during fiscal years 2015 and 2016. One report, issued jointly with the Federal Bureau of Investigation on April 14, 2015, was also coordinated with the Departments of Treasury and Energy. The report contained a summation of open-source analysis related to common vulnerabilities leveraged by state-sponsored cyber operators in products such as Adobe Flash, Adobe Reader, Microsoft Office, Microsoft server software, and OpenSSL. It also identified the specific version of each product affected, as well as the associated information to patch the vulnerability.
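A vulnerability report that names affected product versions and the patch level that resolves the issue lends itself to mechanical checking by recipients. The sketch below uses fictitious product names and version numbers rather than the actual advisory contents:

```python
# Hypothetical advisory data: product name -> first fixed version.
# An advisory like the one described lists affected versions and the
# patch level that resolves the vulnerability.
ADVISORY_FIXED_IN = {
    "exampleviewer": (11, 2, 202),   # fictitious product and versions
    "examplereader": (15, 0, 1),
}

def parse_version(text: str) -> tuple:
    """Turn '11.2.200' into (11, 2, 200) for component-wise comparison."""
    return tuple(int(part) for part in text.split("."))

def is_vulnerable(product: str, installed: str) -> bool:
    """True if the installed version predates the first fixed version."""
    fixed = ADVISORY_FIXED_IN.get(product.lower())
    if fixed is None:
        return False  # product not covered by this advisory
    return parse_version(installed) < fixed

print(is_vulnerable("ExampleViewer", "11.2.200"))  # True: needs patching
print(is_vulnerable("ExampleViewer", "11.2.202"))  # False: at fixed version
```

Comparing versions as integer tuples avoids the pitfall of string comparison, where "11.2.9" would sort after "11.2.10".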
According to NCCIC officials, the center relies on the NCCIC Portal as a mechanism to coordinate the sharing of these products with customers. Specifically, the portal comprises 35 compartments, which include customers across the globe and within government and various critical infrastructures. Each of the compartments represents a grouping of entities with a similar role or focus. For example, the Government Forum of Incident Response and Security Teams is composed of individuals from federal civilian and military agencies responsible for securing government information technology systems. Function 4: Facilitate cross-sector coordination to address cybersecurity risks and incidents, including cybersecurity risks and incidents that may be related or could have consequential impacts, across multiple sectors. NCCIC has six products and services that support this function. For example, the center facilitates cross-sector coordination to address cybersecurity risks and incidents through its Industrial Control Systems Joint Working Group and its Incident Notifications. In particular, the joint working group holds biannual meetings with the industrial control system community. For example, the most recent meeting was held on May 3–5, 2016, and had over 300 stakeholders represented. According to the after-action report, representatives from several sectors, including officials from the energy, water, transportation, and nuclear sectors, among others, attended the meeting. Function 5: Conduct and share integration and analysis, including cross-sector, of cyber threat indicators, defensive measures, cybersecurity risks and incidents with federal and nonfederal entities. NCCIC has eight products and services that support this function. For example, the US-CERT Analysis Report is an integrated analysis document that can contain indicators of compromise and tactics, techniques, and procedures related to specific threats.
Further, US-CERT officials stated that the center provides common vulnerabilities to the National Vulnerability Database, which is an established, open source of indicators used by information security professionals located across the nation and throughout the world. In addition, ICS-CERT officials stated that the branch provides industrial control system vulnerabilities to over 15,000 “.gov” e-mail addresses that are signed up to receive ICS-CERT Vulnerability Alerts. Function 6: Provide timely technical assistance, risk management support, and incident response capabilities to federal and nonfederal entities with respect to cyber threat indicators, defensive measures, and cybersecurity risks and incidents, which may include attribution, mitigation, and remediation. NCCIC supports this function with the use of five products and services. With these products, it has the capacity to provide technical assistance, risk management support, and incident response capabilities to customers upon request. For example, in responding to and conducting incident response analyses for public or private sector customers, US-CERT developed Incident Response Team Reports that outlined mitigation recommendations to the customers. In addition, to support risk management, the center conducted, as services, Risk and Vulnerability Assessments, which are activities to assist entities in developing strategies for improving their cybersecurity posture. According to officials, NCCIC attempts to provide a report of its findings to the requesting entity within 30 days of the assessment. Further, NCCIC provided Cyber Assessments of control systems. For example, the Cyber Security Evaluation Tool (CSET®) can be downloaded to conduct a self-evaluation of an entity’s cybersecurity posture against, among other things, best practices and National Institute of Standards and Technology recommendations.
In addition, ICS-CERT officials stated that upon a customer’s request, NCCIC can provide further assistance by conducting industrial control system architectural assessments and network assessments. Function 7: Provide information and recommendations on security and resilience measures to federal and nonfederal entities, including information and recommendations to facilitate information security and strengthen information systems against cybersecurity risks and incidents; and share cyber threat indicators and defensive measures. NCCIC has 16 products and services that support this function. Among these products, the center provided information and recommendations on security and resilience measures through its Preliminary Digital Media Analysis Report and Digital Media Analysis Report products. Specifically, for these products, it conducted analysis of digital media and provided a report that includes analysis of the exploits and associated mitigation strategies. Further, according to NCCIC officials, the US-CERT and ICS-CERT components conduct incident response activities (known as US-CERT Incident Response Team Report and ICS Incident Response Deployment) and develop reports to document their findings at the request of partner entities. These reports can contain recommendations to strengthen information systems against cybersecurity risks and incidents and potentially share cyber threat indicators and defensive measures. Function 8: Engage with international partners, in consultation with other appropriate agencies, to (a) collaborate on cyber threat indicators, defensive measures, and information related to cybersecurity risks and incidents; and (b) enhance the security and resilience of global cybersecurity. NCCIC supports this function with the use of 10 products and services. It engages with international partners to collaborate on cyber threat indicators, defensive measures, and information related to cybersecurity risks and incidents. 
For example, the most recent Cyberstorm exercise, the department’s national-level exercise series, was conducted during the spring of 2016 and involved more than 1,200 participants, including NCCIC’s national and international partners. According to the exercise after-action report, 12 international partners participated in the Cyberstorm exercise. To enhance the security and resilience of global cybersecurity, the after-action report identified areas for improvement relating to the escalation of incidents and coordination of public and private efforts. ICS-CERT also collaborated with the Ukrainian government in the aftermath of a cyber attack on its power infrastructure to develop and disseminate vulnerability alerts, reports, and briefings on the attack. US-CERT, on a different occasion, collaborated with Canada on developing vulnerability alerts associated with ransomware. Function 9: Share cyber threat indicators, defensive measures, and other information related to cybersecurity risks and incidents with federal and nonfederal entities, including across sectors of critical infrastructure and with state and major urban area fusion centers, as appropriate. NCCIC relies on four products and services to support this function. For example, it shared cyber threat indicators, defensive measures, and information through its Malware Initial Findings Reports and Malware Analysis Reports. These reports are based on NCCIC malware analysis conducted at the request of the customer. They can contain indicators, such as a description of the malware artifact, as well as defensive measures, such as the Internet Protocol addresses potentially associated with the malware. Both types of reports are disseminated via the NCCIC Portal. The center used a Traffic Light Protocol (TLP), which is a designation to ensure that sensitive information is shared with the appropriate audience.
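TLP markings follow a fixed ordering from most to least restrictive (RED, AMBER, GREEN, WHITE). A minimal sketch of how that ordering constrains redistribution follows; the numeric values are an implementation convenience for comparison, not part of the protocol itself:

```python
from enum import IntEnum

class TLP(IntEnum):
    """Traffic Light Protocol designations, most to least restrictive."""
    RED = 3    # recipients only; no further disclosure
    AMBER = 2  # recipients' organizations, on a need-to-know basis
    GREEN = 1  # the wider community, but not public channels
    WHITE = 0  # no disclosure restrictions

def may_forward(marking: TLP, channel: TLP) -> bool:
    """A product may be passed only over channels at or above its marking."""
    return channel >= marking

print(may_forward(TLP.AMBER, TLP.GREEN))  # False: audience too broad
print(may_forward(TLP.AMBER, TLP.RED))    # True
```

Under this model, an AMBER-marked report can travel over RED or AMBER channels but cannot be posted to a community-wide (GREEN) list or a public (WHITE) feed.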
MS-ISAC representatives have access to the NCCIC portal and can share information with its members per TLP protections. Function 10: Participate, as appropriate, in national exercises run by the department. NCCIC has four products and services that support this function. For example, in addition to Cyberstorm, the center conducted and participated in external exercises for customers to support the improvement of national and international cybersecurity. NCCIC officials stated that these external exercises include federal, state, local, tribal, territorial, private, and international partners and range from individual tabletop exercises to multi-organization exercises. The center conducted such an exercise in October 2015 with a state government to improve its communication capabilities and provided a seminar on the current threats to control systems worldwide. Function 11: Coordinate with the Office of Emergency Communications of the Department, assessing and evaluating consequence, vulnerability, and threat information regarding cyber incidents to public safety communications to help facilitate continuous improvements to the security and resiliency of such communications. NCCIC has four products and services that support this function. Among its activities, the center engages with the Office of Emergency Communications in planning and preparing for disasters and incidents, including cyber incidents, to ensure continued readiness of the communications network. NCCIC officials stated they meet weekly with Office of Emergency Communications regional coordinators during an NCC Weekly Operations call to discuss threats and vulnerabilities. According to NCCIC, the Office of Emergency Communications is a supporting partner of its execution of national coordinator responsibilities for Emergency Support Function 2–Communications under the National Response Framework.
In addition, officials from the center have briefed the National Council of Statewide Interoperability Coordinators on how to share cyber data using NCCIC’s incident reporting process at a national conference in April 2016. NCCIC is required to carry out its functions in accordance with nine principles specified in the National Cybersecurity Protection Act of 2014, to the extent practicable. As previously described, these principles, among other things, relate to ensuring that industry sector-specific, academic, and national laboratory expertise is sought and receives appropriate consideration; that information related to cybersecurity risks and incidents is appropriately safeguarded against unauthorized access; and that shared information is timely, actionable, and relevant to risks, incidents, and analysis. The extent to which NCCIC carried out its 11 cybersecurity functions in accordance with the nine principles specified in the act is unclear. We identified instances where, with certain products and services, NCCIC had implemented its functions in adherence with one or more of the principles. For example, consistent with the principle that industry sector-specific, academic, and national laboratory expertise be sought and receive appropriate consideration, NCCIC coordinated with contacts from industry, academia, and the national laboratories to develop and disseminate vulnerability alerts through the National Vulnerability Database. In addition, to comply with the principle that information related to cybersecurity risks and incidents be appropriately safeguarded against unauthorized access, the center used the TLP designation to ensure that sensitive information was shared with the appropriate audience. Specifically, NCCIC disseminated its products via the NCCIC portal, using the protocol for products such as Indicator Bulletins, Analysis Reports, Malware Initial Findings Reports, and Malware Analysis Reports.
(Additional examples of how NCCIC products and services helped the center implement its functions according to the principles are provided in appendix IV.) On the other hand, we also identified instances where the cybersecurity functions were not performed in adherence with the principles. For example, with regard to function 6, NCCIC is to provide timely technical assistance, risk management support, and incident response capabilities to federal and nonfederal entities. The function is supported, in part, by Risk and Vulnerability Assessments. However, NCCIC had not established measures or other procedures for ensuring the timeliness of these assessments. According to officials responsible for this service, the assessments have an estimated completion time frame of 8 to 10 weeks for each customer. However, the officials stated that this time frame is not an established metric by which they evaluate the timeliness of the service. Further, NCCIC had not established measures or procedures to assess the actionability of its products and services. For example, US-CERT Indicator Bulletins, a product that supports several functions, typically contain actionable information, such as specific malicious Internet addresses to be blocked. However, NCCIC had not established a means of determining the extent to which a particular bulletin helped to mitigate a risk or prevent an incident. In discussing this matter, NCCIC officials acknowledged that they had not made a complete determination of the applicability of the principles to all of the center’s functions and thus had not established measures and procedures for assessing its products and services against the principles. The officials stated that they have begun to map activities supporting the cybersecurity functions to the implementing principles. For example, according to the officials, the center established a unit for reviewing and making recommendations to improve overall NCCIC operations.
During fiscal year 2016, this unit completed performance management reviews across the center's programs to identify areas in which NCCIC could better align its operations with its overall requirements, including the principles. Further, officials from the ICS-CERT branch stated that they were in the preliminary stages of measuring their activities against one of the nine principles. Specifically, the officials stated that 20 metrics were being developed that would measure timeliness, relevance, and actionability (principle 1) across the components of their organization. Nevertheless, while these preliminary actions are important steps, they do not represent a complete determination of the applicability of all nine principles across all of NCCIC's statutorily required cybersecurity functions. As such, NCCIC officials could not say whether the principles did or did not apply to all of the 11 functions. Moreover, because a complete determination of the applicability of the nine principles had not been made, the center also had not developed metrics and methods for assessing and ensuring adherence with the principles. Until the center determines the applicability of the implementing principles for all of its functions and develops the metrics and methods necessary to ensure that the principles are met, it will not be positioned to ensure that NCCIC is effectively meeting its statutory requirements. In addition to NCCIC not having made a complete determination of how it is adhering to the principles, a number of factors impede the center's ability to more efficiently perform several of its cybersecurity functions. In particular, the center faces impediments in tracking security incidents; maintaining current and reliable customer information, to include obtaining such information on all owners and operators of the most critical infrastructure assets; working across multiple network platforms; and collaborating with international partners.
Tracking of security incidents is not centralized or reconciled. The National Cybersecurity Protection Act of 2014 requires NCCIC to coordinate the sharing of information across the government. This includes information related to cyber threat indicators, defensive measures, and cybersecurity risks and incidents. However, NCCIC officials were unable to completely track and consolidate cyber incidents reported to the center, thereby inhibiting its ability to coordinate the sharing of information across the government. For US-CERT-related incidents, personnel assigned to the NCCIC service desk generated a daily report of the current status of the open incident tickets. For example, the July 18, 2016, report had a total of 520 incident tickets. However, this report did not represent the totality of incidents across the center because it did not include incidents reported to ICS-CERT. Since the NCCIC service desk did not have access to the data within the ICS-CERT ticketing system, it could not produce a management report on the status of all incidents reported to the center. NCCIC officials attributed the lack of a single, centralized incident tracking system to the fact that ICS-CERT and US-CERT had operated as separate entities prior to the establishment of the center. As such, ICS-CERT and US-CERT each has its own incident ticketing system. Senior ICS-CERT officials stated they are aware of this challenge and are exploring options on how best to integrate the two systems. Until such integration takes place, NCCIC will continue to encounter difficulty in completely tracking the total efforts of its branches to address reported cybersecurity incidents. As a result, the center will be challenged in determining how effective it is in sharing information related to cyber threat indicators, defensive measures, and cybersecurity risks and incidents across the government.
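As a minimal, hypothetical sketch of the kind of integration described above (the structure and all names below are ours, not NCCIC's), a shared intake function that writes every incident to one log would let a single daily report cover both branches:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from itertools import count

_ticket_ids = count(1)  # monotonically increasing IDs across all branches

@dataclass
class Ticket:
    channel: str   # e.g., "email", "phone", "web-form"
    branch: str    # e.g., "US-CERT" or "ICS-CERT"
    summary: str
    ticket_id: int = field(default_factory=lambda: next(_ticket_ids))
    received: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

TICKET_LOG: list = []  # one consolidated log instead of two separate systems

def log_incident(channel: str, branch: str, summary: str) -> Ticket:
    """Single entry point: the ticket is logged before any analyst handling."""
    ticket = Ticket(channel=channel, branch=branch, summary=summary)
    TICKET_LOG.append(ticket)
    return ticket

def daily_report() -> dict:
    """Ticket counts per branch, drawn from the one shared log."""
    counts = {}
    for t in TICKET_LOG:
        counts[t.branch] = counts.get(t.branch, 0) + 1
    return counts
```

Because both branches write to the same log, a report like `daily_report()` would, unlike the July 18, 2016, report described above, reflect the totality of incidents across the center.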
The difficulty of logging incident data is further compounded by the multiple ways in which an incident can be reported to the center. US-CERT officials stated that there are six preferred ways in which NCCIC receives information related to potential incidents. For example, to communicate with US-CERT, customers can choose e-mail, a phone call, an automatic submission form, or an automated machine-to-machine submission as the means to notify US-CERT of an incident. In addition, to communicate with ICS-CERT regarding an industrial control system-related incident, customers can send an e-mail or place a phone call directly to ICS-CERT. (Figure 4 shows the ways in which NCCIC prefers to receive reported incidents.) However, contrary to the 6 preferred methods of communicating with US-CERT and ICS-CERT, officials from NCCIC's Operations and Integration office provided documentation that identified at least 22 methods by which the center receives potential incidents. These methods include phone numbers and e-mail addresses, beyond the aforementioned 6 methods, that various groups within the four NCCIC components established as a means to communicate with partners. In addition, according to NCCIC officials, depending on the method of reporting, incidents are not always logged into the NCCIC incident ticketing systems. For example, when customers have prior established relationships, analysts can be called directly and can handle the incidents without logging them into the system. The lack of control over the entry points, together with inconsistencies in logging data, inhibits the center from consistently tracking incidents and their status across the entire NCCIC. Until the center can reduce, consolidate, or modify the points of entry that customer entities use to communicate with NCCIC, it will lack the ability to ensure that all incident tickets are logged appropriately.
This, in turn, further limits the center's ability to effectively perform its statutorily required function of coordinating the sharing of information related to cyber threat indicators, defensive measures, and cybersecurity risks and incidents across the government. Maintaining current and reliable customer information. The National Cybersecurity Protection Act of 2014 requires NCCIC to be the federal civilian interface for the multidirectional and cross-sector sharing of information related to cyber threat indicators, defensive measures, cybersecurity risks and incidents, and analysis and warnings for federal and nonfederal entities. To perform this function, the center needs to have accurate and up-to-date contact information for the potential recipients of the cybersecurity information it shares. However, NCCIC's contact information was not always up to date, thus limiting its ability to effectively function as a federal civilian interface for federal and nonfederal entities. Specifically, after e-mailing our survey to recipients of NCCIC's products and services, we received 303 undeliverable return messages out of 2,792 recipients contacted. We also identified individuals on the recipient list that NCCIC provided to us who no longer held the role the center indicated or were no longer with the entity listed. NCCIC officials were unable to demonstrate that they had any formal process for maintaining customer contact information. The officials stated that maintaining customer contact information was an ad hoc process and acknowledged that capturing changes to that data was a challenge. Without regularly validating data pertaining to its product and service recipients, the center may lack the quality information it needs to effectively develop and maintain partnerships and share cybersecurity-related information with federal and nonfederal entities to support its operation as required by the statutes.
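A routine validation pass of the kind this finding implies could be sketched as follows. The field names and sources of ground truth here (bounce records, confirmed roles) are our assumptions for illustration, not NCCIC's actual process:

```python
def validate_contacts(contacts, bounced, confirmed_roles):
    """Split a recipient list into entries to keep and entries to review.

    contacts: list of dicts with "email" and "role" keys (hypothetical schema)
    bounced: set of addresses that returned undeliverable messages
    confirmed_roles: mapping of email -> role as last confirmed with the entity
    """
    keep, review = [], []
    for contact in contacts:
        stale_role = confirmed_roles.get(contact["email"]) != contact["role"]
        if contact["email"] in bounced or stale_role:
            review.append(contact)   # stale entry: reconfirm or prune
        else:
            keep.append(contact)
    return keep, review
```

Run on a schedule, and after each mailing using the bounce set, a pass like this would surface undeliverable addresses and changed roles before they accumulate, rather than leaving maintenance to an ad hoc process.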
Obtaining contact information of all owners and operators of the most critical cyber-dependent infrastructure assets. The National Cybersecurity Protection Act of 2014 requires NCCIC to facilitate cross-sector coordination to address cybersecurity risks and incidents. This includes cybersecurity risks and incidents that may be related or could have consequential impacts across multiple sectors. However, representatives of federal and nonfederal entities that own critical cyber assets, which could have a catastrophic impact on the nation if victimized by a cyber attack, were not fully represented in the NCCIC customer log. Specifically, our review found that 23 percent of the entities owning such critical assets (as determined by DHS) were not represented within the master NCCIC customer log as of September 2016. Without representation of these entities, NCCIC may not have the information it needs readily available to facilitate coordination with critical asset owners. NCCIC officials were unable to demonstrate that they had a formal internal process for maintaining customer contact information and acknowledged that doing so remains a challenge for the center. Without a concentrated effort to ensure the full representation of the owners and operators of these critical assets, the center lacks assurance that it is adequately facilitating cross-sector coordination on cybersecurity risks and incidents affecting the nation's most critical cyber-dependent assets, which, if impacted, could have a catastrophic effect on the nation. Working across multiple network platforms. The National Cybersecurity Protection Act of 2014 requires NCCIC to coordinate the sharing of information across the government. This includes information related to cybersecurity risks and incidents.
However, we found that the sharing of information is complicated by NCCIC analysts having to operate across multiple networks, often manually entering the same data into each network, which slows the coordination and sharing of incident information with customers and increases the risk of data-entry errors. For example, officials stated that it takes on average 3 minutes for a ticket to be closed when working within one network. Across the 3 systems, it could take up to 15 minutes, depending on the size of the ticket and the amount of information that must be manually entered into each system. According to senior NCCIC officials, this impediment was attributed to a legacy technical infrastructure implemented prior to the center's existence. They added that efforts were under way to address this impediment. However, NCCIC had not developed an implementation plan or established time frames for consolidating or integrating the networks. Until NCCIC develops a process to avoid manual data entry, it will continue to face challenges in efficiently sharing information related to cyber threat indicators, defensive measures, and cybersecurity risks and incidents across the government. Collaborating with international partners using the NCCIC Portal. The Cybersecurity Act of 2015 requires NCCIC to engage with international partners, in consultation with other appropriate agencies, to (a) collaborate on cyber threat indicators, defensive measures, and information related to cybersecurity risks and incidents; and (b) enhance the security and resilience of global cybersecurity. International and other partners have access to the center's products through the NCCIC Portal, which has functioned as a mechanism to disseminate products to recipients since 2003. However, DHS is migrating the NCCIC portal to the Homeland Security Information Network.
This network is categorized as a FIPS 199 high-impact system and, thus, requires authentication of individuals with access to the system. According to NCCIC officials, international partners had expressed concern that the new network would have a negative impact on their collaboration with NCCIC because continued access would require the submission of international participants' passports and other sensitive personal information to a U.S. government entity. While DHS has a responsibility to ensure the security of its high-impact systems, NCCIC may face a barrier to engaging with international partners. Without taking action to address this potential barrier, international partners may be reluctant to engage with NCCIC. Thus, the center may be challenged in its ability to collaborate and enhance global cybersecurity if it does not find alternative methods to engage and share information with international partners while meeting the security requirements of high-impact systems. The respondents to our nongeneralizable survey of the center's activities reported that they used its products and services to varying extents. The respondents also expressed generally favorable views of the center's activities. Table 2 depicts the extent to which the survey respondents, who each self-identified as a customer of an NCCIC component (US-CERT, ICS-CERT, NCC, the Watch Floor, and NO&I), used, did not use, or were unsure if they used a particular NCCIC product or service. With regard to evaluating the characteristics of products and services, the respondents to our nongeneralizable survey generally reported that NCCIC products and services were timely, relevant, and actionable.
Specifically, 289 of 333 respondents (87 percent) found the products and services they had used to be extremely, very, or moderately timely; 286 of 332 respondents (86 percent) found products and services to be extremely, very, or moderately relevant; and 234 of 332 respondents (70 percent) stated that products and services had led to an actionable result on their part to a very great, great, or moderate extent (e.g., were used to address a vulnerability or apply a defensive measure). In addition, although between 12 and 18 percent of respondents to our nongeneralizable survey indicated a low level of effectiveness, respondents had generally favorable views of the center's provision of cybersecurity information. Specifically, 236 of 335 respondents (70 percent) rated the provision of cyber threat indicators at a high or moderate level of effectiveness. In addition, 219 of 333 respondents (66 percent) rated the provision of information on risks and incidents at a high or moderate level of effectiveness. Further, 211 of 339 respondents (62 percent) rated the provision of cyber defensive information at a high or moderate level of effectiveness. Survey respondents also rated NCCIC's ability to provide timely, relevant, and actionable information at a high or moderate level of effectiveness at rates of 235 of 331 (71 percent), 245 of 334 (73 percent), and 222 of 339 (65 percent), respectively. Table 3 shows survey respondents' evaluations of NCCIC's effectiveness in providing them with cyber threat indicators, information on risks and incidents and defensive measures, and information that was timely, relevant, and actionable. Survey respondents also evaluated the center's effectiveness with regard to its information sharing capability, the uniqueness of the information it provides, its partnerships with customers in improving the protection of critical cyber assets and functions, and how well it is fulfilling its mission.
Table 4 shows respondents’ overall evaluation of the center in terms of the effectiveness of its information sharing capability, customer partnerships, and the extent to which it is fulfilling its mission, among other things. Further, respondents regarded NCCIC as important to the nation’s ability to protect critical cyber assets and functions. Specifically, 264 of 337 respondents (78 percent) to our nongeneralizable survey stated that there would be a “very” or “somewhat” negative impact on the nation if the NCCIC products and services did not exist. However, not all survey responses were positive. Specifically, survey respondents reported that they were not aware of all of the products and services that the center offered. The respondents added that they would be interested in receiving additional NCCIC products and services but were unsure about how to begin receiving them. The respondents reported that the center had not provided information identifying these products and services. NCCIC officials acknowledged that customers may not be aware of certain products and services because not all products and services are meant for every customer. NCCIC, as the federal civilian cyber center, is generally performing 11 required cybersecurity functions through the development and dissemination of 43 products and services. However, the extent to which NCCIC carried out these cybersecurity functions in accordance with the 9 implementing principles is unclear. Until it determines the extent to which the implementing principles apply to these functions, NCCIC will not be able to fully assess the extent to which it is meeting the mandated principles. 
Further, without measuring the extent to which the principles are being met, NCCIC will be challenged in articulating how effectively it is performing the functions in support of its role as a focal point for cybersecurity incident coordination, information sharing, and incident response across the federal civilian government and critical infrastructure. NCCIC also faces several impediments that inhibit it from efficiently performing its cybersecurity functions. These impediments relate to consolidating entry points for receiving and logging potential incident data and maintaining the center's relationships with customers. Until NCCIC takes steps to overcome these impediments, it may not be able to efficiently perform its cybersecurity functions and assist federal and nonfederal entities in identifying cyber-based threats, mitigating vulnerabilities, and managing cyber risks. To more fully address the requirements identified in the National Cybersecurity Protection Act of 2014 and the Cybersecurity Act of 2015, we recommend that the Secretary of the Department of Homeland Security take the following nine actions: 1. Determine the extent to which the statutorily required implementing principles apply to NCCIC's cybersecurity functions. 2. Develop metrics for assessing adherence to applicable principles in carrying out statutorily required functions. 3. Establish methods for monitoring the implementation of cybersecurity functions against the principles on an ongoing basis. 4. Integrate information related to security incidents to provide management with more complete information about NCCIC operations. 5. Determine the necessity of reducing, consolidating, or modifying the points of entry used to communicate with NCCIC to better ensure that all incident tickets are logged appropriately. 6. Develop and implement procedures to perform regular reviews of customer information to ensure that it is current and reliable. 7.
Take steps to ensure the full representation of the owners and operators of the nation’s most critical cyber-dependent infrastructure assets. 8. Establish plans and time frames for consolidating or integrating the legacy networks used by NCCIC analysts to reduce the need for manual data entry. 9. Identify alternative methods to collaborate with international partners, while ensuring the security requirements of high-impact systems. We received written comments on a draft of this report from DHS. In its comments, the department concurred with all nine recommendations. The department also provided details about steps that it plans to take to address each of the recommendations, including estimated time frames for completion. If effectively implemented, these actions should enhance the effectiveness and efficiency of NCCIC in performing its statutory requirements. The department’s written comments are reprinted in appendix V. In addition to the aforementioned comments, DHS also provided a technical comment via e-mail, which we considered and incorporated. We are sending copies of this report to the appropriate congressional committees, the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Gregory Wilshusen at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Our objective was to determine the extent to which the National Cybersecurity and Communications Integration Center (NCCIC) was performing its statutorily defined cybersecurity-related functions. 
To determine this, we analyzed two acts that establish roles and responsibilities for the center: the National Cybersecurity Protection Act of 2014 and the Cybersecurity Act of 2015. These laws together require the center to carry out 11 cybersecurity functions. More specifically, the National Cybersecurity Protection Act of 2014 prescribed 7 functions and the Cybersecurity Act of 2015 prescribed 4 additional functions. The National Cybersecurity Protection Act of 2014 also identified 9 implementing principles. The two acts also contained provisions for GAO to report on NCCIC's implementation of its cybersecurity mission. To determine the extent to which it was addressing the 11 cybersecurity functions, we analyzed the center's program descriptions, concepts of operations, and policies and procedures documenting how each of the center's components is to operate. For example, we analyzed the U.S. Computer Emergency Readiness Team (US-CERT) Strategic Action Plan, the Industrial Control Systems-Cyber Emergency Response Team (ICS-CERT) Five-Year Plan (2015-2019), and the National Coordinating Center (NCC) Information Guide and Overview for 2015. In addition, we analyzed the NCCIC Watch Floor Concept of Operations that describes its operations. We corroborated this information by interviewing center officials, including the Assistant Secretary for Cybersecurity and Communications, the Director of the NCCIC, and the directors of each of the center's components, as well as other responsible officials. Based on our analysis of this information, we developed an initial list of products and services and held interviews with officials to confirm the total list of products and services. Based on these actions, we determined that the center develops and disseminates 43 products and services. We then collected and analyzed examples of each product and service to determine how they addressed each of the 11 cybersecurity functions established in the two laws.
To gain a greater understanding of the purposes and methods of developing the products and services, we also interviewed NCCIC officials. To identify instances where products and services addressed the 9 implementing principles, we analyzed relevant program documentation and reviewed the procedures by which the center develops its products and services. We corroborated our information by interviewing NCCIC officials responsible for product and service development. To gain a greater understanding of its operations, we visited the site of the ICS-CERT operations in Idaho Falls, Idaho, to observe its activities, including the development of its products and services. We also observed operations of the Watch Floors in Arlington, Virginia, and Idaho Falls, Idaho, and interviewed officials responsible for operating the Watch Floors, developing services, and liaising with federal and nonfederal partners. We also analyzed the dissemination methods of products and services by examining the contents of the NCCIC web portal, including how the customer base was segmented to disseminate products and services in accordance with information sharing protections. At the ICS-CERT facility in Idaho Falls, Idaho, we observed the basic ICS training exercise services provided to customers. We also collected and analyzed performance measures and interviewed officials about the actions being taken to improve the center's measurement efforts and to consolidate operations across NCCIC components. In addition, NCCIC officials provided budget execution information that we analyzed to determine the reported amount spent across three fiscal years for each component. During the interviews, we discussed with officials the impediments that the center faced in more efficiently performing the 11 cybersecurity functions.
To obtain the views of the recipients of the center's products and services, we administered a survey to a sample of individuals identified by NCCIC as having access to a product or service, or participating in a center group or activity. We asked customers about their awareness and use of the 43 products and services, and other activities and roles performed by the center. We then asked them to assess their experiences, including rating the products' and services' effectiveness against the implementing principles of timeliness, actionability, and relevance. We also asked respondents to rate various elements of NCCIC in terms of importance, expectations, challenges, and reasons for not using the center's products and services. To develop our questionnaire, we met with NCCIC officials and identified the activities performed for customers, including the development and dissemination of the 43 products and services. We pretested draft versions of the questionnaire with nonfederal representatives of two Information Sharing and Analysis Centers and an information security officer at a federal agency, to reflect some of the variation in the population. We defined the target population for the survey to be all organizational points of contact or other individuals identified by NCCIC, as of June 8, 2016, as having access to a product or service, or participating in a center group or activity. NCCIC provided us with 19,573 records across 14 lists of customer contact information. Some of the provided lists consisted only of e-mail addresses of individuals subscribed to a particular NCCIC product or service, while others consisted of members of a group. Some organizations were represented by many individual e-mail addresses across the lists, and some individuals appeared on more than one list.
While the basic unit of the population to be sampled was an individual e-mail address, due to the variability in coverage of the population mentioned above, an individual survey respondent may be representing their own personal experiences and opinions, or those of an organization, and multiple respondents may be representing the same organization. After removing records with missing, incomplete, or erroneously duplicative e-mail addresses within each list, our sample frame was reduced to 19,293 records. We did not remove multiple instances of the same e-mail address appearing on more than one list; these duplicates were retained in the sample frame so that each instance of that e-mail address might have a chance of initial selection proportional to the size of the customer list it appeared on. We initially drew a random but nongeneralizable sample of 2,907 e-mail address records, allocated across the 16 customer types roughly proportional to the sizes of each type. We then removed 115 of this initial sample because their e-mail addresses duplicated selections made from other customer lists, for a total sample of 2,792 customer records with unique e-mail addresses, which we attempted to contact with our survey. We began our survey on August 2, 2016. We sent e-mails with login information for the web-based questionnaire to the sample. We sent up to three follow-up e-mails during the fieldwork period to those who had not yet responded. The survey ended on September 8, 2016. The outcomes of the survey fieldwork are displayed in table 5 below. The response rate to the survey, calculated as the number of usable responses divided by the number found to be eligible, was about 14 percent.
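The cleaning, allocation, and response-rate steps described above can be sketched as follows. The data and the exact allocation rule here are illustrative assumptions, not GAO's actual code:

```python
import random

def clean_list(addresses):
    """Drop blank entries and duplicates within a single customer list."""
    seen, kept = set(), []
    for a in addresses:
        a = a.strip().lower()
        if a and a not in seen:
            seen.add(a)
            kept.append(a)
    return kept

def proportional_sample(lists, total, rng):
    """Allocate a sample across lists roughly proportional to list size,
    then drop cross-list duplicates from the final selection."""
    n = sum(len(lst) for lst in lists)
    chosen, drawn = set(), []
    for lst in lists:
        k = round(total * len(lst) / n)           # proportional allocation
        for a in rng.sample(lst, min(k, len(lst))):
            if a not in chosen:                   # remove cross-list duplicates
                chosen.add(a)
                drawn.append(a)
    return drawn

def response_rate(usable, eligible):
    """Usable responses divided by the number found eligible."""
    return usable / eligible
```

With counts in the proportions GAO reports, `response_rate` works out to roughly 0.14, matching the approximately 14 percent rate noted above.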
Because of the variability in coverage of the population by the sample frame, irregularities in the contact information and eligibility of the records sampled, and the low rate of response to the survey, the results of this survey represent only those who responded and are not generalizable to any larger population of NCCIC customers. We do not make any inferences about those not sampled or not responding to the survey. In addition to this limitation, questionnaire surveys of this kind are subject to other potential errors. To minimize the possibility of measurement error (differences between reported and true answers) arising from question design, interpretation or administration, or the misreporting of answers, we designed and administered the survey in consultation with survey methodologists, made improvements to the questionnaire based on pretest results, and had a separate survey methodologist review the draft questionnaire to identify potential problems in questionnaire design. Of the 340 respondents, 14 percent identified themselves as individual participants in NCCIC activities, 64 percent as representatives of a single public or private organization, 13 percent as representing an association or other entity representing a sector or group of organizations, and 9 percent identified in other ways. Thirty-four percent said they represented federal government entities; 18 percent said they represented state, local, or tribal entities; 44 percent said they represented private sector entities; and 4 percent gave other answers. During the processing and analysis of reported data, we also identified and corrected for patterns of response across questions that were inconsistent or contradictory. Nonresponse error (failure to obtain a response to a question or the questionnaire) may lead to bias in the results if those who do not respond would have given materially different responses from those who did respond.
To minimize nonresponse, we made follow-up contacts throughout the survey. To minimize processing error (mistakes in converting reported data into published survey results), data processing and analysis programming was independently verified by a separate data analyst. We conducted this performance audit from January 2016 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 6 below highlights the total number of each product and service that the National Cybersecurity and Communications Integration Center (NCCIC) reported providing to its customers in fiscal years 2015 and 2016. The National Cybersecurity and Communications Integration Center (NCCIC) is required to perform 11 cybersecurity functions. Table 7 below summarizes how the 43 products and services were being used as of October 2016 in support of the 11 functions. Although the National Cybersecurity and Communications Integration Center (NCCIC) did not completely determine the applicability of statutory implementing principles to its products and services, table 8 below provides examples of our determination of how NCCIC products and services adhered to the principles. In addition to the contact named above, Michael W. Gilmore (assistant director), Kush K. Malhotra (analyst in charge), Chris Businsky, Lee A. McCracken, Constantine Papanastasiou, David Plocher, Carl Ramirez, and Priscilla A. Smith made key contributions to this report.

Cyber-based intrusions and attacks on federal systems and systems supporting our nation's critical infrastructure, such as communications and financial services, have become more numerous, damaging, and disruptive.
GAO first designated information security as a government-wide high-risk area in 1997. This was expanded to include the protection of critical cyber infrastructure in 2003 and protecting the privacy of personally identifiable information in 2015. The National Cybersecurity Protection Act of 2014 and the Cybersecurity Act of 2015 require NCCIC to perform 11 cybersecurity-related functions, including sharing information and enabling real-time actions to address cybersecurity risks and incidents at federal and non-federal entities. The two acts also contained provisions for GAO to report on NCCIC's implementation of its cybersecurity mission. For this report, GAO assessed the extent to which the NCCIC was performing the 11 required functions. To do this, GAO analyzed relevant program documentation, interviewed officials, and conducted a non-generalizable survey of 2,792 federal and nonfederal recipients of NCCIC products and services. The National Cybersecurity and Communications Integration Center (NCCIC) of the Department of Homeland Security (DHS) has taken steps to perform each of its 11 statutorily required cybersecurity functions, such as being a federal civilian interface for sharing cybersecurity-related information with federal and nonfederal entities. It manages several programs that provide data used in developing 43 products and services in support of the functions. The programs include monitoring network traffic entering and exiting federal agency networks and analyzing computer network vulnerabilities and threats. The products and services are provided to its customers in the private sector; federal, state, local, tribal, and territorial government entities; and other partner organizations. 
For example, NCCIC issues indicator bulletins, which can contain information related to cyber threat indicators, defensive measures, and cybersecurity risks and incidents, and help to fulfill its function to coordinate the sharing of such information across the government. The National Cybersecurity Protection Act also required NCCIC to carry out its functions in accordance with nine implementing principles, to the extent practicable. However, the extent to which NCCIC adhered to the nine principles when performing the functions is unclear because the center has not yet determined the applicability of the principles to all 11 functions or established metrics and methods by which to evaluate its performance against the principles. GAO identified instances where NCCIC had implemented its functions in accordance with one or more of the principles. For example, consistent with the principle that it seek and receive appropriate consideration from industry sector-specific, academic, and national laboratory expertise, NCCIC coordinated with contacts from industry, academia, and the national laboratories to develop and disseminate vulnerability alerts. On the other hand, GAO also identified instances where the cybersecurity functions were not performed in accordance with the principles. For example, NCCIC is to provide timely technical assistance, risk management support, and incident response capabilities to federal and nonfederal entities; however, it had not established measures or other procedures for ensuring the timeliness of these assessments. Until NCCIC determines the applicability of the principles to its functions and develops metrics and methods to evaluate its performance against the principles, the center cannot ensure that it is effectively meeting its statutory requirements. In addition, GAO identified factors that impede NCCIC's ability to more efficiently perform several of its cybersecurity functions.
For example, NCCIC officials were unable to completely track and consolidate cyber incidents reported to the center, thereby inhibiting its ability to coordinate the sharing of information across the government. Similarly, NCCIC may not have ready access to the current contact information for all owners and operators of the most critical cyber-dependent infrastructure assets, which could impede timely communication with them in the event of a cyber incident. Until NCCIC takes steps to overcome these impediments, it may not be able to efficiently perform its cybersecurity functions and assist federal and nonfederal entities in identifying cyber-based threats, mitigating vulnerabilities, and managing cyber risks. GAO recommends nine actions to DHS for enhancing the effectiveness and efficiency of NCCIC, including determining the applicability of the implementing principles, establishing metrics and methods for evaluating performance, and addressing identified impediments. DHS concurred with GAO's recommendations. |
VA provides health care services nationwide through a direct delivery system of 172 hospitals, 365 outpatient clinics, and 128 nursing homes. In addition to operating its own nursing homes, VA pays for care provided to veterans by community and state veterans’ nursing homes. VA’s goal is to have 40 percent of those veterans needing VA assistance receive care through contracts with community nursing homes, 30 percent through agreements with state homes, and 30 percent in VA nursing homes. VA has 1,595 hospital beds and 600 nursing home beds in Central Florida that veterans in Orlando and Brevard County may use. These beds are located at VA medical centers in Gainesville, Tampa, and Bay Pines and serve a geographic area commonly referred to as Central Florida. VA also operates outpatient clinics in Orlando and Daytona Beach. The VA hospital in Tampa is about 125 miles west of Brevard and 80 miles west of Orlando. The VA hospital in Gainesville is about 175 miles northwest of Brevard and 109 miles northwest of Orlando. The VA hospital in Bay Pines is about 30 miles west of Tampa. In addition, VA has hospitals and nursing homes in Lake City, Miami, and West Palm Beach, which have a total of 1,367 hospital beds and 480 nursing home beds. VA also operates several outpatient clinics. These facilities, along with those in Central Florida, comprise VA’s Florida facilities. Figure 1 shows the locations of the facilities in Florida, including the former Orlando Naval Hospital and the planned Brevard Hospital. VA’s Under Secretary for Health plans to restructure the Veterans Health Administration and fundamentally change the way that veterans’ health care is provided. His plans include increasing ambulatory care access points, emphasizing primary care, decentralizing decisionmaking, and integrating the delivery assets to provide an interdependent, interlocking system of care. The structural vehicle to do this will be the Veterans Integrated Service Network. 
The basic budgetary and planning unit of health care delivery shifts from individual medical centers to integrated service networks providing for populations of veteran beneficiaries in defined geographical areas. The network director is responsible for consolidating and realigning services within the network. The integrated network for Florida includes all six VA medical centers in Florida plus the medical center in San Juan, Puerto Rico. VA has two basic goals for serving Central Florida veterans. First, VA plans to provide hospital beds comparable to the national level of availability to serve the health care needs of veterans. Second, VA expects to improve the geographic accessibility of VA hospital beds for as many veterans as possible. VA uses an Integrated Planning Model when developing strategic management and operational plans, including construction. VA’s model is primarily driven by three variables to estimate veterans’ future use of VA hospital beds. These variables include veteran population by age groups, average lengths of hospital stays for selected medical services (such as surgery or psychiatry), and the number of patients treated in the selected medical services. In 1982, VA planners conducted a study of the health care needs of Florida veterans, including the projected future use of VA facilities through 1995. In 1991, VA planners updated this study and determined that 710 hospital and nursing home beds were needed in six counties, commonly referred to as East Central Florida. The planners concluded that these beds were needed to make VA health care more geographically accessible to veterans in East Central Florida. VA developed plans to build a 470-bed hospital and 120-bed nursing home in Brevard County and a 120-bed nursing home and outpatient clinic in Orlando. In July 1993, the Defense Base Closure and Realignment Commission recommended closing the Orlando Naval Hospital.
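The bed-projection arithmetic behind a planning model of this kind can be sketched in a few lines. This is a simplified illustration only, not VA's actual Integrated Planning Model; the patient volume, length of stay, and occupancy target below are hypothetical values chosen for the example:

```python
# Simplified sketch of a bed-need projection driven by the variables the
# report describes: patients treated per year, average length of stay
# (ALOS), and an assumed target occupancy rate. All values are
# hypothetical, not VA planning data.

def projected_beds(patients_per_year, alos_days, target_occupancy=0.85):
    """Estimate beds needed from annual patients and average stay."""
    average_daily_census = patients_per_year * alos_days / 365
    return average_daily_census / target_occupancy

# Example: 5,000 patients a year staying 8 days on average.
print(round(projected_beds(patients_per_year=5000, alos_days=8)))  # 129
```

In practice a model like VA's applies this kind of calculation separately by age group and medical service and sums the results; the sketch shows only the core relationship between patient counts, stay lengths, and beds.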
In May 1994, the Orlando Naval Training Center Reuse Commission accepted VA’s proposal to convert the 153-bed Naval Hospital into a nursing home and outpatient clinic after the Navy moved out in June 1995. VA announced that this conversion would be done in lieu of its plan to build a new nursing home and clinic. VA estimates that it will spend $1.1 billion over the next 10 years to build and operate the new Brevard Hospital and the nursing homes in Brevard and Orlando. At VA’s request, the Congress provided VA $14 million in fiscal year 1995 to renovate the former Naval Hospital and $17.2 million to develop preliminary designs for a new 470-bed hospital and 120-bed nursing home in Brevard County. VA has requested $154.7 million in its fiscal year 1996 budget to construct the new hospital. VA estimates that it will need $115 million a year to operate the facilities in Brevard and Orlando. Veterans’ use of the 1,595 beds in the three VA hospitals serving Central Florida has decreased over the last 4 years. In 1994, veterans used 1,060 beds a day on average. They used 271 beds in the Gainesville hospital compared with almost 400 beds each in the Tampa and Bay Pines hospitals. Appendix II describes veterans’ use of VA hospitals in Central Florida in more detail. In contrast, veterans’ use of VA nursing home care has increased gradually over the last 4 years. In 1994, veterans occupied about 867 beds a day on average. VA provided about 61.5 percent of this care in its homes; it contracted with community homes for 34 percent and a state home for 4.5 percent. Appendix III describes veterans’ use of VA nursing home care in Central Florida in more detail. Community nursing homes in East Central Florida appear to be able to provide the 240 beds that VA plans to construct. There are 60 nursing homes in East Central Florida that are willing to supply beds for veterans’ use or may be willing to supply beds if contacted by VA.
These homes operate 7,176 beds, including 320 beds that were empty at the time of VA’s 1993 survey. VA determined that these 60 homes would be able to provide only 105 beds based on two questionable assumptions concerning bed availability. First, VA assumed that a nursing home was fully occupied if it had an occupancy rate of 95 percent or higher. Second, VA assumed that beds occupied at the time of its survey would not be available for VA’s use. VA’s assumption that community homes are fully occupied at 95 percent of capacity seems inappropriate because VA routinely contracts with homes that have occupancy rates of 95 percent or higher. For example, VA had contracts with 22 homes in East Central Florida, 17 of which had occupancy rates of 95 percent or higher. Similarly, VA had contracts with 86 homes in other parts of Central Florida and 56 of these homes had occupancy rates of 95 percent or higher. Social workers at the three Central Florida hospitals told us that they were able to place veterans in these homes. By assuming that community homes are fully occupied at 95 percent of capacity, VA determined that only 105 of the 320 empty beds in East Central Florida would be available for its use. Of the 215 empty beds that VA excluded, 86 were in the 22 homes with which VA already had contracts. VA’s assumption that occupied beds will not be available appears inappropriate because occupied beds can be expected to turn over frequently during each year and VA should have a reasonable chance to place veterans in some of these beds. The nursing homes in East Central Florida had 6,856 occupied beds at the time of VA’s survey that were excluded from further consideration. Nationwide, about one-half of the patients admitted to community nursing homes stayed fewer than 83 days, according to the latest government survey of nursing homes. Moreover, only about one-fourth stayed longer than 12 months. 
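VA's screening rule amounts to a simple filter: a home's empty beds are counted only when its occupancy is below 95 percent. The sketch below uses hypothetical home records, not VA survey data, to show how the cutoff discards empty beds in high-occupancy homes:

```python
# Hypothetical nursing home records; "beds" is licensed capacity and
# "occupied" is the census on the survey date. These are illustrative
# values, not figures from VA's 1993 survey.
homes = [
    {"beds": 120, "occupied": 116},  # 96.7% occupied -> excluded by the rule
    {"beds": 100, "occupied": 93},   # 93.0% occupied -> counted
    {"beds": 150, "occupied": 144},  # 96.0% occupied -> excluded by the rule
]

def counted_empty_beds(homes, occupancy_cutoff=0.95):
    """Empty beds counted under a 95-percent occupancy cutoff."""
    return sum(
        h["beds"] - h["occupied"]
        for h in homes
        if h["occupied"] / h["beds"] < occupancy_cutoff
    )

total_empty = sum(h["beds"] - h["occupied"] for h in homes)
print(total_empty, counted_empty_beds(homes))  # 17 empty beds, only 7 counted
```

The same filtering logic applied to the survey data explains how 320 empty beds shrink to 105 counted beds.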
Community nursing home beds appear to be available at prices that are below VA’s costs to construct and operate the 120-bed nursing homes proposed at the Brevard and Orlando sites. Nationwide, VA’s contract costs average $106 a day for a bed. By contrast, VA’s costs are estimated to be $207 a day for a VA-constructed and -operated bed. These costs do not include the depreciation charges associated with the costs of initially constructing the VA nursing homes. VA also has some flexibility in placing veterans in community nursing homes in Florida. If veterans’ needs exceed the capacity of homes in East Central Florida, VA also has available beds in 204 community nursing homes in other parts of Central Florida. For example, VA had contracts with 86 homes that had 525 empty beds. Also, 118 other homes with empty beds had been determined by VA to be willing and able to serve veterans or might be willing to do so if contacted by VA. In our view, it is reasonable to consider these community nursing homes as part of the available bed supply. Many of the veterans using the proposed Brevard hospital will likely reside in parts of Central Florida other than the six East Central Florida counties and, thus, would be placed in homes closer to their residences. Appendix IV describes VA’s assessment of its future need for nursing homes and its survey of community nursing homes in greater detail. VA has a large supply of unused beds in its three hospitals now serving Central Florida veterans and the number of unused beds is expected to increase substantially. Moreover, VA’s use of community nursing homes, as previously discussed, will allow VA to add the former Orlando Naval Hospital’s beds to this supply of available hospital beds. To achieve the most prudent and economical use of resources, VA’s hospital planning should be guided by two objectives. First, VA should make the best use of existing capacity before constructing new space.
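The daily cost gap the report cites can be annualized with back-of-the-envelope arithmetic. The calculation below uses the report's $106 and $207 daily figures and the 240 proposed beds, and assumes, for illustration, that every bed is occupied every day:

```python
# Report figures: contracted community bed versus VA-constructed and
# -operated bed (construction depreciation excluded from the $207).
CONTRACT_COST_PER_DAY = 106
VA_COST_PER_DAY = 207
PROPOSED_BEDS = 240  # 120 beds each at the Brevard and Orlando sites

annual_difference = (VA_COST_PER_DAY - CONTRACT_COST_PER_DAY) * 365 * PROPOSED_BEDS
print(f"${annual_difference:,} more per year for VA-operated beds")  # $8,847,600
```

The actual difference would vary with occupancy, but the calculation illustrates the scale of the recurring cost at stake in the construct-versus-contract decision.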
Second, VA should design new construction to meet veterans’ expected use over a facility’s useful life as efficiently and effectively as possible. Toward this end, it appears that converting unused beds to psychiatric care and using the bed capacity in the former Naval Hospital provide a viable lower-cost option to constructing a new hospital. Veterans’ use of VA beds in its three Central Florida hospitals has declined steadily over the last 4 years. The hospitals have a large supply of unused beds, totaling about 535 daily. Each hospital has more than 160 unused beds. In addition, these hospitals reduced their total bed capacity from 1,858 to 1,595 by removing 263 beds from service and converting the space to other uses, primarily expanded outpatient services such as ambulatory surgery or dialysis. From fiscal year 1991 to 1994, the veteran population in Central Florida was stable but VA projects the population to decrease steadily through fiscal year 2010. An estimated 1.1 million veterans lived in the Central Florida service area in 1994; about 284,000 lived in East Central Florida. By 2010, VA estimates that the veteran population will decrease by 17 percent. Figure 2 shows the expected decrease in veterans’ population in Central Florida. Veterans’ use of beds in VA’s three hospitals in Central Florida is expected to continue declining, due in large part to the decreasing veteran population. VA’s 1994 Integrated Planning Model estimates that veterans will use 350 fewer beds between 1995 and 2010. Thus, the three existing hospitals’ supply of unused beds is expected to increase, providing additional surplus capacity that could be converted to provide the psychiatric care VA plans to construct in the proposed Brevard hospital. Veterans now receive psychiatric care at all VA hospitals in Florida. The three hospitals in Central Florida operated a total of 359 psychiatric beds in fiscal year 1994. Of these, about 94 were unused.
Also, the other two VA hospitals serving Florida veterans operated an additional 228 psychiatric beds, of which 39 were unused. In addition, the VA hospital in West Palm Beach added 60 more psychiatric beds for veterans’ use. These hospitals provide a range of psychiatric services. For example, each hospital initially diagnoses and treats veterans so that their conditions become stabilized. Available services include general psychiatric care (186 beds), geropsychiatric care (36 beds), and substance abuse rehabilitation (90 beds). Most of the psychiatric services are short term, with lengths of stay ranging from fewer than 30 days to 90 days. The types of inpatient psychiatric care planned for Brevard appear comparable with care now provided at these hospitals or with care being considered for implementation at the facilities. For example, VA’s psychiatric design consultant for the Brevard hospital told us that most services would be for acute diagnostic stays of fewer than 30 days and that stays would rarely exceed 90 days. Services are to include substance abuse and posttraumatic stress disorder. Moreover, he stated that veterans in need of further care would be referred to nursing homes with geropsychiatric capabilities or to other facilities. VA’s existing hospitals may be more geographically accessible to veterans, given that VA expects certain veterans from all parts of Florida to receive psychiatric care at Brevard. Using VA’s 1994 Integrated Planning Model, we estimated that veterans in the six East Central Florida counties would account for 41 percent of the expected use (95 beds) and 59 percent of the use (135 beds) would be generated by veterans from other parts of Florida. Therefore, the majority of expected psychiatric patients apparently reside closer to existing VA hospitals than they do to the proposed Brevard hospital.
Figure 3 shows the locations of existing VA hospitals and the percentage of Brevard’s 230 psychiatric beds expected to be used by veterans throughout Florida. Appendix V provides additional information on veterans’ use of psychiatric beds in VA’s existing Florida hospitals and the types of psychiatric care that VA plans to provide in its proposed hospital in Brevard County. The Orlando Naval Hospital has served military beneficiaries for over 20 years. The hospital has 153 beds that provide a range of medical services. On its latest survey, the Joint Commission on Accreditation of Healthcare Organizations awarded the Naval Hospital accreditation with commendation. The hospital consists of an outpatient clinic with a large number of services on the ground floor and three floors of hospital beds. Figure 4 depicts the former Naval Hospital. The former Naval Hospital’s 153 beds could be used to meet VA’s service goals for veterans in East Central Florida. Using VA’s 1994 Integrated Planning Model, we estimated that East Central Florida veterans could be expected to use 148 medical and surgical beds in 2005. For our estimate, we applied veterans’ use rates for the three hospitals in Central Florida to the veteran population in the six counties in East Central Florida, a methodology consistent with VA planning policy. This methodology assumes that East Central Florida veterans’ future use would be comparable with Central Florida veterans’ historical use. Using the former Naval Hospital’s beds would provide a significant opportunity for new users to access VA’s hospital beds. In 1994, about 1 percent of East Central Florida veterans received VA care either at VA hospitals or at community hospitals (at VA’s expense). These veterans used an estimated 80 beds a day, which leaves a sizable number of beds for new users. Of the 80, about 70 were in VA hospitals.
East Central Florida veterans’ hospital use in fiscal year 2005 will not precisely equal the projected use based on Central Florida veterans’ historical use. If veterans’ use of the former Orlando Naval Hospital should exceed its capacity, veterans could be referred to one of the other Central Florida VA hospitals that have a large supply of unused beds. In general, this would appear to be a short-term situation, given the decreasing veteran population and VA’s shifting emphasis from inpatient to outpatient services. If veterans’ use is lower than estimated, there would be unused beds and VA could convert them to other uses, such as nursing home care. VA’s justification for the hospital in Brevard County is based on questionable work load assumptions that if unfulfilled could result in a large supply of unused beds. In addition, VA did not adequately consider the potentially significant effect that the decreasing veteran population may have on veterans’ long-term use. Nor did VA adequately consider the effect that this hospital will likely have on unused beds at existing VA and community hospitals in Central Florida. VA’s decision to build 470 medical, surgical, and psychiatric beds in the Brevard hospital is based on the assumption that East Central Florida veterans’ demand for care will equal veterans’ use of VA hospitals nationwide. By using national VA hospital use rates from its 1993 Integrated Planning Model, VA estimated that veterans in East Central Florida would use 360 beds in fiscal year 2005. VA added an additional 110 beds based on its decision that Brevard would be a statewide resource for psychiatric care. VA rarely uses national VA hospital use rates as a substitute for veterans’ local hospital use rates when projecting potential future hospital use. National VA hospital use rates are almost 50 percent higher than the rates at which Central Florida veterans use existing VA hospitals. 
For example, veterans’ estimated use would be 199 beds (148 medical and surgical and 51 psychiatric beds), based on Central Florida veterans’ past use of the three VA hospitals. VA asserts that East Central Florida veterans’ use will equal veterans’ national use because it assumes that Florida veterans’ past use was suppressed because of the lack of adequate resources in the state and the geographic inaccessibility of VA facilities. VA concluded that resources were inadequate based largely on a bed-availability analysis in which VA showed that the number of VA hospital beds available for Florida veterans was below the national average—about 1.40 beds per 1,000 Florida veterans compared with 2.02 beds per 1,000 veterans nationwide. We do not believe that this comparative analysis demonstrates that resources are inadequate. As previously discussed, there are over 500 unused beds in VA’s Central Florida hospitals and the hospitals have converted 263 beds for outpatient care and other uses. Also, VA hospitals in Central Florida do not have waiting lists. In addition, VA hospital officials told us that sometimes elective surgeries might have to be delayed or some veterans referred to other hospitals, but the veterans get the care they need. Finally, VA’s Central Florida hospitals reported providing or scheduling hospital care for more discretionary-care veterans in 1993, on average, than other VA hospitals nationwide (see fig. II.3). These factors suggest that the availability of VA hospital beds may not be a key factor affecting veterans’ use of VA hospitals in Florida. In this regard, VA has not adequately considered other key factors that may explain lower use rates for Florida veterans. Without information on these factors, VA’s need to build a 470-bed hospital is uncertain.
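The per-1,000 availability gap can be translated into absolute beds. The sketch below applies the report's statewide rates to the roughly 1.1 million veterans in the Central Florida service area; pairing a statewide rate with a service-area population in this way is purely illustrative:

```python
# Report figures: about 1.40 VA hospital beds per 1,000 Florida veterans
# versus 2.02 per 1,000 veterans nationwide.
FLORIDA_RATE = 1.40
NATIONAL_RATE = 2.02
CENTRAL_FLORIDA_VETERANS = 1_100_000  # estimated 1994 service-area population

bed_gap = (NATIONAL_RATE - FLORIDA_RATE) * CENTRAL_FLORIDA_VETERANS / 1000
print(round(bed_gap))  # about 682 beds below the national availability level
```

The calculation shows how sensitive a "shortage" conclusion is to the benchmark chosen: the same population measured against the local rate shows no shortfall at all.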
Among the factors that we believe are likely to have contributed to Florida VA hospitals’ lower utilization rates are differences between Florida veterans’ health status, economic status, and insurance coverage and those of veterans nationwide. For example, Florida has the third largest total Medicare population; about 40 percent of Florida veterans are eligible for Medicare, which affords them choices for selecting health care providers. In addition, the rate disparities may be attributable to differences in the availability of private sector health care between Florida and other states. For example, 15 percent of Florida’s Medicare beneficiaries are enrolled in health maintenance organizations; only four other states have a higher percentage. Such enrollment reduces or eliminates the cost differences (copayments) between VA and private providers. Also, the disparity may be related to differences in operating practices between VA’s hospitals in Florida and its hospitals in other states. For example, our visits to the three Central Florida hospitals suggest that these hospitals may be more aggressively adopting private sector efficiency initiatives, such as shifting inpatient care to lower-cost outpatient settings or shortening lengths of hospital stays by moving patients to alternative settings. The reliability of national use rates as an indicator of future bed use in Florida also seems to be undermined by the results of VA’s 1983 study of veterans’ bed needs in Florida. VA’s Final Report on Future Bed Need and Potential Sites for New VA Hospitals in Florida significantly overestimated the number of beds needed. At the time of its report, VA had 2,916 hospital beds in Florida. The report estimated that veterans would need 5,037 beds in VA hospitals in the state in 1995, an increase of 2,121 beds. By 1994, however, VA reported having 2,642 beds in Florida—274 fewer beds than were cited in VA’s report.
Of the 2,642 beds, veterans used, on average, 1,722 beds a day in VA hospitals in Florida, leaving 920 beds unused. With the new 400-bed hospital in West Palm Beach, VA has a total of 3,042 hospital beds in Florida. Our review of the report has identified two factors that may help to explain this disparity. First, VA deviated from its policy of using local VA hospital utilization rates (for example, those for Florida), and instead used nationwide average utilization rates for VA hospitals to project the future need for hospital beds in Florida. Because the average VA national rates were higher than Florida’s rates, VA’s report contained bed estimates that were higher than they would have been if rates for Florida had been used. Second, VA’s report relied solely on historical use to estimate future use. However, changes in medical practice have occurred, such as reduced lengths of stay and shifts from inpatient to outpatient care. These and other changes in the nation’s rapidly evolving health care delivery practices have contributed to a considerable reduction in hospital bed use. To achieve the increased utilization in VA’s report, Florida VA hospitals would have needed to serve a larger share of the veteran population than they previously did. In our view, the hospitals were unable to achieve the expected level of utilization growth, possibly because VA could not attract enough new veterans or the changing nature of medical care delivery may have reduced veterans’ need for hospital care. VA has evaluated the future use of its Brevard hospital by East Central Florida veterans through the year 2005, about 5 years after the Brevard hospital is expected to open. Using 2005 as the target planning year gives VA its highest estimate of future use.
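The size of the 1983 study's miss can be quantified directly from the report's figures, comparing the projected 1995 need with the beds veterans actually used in 1994:

```python
# Report figures for Florida VA hospitals.
PROJECTED_1995_NEED = 5037  # beds the 1983 study estimated would be needed
ACTUAL_1994_USE = 1722      # average beds veterans used per day in 1994

overestimate = PROJECTED_1995_NEED - ACTUAL_1994_USE
print(f"{overestimate} beds, {overestimate / ACTUAL_1994_USE:.0%} above actual use")
```

The projected need is roughly triple the actual use, which is the core of the report's argument that national use rates are an unreliable basis for forecasting Florida bed demand.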
By using the year 2005 without any adjustments for the expected future decrease in veteran population and increased emphasis on outpatient care, VA has essentially assumed that hospital usage will remain fixed over the useful life of the hospital. This would require the hospital to attract an increasingly larger share of a decreasing veteran population that will be receiving outpatient care intended to keep patients out of the hospital. VA’s proposed hospital in Brevard can be expected to have a 25- to 45-year useful life, based on the operating experiences of other VA hospitals. Even if veterans’ use meets VA’s expectations in 2005, it seems likely, based on VA’s estimates, that the Brevard hospital will face a decreasing work load for most of its useful life. This would result in an increasing supply of unused beds, as is now being experienced by the three VA hospitals in Central Florida as well as others throughout the nation. If veterans’ use falls below VA’s expectations, the surplus of unused beds will be exacerbated. VA has not adequately evaluated the economic impact of shifting large numbers of veterans from private care and other VA hospitals to Brevard. As previously discussed, East Central Florida veterans used about 70 beds a day in VA hospitals during 1994. Thus, VA hospitals would appear to lose this work load because the veterans could be expected to use the Brevard hospital, which would be closer to their residence. Moreover, many new veterans will need to use the Brevard hospital in order to fill the remaining 400 beds. Because these veterans would likely use community hospitals in the absence of Brevard, the local hospitals may realize a comparable decrease in work load. Currently, these hospitals have over 2,300 unused beds, on average, with almost all 22 local hospitals reporting occupancy rates of 56 percent or lower. 
VA’s decision to convert the former Orlando Naval Hospital to a nursing home and build a new hospital in Brevard County was driven by its Integrated Planning Model data. VA’s plans, however, rely on several questionable assumptions concerning the future availability and use of hospital and nursing home beds in Central Florida. Foremost among these is VA’s assumption that its proposed hospital in Brevard County will serve almost twice the number of veteran users as are now served in existing VA hospitals in Central Florida. VA’s ability to attract such a large supply of new users appears uncertain, given the large supply of unused hospital beds in VA and private hospitals in Central Florida as well as the decreasing veteran population and the rapid shifting of medical care from inpatient to outpatient settings. Such uncertainties subject VA to the risk of spending federal dollars to build a hospital with a large supply of beds that may not be used in future years. VA’s use of lower-cost alternatives could meet its service delivery goals and would also avoid the unneeded expenditure of government resources. For example, using available beds at the former Orlando Naval Hospital and converting unused beds at existing VA hospitals for psychiatric or nursing home care will reduce the risk of large unused bed capacity at the proposed Brevard hospital, which appears likely because of expected decreases in the veteran population and VA’s increased reliance on outpatient care to serve veterans. Also, this approach appears consistent with VA’s new network planning strategy, in that it will help to maintain the viability of existing VA hospitals. Without such planning, the existing VA hospitals’ viability may be jeopardized by declining work loads associated with a shifting of veterans to the new Brevard hospital. We recommend that the Congress deny VA’s request for funds to construct a new hospital and nursing home in Brevard County, Florida.
Instead, the Congress should direct the Secretary of Veterans Affairs to develop a lower-cost alternative that reflects a network planning strategy. In this regard, the Secretary should consider using available beds at the former Orlando Naval Hospital, converting unused medical and surgical beds at existing hospitals for psychiatric use, and purchasing care in community nursing homes when beds are unavailable in existing VA nursing homes. We obtained comments on a draft of this report from VA officials, including the Deputy Under Secretary for Health. The officials disagreed with our overall conclusion that there is a more prudent and economical way to achieve VA’s service delivery goals in Central Florida than building a new 470-bed hospital and 120-bed nursing home in Brevard County and converting the former Naval Hospital in Orlando to a nursing home. They stated that their planning efforts clearly documented the need for a new hospital in Brevard to provide access to care for a veteran population that currently does not have reasonable access and gave strong justification for additional nursing home beds in East Central Florida by the year 2005. VA specifically disagreed that it should be able to obtain 240 beds by contracting with community nursing homes. Rather, VA strongly contends that the beds will not be available. This difference of opinion revolves around the soundness of two key assumptions as well as concerns over the adequacy of VA’s survey of current and future community nursing home beds. As previously discussed, VA assumes that the 6,856 beds occupied at the time of its survey will always be unavailable to VA and that 215 empty beds will always be unavailable because the homes holding them have occupancy rates of 95 percent or higher. VA agreed that its survey had missed homes but argued that the 580 beds would have been unavailable because the homes had an occupancy rate of 95 percent or higher.
On this basis, VA determined that it could obtain only 105 beds for veterans in community nursing homes. VA’s assertion does not appear sound given the large number of community nursing home beds in East Central Florida. At the time of its nursing home survey, VA was using about two-tenths of 1 percent of the 7,100 existing community beds. At issue is whether VA could increase its use to 3 to 4 percent (240 beds) of these beds. Our report clearly demonstrates that hundreds of beds in community nursing homes will become available during each year and that VA has a reasonable opportunity to secure needed nursing home beds for veterans. Should this demand exceed supply, our evidence suggests that it is likely that more community nursing homes will be built, thereby providing beds for veterans and nonveterans. VA assumed that it is responsible for building new nursing home bed capacity, rather than allowing the private sector to provide the beds as needed. VA agreed with us that occupied beds will turn over during a year, but VA asserts that such turnover will be infrequent. VA officials stated that patient stays in Florida nursing homes average 247 days. We believe that it is misleading to use an average length of stay when assessing nursing home turnover because patients with long stays tend to skew the average. As previously discussed, about one-half of the patients admitted to community nursing homes stayed fewer than 83 days, according to the latest government survey of nursing homes. We used the median duration of nursing home stays because one-half of all nursing home patients stay no longer than the median. The study that we cited had reported an average length of stay of 401 days. VA stated that it excluded the 215 empty community nursing home beds because the maximum occupancy rate for efficient operation of a nursing home in VA and the private sector is 95 percent.
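The median-versus-average point can be illustrated with a small hypothetical sample of stay lengths: a handful of very long stays pulls the average far above the duration that a typical patient actually experiences.

```python
from statistics import mean, median

# Hypothetical nursing home stays in days (not survey data): most stays
# are short, but two long-stay patients dominate the total bed-days.
stays = [20, 35, 50, 70, 83, 95, 120, 200, 900, 1500]

print(f"median stay:  {median(stays)} days")  # 89.0
print(f"average stay: {mean(stays)} days")    # 307.3
```

With this distribution, a bed turning over at the median rate becomes available roughly four times a year, even though the average suggests fewer than one and a half turnovers; that is why the median is the better gauge of how often occupied beds open up.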
The 60 nursing homes in East Central Florida had an average occupancy rate of 96 percent, and 50 had rates over 95 percent. Given the community nursing homes’ operating practices, it seems reasonable that VA would be able to place some veterans in these beds. Therefore, VA should not exclude such beds from its consideration when planning for nursing home care; they are a resource that VA can use. VA also stated that its methodology adequately considered future construction of new community nursing homes. For East Central Florida, however, VA’s methodology for factoring in new community nursing home construction resulted in an increase of only 14 community nursing home beds. As discussed in our report, East Central Florida has 7 additional community nursing homes, with a capacity of 900 beds, that VA had not included in its survey. In addition, in 1994 the state of Florida approved for construction 1,546 additional community nursing home beds for East Central Florida. We do not believe that VA’s addition of 14 community nursing home beds adequately considers new community nursing home beds. VA disagreed that there was a lower-cost way to improve veterans’ access to VA inpatient care than to construct a new hospital in Brevard. VA expressed concern that we reached our conclusions based on misleading use of data. First, VA questioned our analysis of hospital bed use at the three existing hospitals in Central Florida and its usefulness in evaluating lower-cost alternatives to meet VA’s service goals in Central Florida. Second, VA questioned our use of data on unused beds in community hospitals. Third, VA questioned our assumption that unused beds in VA hospitals will increase over time. VA pointed out, and we agree, that providing VA hospital beds in East Central Florida would give veterans more reasonable access to VA inpatient care than now exists.
VA stated that our references to unused beds in the three existing hospitals leave the impression that those hospitals are readily accessible to veterans in East Central Florida. It is not our intent to suggest that the three VA hospitals are readily accessible, and we have added the distances between the hospitals and East Central Florida to the report. Rather, our analysis shows that beds are available for East Central Florida veterans if they desire to use them, and we intended to demonstrate that the former Orlando Naval Hospital would give veterans more reasonable access to VA inpatient care than now exists. Moreover, our analysis shows that the availability of unused beds in the three existing VA hospitals, when used in conjunction with the former Naval Hospital, could substantially enhance the availability of inpatient care to East Central Florida veterans. Our assessment of veterans’ use of the existing VA hospitals in Central Florida was twofold. First, we examined veterans’ use of existing VA hospitals in Central Florida to have a basis for assessing the adequacy of VA’s projections of veterans’ future demand for hospital beds in East Central Florida. Historical use data for existing VA hospitals show that VA’s use of national rather than local use rates may greatly overestimate the potential use of the proposed hospital in Brevard. Second, we identified unused beds in VA’s existing Central Florida hospitals to determine the potential bed capacity that could be available for (1) referrals if demand exceeds the capacity of the planned hospital in Brevard or the former Orlando Naval Hospital or (2) conversion to other uses, such as psychiatric care. VA stated that it was unclear why we used a work load projection methodology focusing on the three existing VA hospitals in Central Florida. VA asserts that our analysis was not based on the same planning assumptions VA used, which focused on East Central Florida demographics.
We used VA’s work load projection methodology without adjustment. We did, however, apply different veteran utilization data to VA’s East Central Florida demographics; that is, we used historical use rates for three existing VA hospitals in Central Florida, while VA used historical VA hospital use rates for veterans nationwide. As our report shows, the number of beds projected based on national rates is about double the number of beds projected based on local rates. VA stated that we have overestimated the numbers of unused beds in existing VA hospitals. VA contends that there are 158 available unused beds rather than the 535 beds we cited. VA’s adjustment is based on (1) an occupancy rate of 85 percent, which it states is the maximum occupancy rate for operating an efficient hospital, and (2) 1,433 beds in-service at the three existing VA hospitals. While we recognize that using an 85-percent occupancy rate standard may provide a reasonable means of estimating unused beds, we believe that it should be applied to the hospital’s total bed capacity rather than to just those beds now in-service. In this regard, VA’s three hospitals had 162 beds out of service. Using VA’s suggested methodology, this would result in about 300 unused beds in the three hospitals rather than the 158 VA estimated. In any case, our assessment of unused beds was intended to determine whether beds would be available for referrals from Brevard or the former Orlando Naval Hospital or for conversion to other uses, such as psychiatric care. By either VA’s or our estimate, a significant number of beds appear to be available for those purposes. VA also questioned whether the number of unused beds will increase over time. VA stated that whether this will occur due to unresolved issues of health care and eligibility reform or VA’s initiatives to improve patient privacy and increase ambulatory care activities is not known. 
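The competing unused-bed estimates in this exchange differ only in the capacity base to which the 85-percent occupancy standard is applied. A minimal sketch of both calculations, using the bed counts cited above:

```python
# Bed counts from the report: 1,433 beds in service, 162 beds out of
# service, and an average daily census of 1,060 patients.
OCCUPANCY_STANDARD = 0.85
in_service = 1433
out_of_service = 162
average_daily_census = 1060

def unused_beds(capacity_base):
    # Beds usable at the 85-percent standard, minus beds actually occupied.
    return round(capacity_base * OCCUPANCY_STANDARD) - average_daily_census

va_estimate = unused_beds(in_service)                    # VA's basis: in-service beds only
gao_estimate = unused_beds(in_service + out_of_service)  # total bed capacity
print(va_estimate, gao_estimate)
```

Applying the standard to in-service beds yields VA’s figure of 158 unused beds; applying it to the hospitals’ total bed capacity yields about 300.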
Our position that unused beds will increase is based on VA’s future bed use estimates derived from its 1994 Integrated Planning Model. We share VA’s concern about the potential effects of such outside factors on the accuracy of its bed projections. As discussed in our report, such uncertainties raise concerns about the usefulness of basing VA’s estimate of future bed needs solely on veterans’ historical use of VA facilities. VA also expressed concern that our estimate of 2,300 unused beds in local community hospitals was overstated for the same reasons as previously expressed for VA’s unused beds. VA also stated that these beds may not be totally suitable for its use. Our discussion of community beds was focused on the potential economic impact of VA adding more hospital beds in areas that appear to have excess beds and VA’s failure to consider such impact in its planning process. VA disagrees that “unused” beds at VA hospitals in Tampa, Bay Pines, and Gainesville, Florida, could be converted to meet estimated psychiatric bed needs. First, VA states that there are not enough beds in contiguous space available at these VA hospitals to meet the projected need of 230 psychiatric beds, which are proposed for inclusion in the Brevard facility. Second, VA states that the psychiatry programs planned at Brevard are not comparable to care now provided at existing Florida VA hospitals. VA has 10 years to convert beds at its existing hospitals in order to achieve the projected use of the 230 psychiatric beds proposed for Brevard in the year 2005. While we agree that there are not now 230 unused beds in contiguous space at any one hospital, more beds will become available if VA’s inpatient work load continues to decrease as it has over the last 4 years. We believe that VA has the flexibility to consolidate wards at each hospital to provide a portion of the 230 beds.
This would appear to better meet veterans’ needs, because under VA’s plan veterans would have to travel from all over the state of Florida to use Brevard’s psychiatric beds. The existing VA hospitals provide some of the same services proposed for Brevard even though these services are not available as separate programs. In discussing these programs with officials of the existing hospitals, we found that they were planning to introduce some of the programs planned for Brevard or believed that they could introduce them if resources were available. In addition, VA may not need to provide hospital beds to serve chronically mentally ill veterans. Three of the four programs designed for the chronically mentally ill (a total of 80 long-term care beds) are residential treatment programs. According to VA’s manual for mental health programs, these residential psychiatric treatment programs may be on VA medical center grounds or on VA-owned, -rented, or -donated property in the community; that is, this care is not considered to be hospital care. We are sending copies of this report to the Secretary of Veterans Affairs; the President of the Senate and the Speaker of the House of Representatives; the Senate and House Committees on Veterans’ Affairs; the Senate and House Committees on Appropriations; and other interested parties. We also will make copies available to others upon request. Please call me on (202) 512-7101 if you or your staff have any questions concerning this report. Contributors to this report are listed in appendix VI. Representative Bill McCollum asked us to examine VA’s acquisition of the former Orlando Naval Hospital and its intended use for this facility. More specifically, he questioned whether the conversion of the former Naval Hospital to a nursing home is the most economical and prudent use of resources.
Also, he asked us to explore available options and, if possible, suggest a more prudent and economical way for VA to meet its service delivery goals for Florida veterans. We reviewed VA’s policies and procedures and discussed them with officials in VA’s headquarters, its southern region, and its Florida hospitals. We visited VA’s Central Florida facilities (in Tampa, Bay Pines, and Gainesville) and the former Orlando Naval Hospital and discussed operating procedures and practices with directors, associate directors, and their staff. We used VA’s data from various sources, such as its Summary of Medical Programs, bed availability reports, Integrated Planning Model, Distributed Population Planning Base, strategic management planning documents, Five Year Medical Facility Development Plans, budget submissions, annual reports, and medical center documents. We also reviewed several VA studies, including A Thirty Year Study of the Needs of Veterans in Florida, December 1982; Final Report on Future Bed Need and Potential Sites for New VA Hospitals in Florida, June 1983; Florida VA Health Care Plan, July 1991; East Central Florida Siting Options, September 1991; Psychiatric Program Needs in Florida, Results of a Comprehensive One-Day Survey, December 1992; and Study for Conversion of Orlando Naval Hospital to VA Satellite Outpatient Clinic and 120 Bed Nursing Home Care Unit, July 1993. To assess VA’s nursing home planning for Central Florida, we reviewed its planning methodology, assumptions, and data. We reviewed VA’s 1993 Community Nursing Home survey and VA’s nursing home directives and guidance. We interviewed VA’s nursing home planners in VA’s central office and its southern region. In addition, we interviewed the chiefs of social work services at the VA hospitals in Tampa, Bay Pines, and Gainesville and reviewed their nursing home data. We obtained nursing home cost data from the southern region and other VA documents.
In addition, we contacted Florida state officials from the Agency for Health Care Administration and the Certificate of Need Office to obtain information about community nursing home beds approved for construction and the state’s future plans to approve additional community nursing home beds. Also, we contacted Florida state officials from the Department of Veterans Affairs to determine the state’s future plans for constructing additional state nursing home beds. Veterans from East Central Florida are included in the service areas of the VA hospitals in Gainesville, Tampa, and Bay Pines. To determine the total number of VA hospital beds available in these hospitals, we reviewed VA’s data, interviewed VA officials from these hospitals, and toured each hospital to observe closed and converted hospital beds. Also, we obtained documents from each facility explaining the changes in the number of beds over time. In addition, we obtained information from VA’s reports on the number of hospital beds used by veterans on an average daily basis over the last 4 years. We compared the total number of hospital beds available with the number of beds used on an average annual daily basis to determine the estimated number of unused beds at these VA hospitals. Unused VA hospital beds include beds in operating and closed wards. VA uses its Integrated Planning Model to project future veteran inpatient, outpatient, and nursing home work loads. The model assists VA in determining the future size and scope of VA health care, developing construction and operational plans, and contributing data for budget requests. The model is applied at the facility-specific level. The model is primarily driven by three variables: veterans’ ages, average lengths of hospital stays for selected medical services (for example, surgery or psychiatry), and the number of patients treated in the selected medical services. VA requires that any deviations from the model’s projections be quantitatively justified.
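The unused-bed comparison described above is a subtraction per hospital. In the sketch below, the per-hospital splits are hypothetical; only the Central Florida totals of 1,595 beds available and 1,060 beds used on an average day come from the report:

```python
# Hypothetical per-hospital splits; only the totals match the report's figures.
total_beds = {"Tampa": 680, "Bay Pines": 560, "Gainesville": 355}
avg_daily_use = {"Tampa": 450, "Bay Pines": 370, "Gainesville": 240}

# Unused beds = total beds available minus beds used on an average day.
unused = {name: total_beds[name] - avg_daily_use[name] for name in total_beds}
total_unused = sum(unused.values())

print(unused)
print("total unused:", total_unused)
```

At the report’s totals this yields 535 unused beds (1,595 minus 1,060), the figure cited for fiscal year 1995.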
To compare the number of available VA hospital beds to the expected future veteran demand for VA hospital care in Gainesville, Tampa, and Bay Pines, we used the results from VA’s 1994 Integrated Planning Model. We totaled the VA model’s estimates of the number of future hospital beds for each of these facilities to determine veterans’ future demand for Central Florida hospital beds in the years 1995-2010 (in 5-year increments). The difference between the number of VA hospital beds available today and the total estimated future demand equals the estimated surplus or shortage of VA hospital beds in the future. For estimating the number of future hospital beds for its new hospital in Brevard County, VA used its national historical hospital use rates. To update VA’s estimate based on its 1993 Integrated Planning Model, we used more current information from VA’s 1994 Integrated Planning Model and applied it to the veteran population in VA’s defined service area for the hospital in Brevard County. In addition, we combined VA’s 1994 Integrated Planning Model results (based on historical facility usage) for Tampa, Bay Pines, and Gainesville to estimate the future number of beds for VA’s proposed hospital in Brevard County if veterans in the future continue to seek hospital care at the same level as they have in the past. VA’s proposed hospital in Brevard will serve as a statewide psychiatric resource for Florida. To assess and compare psychiatric services at the VA hospitals in Tampa, Bay Pines, and Gainesville and VA’s planned psychiatric services for its hospital in Brevard County, we interviewed the chiefs of psychiatric services at the hospitals, VA’s regional planners, and the psychiatric consultant for the region who is designing the services for VA’s hospital in Brevard (VA’s chief of psychiatry in Dallas). We reviewed VA manuals and studies pertaining to psychiatric services and toured psychiatric wards in Tampa, Bay Pines, and Gainesville.
In addition, we interviewed chiefs of psychiatry to gain an understanding of caring for long-term psychiatric patients and to identify studies that may assist in estimating the number of long-term care patients that may need hospital beds. In addition, we interviewed the chiefs of psychiatry at VA’s psychiatric hospitals in Tuscaloosa, Alabama, and Augusta, Georgia, to obtain information about bed availability and acceptance of patients from outside their service areas. These hospitals also serve as referral centers for Florida veterans. Also, we interviewed officials from the four Florida state psychiatric hospitals about current and future bed availability. We used three basic criteria to guide our assessment of VA’s prudent and economical use of resources in East Central Florida. First, VA should make the best use of existing space before constructing new space. Second, VA should purchase from private providers rather than constructing new facilities if needed services can be purchased at a cost savings. Third, VA should design new construction to meet veterans’ expected use over a facility’s useful life as efficiently and effectively as possible. We conducted our review between June 1994 and June 1995 in accordance with generally accepted government auditing standards. Central Florida VA hospitals are located in Bay Pines, Tampa, and Gainesville. The current service areas for these hospitals include the veterans from East Central Florida. Recent VA experience shows that hospital bed use is declining, that hospital beds are unused, and that the number of unused VA hospital beds is expected to increase in future years. VA hospital bed use in Central Florida declined steadily between 1991 and 1994. The decline in bed use affects medicine, surgery, and psychiatry, as the figures below illustrate.
Figure II.1: Decline in Central Florida VA Medical and Surgical Hospital Beds Occupied (Fiscal Years 1991-94)

Almost all veterans receiving hospital care in Central Florida had medical conditions related to military service or low incomes. However, VA’s Central Florida hospitals reported providing or scheduling hospital care for more discretionary veterans in 1993 than other VA hospitals did, on average, nationwide, as figure II.3 shows. While the veteran population was decreasing nationwide during fiscal years 1991 to 1994, the veteran population in Central Florida remained stable, as figure II.4 shows. In its 1983 Final Report on Future Bed Need and Potential Sites for New VA Hospitals in Florida, VA reported a need for additional hospital beds in Florida. Since then, however, the Central Florida VA hospitals have converted 263 hospital beds to other uses, most of them for ambulatory services. The conversions reduced their total bed capacity from 1,858 to the present 1,595 beds. In addition, as a result of the steadily declining inpatient work loads, the VA hospitals in Central Florida have unused beds. If veterans’ hospital usage continues at the 1994 level (an average of 1,060 hospital beds occupied daily), 535 of the 1,595 VA hospital beds may be unused in fiscal year 1995. All three Central Florida VA hospitals reported having unused beds, as shown in table II.1. The unused VA hospital beds are in each of the hospital services, as depicted in figure II.6. VA planning data project that the future veteran population in Central Florida will decrease. Figure II.7 shows the future veteran population estimates through fiscal year 2010. The veteran population nationwide began decreasing in 1980, 14 years before Central Florida’s began decreasing in 1994.
The veteran population in Central Florida is expected to decrease at a slower rate from 1995 to 2005 compared with the national rate. VA’s 1994 Integrated Planning Model estimates that hospital bed use at its three facilities in Central Florida will decline over the next 15 years. Figure II.8 shows that the number of unused VA hospital beds is expected to increase; the increase is depicted as the gap between the figure’s two lines, estimated bed use (at Central Florida rates) and VA hospital bed supply. The decline in future estimated beds is attributable, in part, to the decreasing veteran population and changes in medical practice, such as shorter lengths of stay and VA’s emphasis on ambulatory care. There are three types of nursing home providers, and VA has established target goals to guide hospitals in achieving a desired mix among the providers. Generally, VA discharges veterans from its hospitals to nursing homes for rehabilitation. VA’s cost of providing veterans nursing home care varies by type of provider. The number of nursing home beds that VA provides for veterans in Central Florida has been increasing over the last 4 years. Nursing home bed usage increased by about 16 percent from fiscal year 1991 to 1994, as figure III.1 shows. VA sponsors nursing home care through three programs: (1) VA-owned and -operated nursing homes, (2) contract community nursing homes, and (3) state veterans’ nursing homes. All three programs treat veterans with conditions that may be either service-connected or nonservice-connected, and all can provide either skilled or intermediate nursing home care. Use of VA-owned nursing homes increased in Central Florida. VA has three nursing homes in Central Florida with a total of 600 beds. These homes served 1,218 veterans in fiscal year 1994. Figure III.2 shows veterans’ usage of VA-owned and -operated nursing homes.
As of its 1993 survey, VA had contracts with 108 community nursing homes in Central Florida that have a total of 13,995 beds. In fiscal year 1994, VA’s contract nursing homes served 1,040 veterans. Figure III.3 shows veterans’ use of community nursing homes in Central Florida for the past 4 years. Florida opened its first state nursing home (120 beds) for veterans in December 1993. It reported that 135 veterans used 38 beds on an average daily basis for fiscal year 1994. The home is expected to reach its normal operating capacity in fiscal year 1995. In Central Florida, VA provides much more of the veterans’ nursing home care in its own homes than it pays for in the community or state homes. VA nursing home care is more expensive than the other two programs. VA’s nursing home goals are to provide 30 percent of the care in VA homes, 40 percent in community homes, and 30 percent in state homes. Figure III.4 shows the percentage of nursing home care that veterans received by type of provider in 1994. VA’s costs of providing nursing home care to veterans vary by the provider. Placing veterans in state nursing homes is the least expensive for VA, followed by community nursing homes. The most expensive care is provided at VA nursing homes. VA’s nationwide average costs for providing nursing home care, along with average lengths of stay in days, are shown in table III.1. According to VA, nursing home costs are higher in VA than in community nursing homes because VA nursing homes are hospital based, with all the clinical resources that implies; VA has a much higher ratio of registered nurses; VA treats a much higher ratio of patients requiring skilled care; and VA pays its nurses more than do community nursing homes. The state veterans’ nursing homes provide a range of nursing home care that is cost effective for VA in that costs are shared by VA, veterans, and the states. The state nursing homes are state-owned and -operated.
VA makes per diem payments to offset part of the cost of care for veterans residing in state homes and pays up to 65 percent of the costs of constructing or renovating state homes. VA’s planning for nursing home care consists of two principal activities. First, VA estimates veterans’ future use for a target year. Second, VA surveys the availability of community and state nursing homes. VA makes its construction decisions based on a comparison of veterans’ projected use and the potential availability of beds in community and state homes. VA has established a national nursing home care goal and makes decisions to build new VA facilities based on the future demand estimated to be required to meet that goal. Veterans’ future demand for nursing home care is based on the premise that veterans will require nursing home care at the same rate as male civilians do. VA applies the male civilian nursing home use rate to the estimated veteran population to determine the total estimated future veteran demand for nursing home care. VA’s goal is to provide nursing home care under VA auspices to 16 percent of the total estimated future veteran demand, commonly referred to as VA’s market share. Although VA’s goal is to provide 16 percent of the total estimated future veteran demand, VA’s actual share was about 9.2 percent in the Central Florida area in fiscal year 1994. VA’s share has remained stable over the last 4 years, as figure IV.1 shows. The number of nursing home beds needed in East Central Florida depends on whether veterans will continue to use Florida nursing homes at the same rate as they have over the past 4 years or whether their use rate will increase to the higher level that VA is expecting. Table IV.1 shows the differences in estimated demand and the bed supply shortage based on East Central Florida’s actual market share. During 1993, VA evaluated 71 community nursing homes in East Central Florida.
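The demand projection described above multiplies a use rate by a population and then takes VA’s market-share goal. In this sketch, the veteran population and civilian use rate are hypothetical placeholders; only the 16-percent goal and the 9.2-percent actual share come from the report:

```python
# Hypothetical inputs: a 200,000-veteran service area and a 2.5-percent
# male civilian nursing home use rate.
veteran_population = 200_000
civilian_use_rate = 0.025

# Total estimated future veteran demand for nursing home care (in beds).
total_demand = veteran_population * civilian_use_rate

va_goal = total_demand * 0.16      # VA's 16-percent market-share goal
va_actual = total_demand * 0.092   # VA's actual 9.2-percent share in 1994

print(round(total_demand), round(va_goal), round(va_actual))
```

The gap between the goal and actual figures is what drives the projected bed shortage: planning at 16 percent calls for roughly three-fourths more beds than the share veterans have actually used.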
VA made judgments about the future availability of 8,435 community nursing home beds based on the homes’ occupancy rates, personal knowledge, or contacts with selected homes. Table IV.2 shows the results of VA’s assessment in East Central Florida. VA determined that 11 community nursing homes in East Central Florida that had 1,259 beds were not suitable for placing veterans because these homes (1) were not interested in contracting with VA or (2) did not meet VA standards. This reduced the number of potential community nursing homes to 60 and the number of beds to 7,176. VA determined that the remaining 60 homes in East Central Florida would be able to provide 105 beds in the future. VA excluded the remaining beds from its consideration for future use based on two questionable assumptions concerning bed availability. First, VA assumed that beds occupied at the time of its survey would not be available for VA’s future use. Second, VA assumed that a nursing home was fully occupied if it had an occupancy rate of 95 percent or higher. The numbers of occupied and empty community nursing home beds in East Central Florida are shown in table IV.3. VA excluded 6,856 community nursing home beds in East Central Florida from its consideration based on its assumption that occupied beds would not be available for future VA use. Patient turnover in community nursing homes gives VA opportunities to place veterans in some of these beds. VA excluded 215 of the 320 empty community nursing home beds from its consideration based on its assumption that community nursing homes are fully occupied at 95-percent capacity. Of the 215 empty beds, 86 were in community nursing homes that had contracts with VA. VA’s determination of available and unavailable empty community nursing home beds is shown in table IV.4. VA overlooked community nursing homes in East Central Florida.
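The survey arithmetic above reduces to three subtractions; all of the bed counts below come from the report’s tables IV.2 through IV.4:

```python
# East Central Florida community nursing home survey, 1993.
surveyed_beds = 8435        # beds in the 71 homes VA evaluated
unsuitable_beds = 1259      # homes not interested or not meeting VA standards
occupied_beds = 6856        # excluded: occupied at the time of the survey
near_full_empty_beds = 215  # excluded: empty, but home was 95% or more occupied

suitable_beds = surveyed_beds - unsuitable_beds     # beds in the 60 remaining homes
empty_beds = suitable_beds - occupied_beds          # empty beds in those homes
available_beds = empty_beds - near_full_empty_beds  # VA's answer

print(suitable_beds, empty_beds, available_beds)
```

The chain ends at the 105 beds VA considered available, which is why the two exclusion assumptions (occupied beds never turn over, and homes at 95 percent are full) drive the result.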
At the time that VA conducted its survey, four nursing homes with a total of 580 beds were inadvertently omitted from the list of homes under consideration. In addition, we subsequently identified three new community nursing homes that are operating in East Central Florida. The three homes have a total of 320 beds. The total number of community nursing home beds in East Central Florida is 9,335, some 900 beds higher than the number VA surveyed in 1993. During 1993, VA evaluated 322 community nursing homes in Central Florida. VA made judgments about the future availability of 37,892 community nursing home beds based on the homes’ occupancy rates, personal knowledge, or contacts with selected homes. Table IV.5 shows VA’s survey determinations concerning licensed community nursing home beds in Central Florida. VA determined that 58 community nursing homes in Central Florida that had 6,445 beds were not suitable for placing veterans because these homes (1) were not interested in contracting with VA, (2) did not meet VA standards, or (3) were not Medicare/Medicaid certified. This reduced the number of potential community nursing homes to 264 and the number of beds to 31,447. VA determined that the remaining 264 community nursing homes in Central Florida would be able to provide 382 beds in the future. VA excluded the remaining beds from its consideration for future use based on two questionable assumptions concerning bed availability. First, VA assumed that beds occupied at the time of its survey would not be available for VA’s future use. Second, VA assumed that a nursing home was fully occupied if it had an occupancy rate of 95 percent or higher. The numbers of occupied and empty community nursing home beds in Central Florida are shown in table IV.6. VA excluded 30,015 community nursing home beds in Central Florida from its consideration based on its assumption that occupied beds would not be available for future VA use.
Patient turnover in community nursing homes provides VA opportunities to place veterans in some of these beds. VA excluded 1,050 empty community nursing home beds in Central Florida from its consideration based on its assumption that community nursing homes are fully occupied at 95-percent capacity. Of the 1,050 empty beds, 496 were in community nursing homes that had contracts with VA. VA’s determination of available and unavailable empty community nursing home beds is shown in table IV.7. VA overlooked community nursing homes in Central Florida. At the time VA conducted its survey, nine nursing homes with a total of 1,138 beds were inadvertently omitted from the list of homes under consideration. In addition, we subsequently identified 15 new community nursing homes that are operating in Central Florida. The 15 homes have a total of 1,534 beds. The total number of community nursing home beds in Central Florida is 40,564, some 2,672 beds higher than the number VA surveyed in 1993. VA’s 1993 nursing home survey did not consider the addition of new community nursing home beds in Florida. The state’s Certificate of Need Office approved for construction 5,176 community nursing home beds in the Central Florida area, 1,546 of which will be located in East Central Florida. The certificates of need require construction to commence within one year from approval or the approval becomes void. The certificates were effective on July 1, 1994, and September 16, 1994. When completed, these additional community nursing home beds will be available to help VA better serve Florida veterans, enable VA to expand its community nursing home program, and reduce VA’s need to construct new homes of its own. VA’s 1993 survey included consideration of the one state nursing home in Florida. However, officials at the Florida Department of Veterans Affairs told us that their long-term plans include building four more 120-bed state nursing homes by 2010. 
Funding for the second state home is being discussed in the Florida legislature, and the remaining three homes are proposed for the future. The locations of the three future state nursing homes have not been determined. According to a VA official, the state nursing home currently being discussed in the state legislature will be a state home for veterans with dementia and Alzheimer’s disease. In Florida, VA has hospitals in Tampa, Bay Pines, Gainesville, Lake City, Miami, and West Palm Beach; each hospital provides psychiatric care. Recent experience shows that veterans’ use of psychiatric beds has declined slightly. The proposed VA hospital in Brevard County will also provide inpatient psychiatric care, which appears comparable to care now provided at VA’s existing hospitals in Florida. The three levels of psychiatric care traditionally identified by VA are acute, intermediate, and long-term care. Acute psychiatric care is used to diagnose and stabilize psychiatric patients and has a length of stay of about 30 to 60 days. Intermediate care is used for rehabilitation and transitional care and has a length of stay of up to 90 days. Long-term care has an indefinite length of stay and is used for chronically mentally ill veterans. VA has no designated long-term care hospital psychiatric beds in its five existing Florida hospitals. Patients requiring long-term psychiatric care are being evaluated and diagnosed in available hospital beds. VA attempts to transfer some of these patients either to one of Florida’s four state psychiatric facilities or to a VA psychiatric facility out of state. In addition, some of these patients are being treated in VA and community nursing homes that have such capability. VA plans to treat some of these patients in residential programs. The five existing VA hospitals in Florida operate a total of 587 psychiatric beds. Table V.1 shows the number of psychiatric beds in each VA hospital.
Since fiscal year 1991, the availability of psychiatric beds has increased because veterans have used fewer beds, as shown in figure V.1. For fiscal year 1994, veterans occupied an average of 454 beds daily, leaving 133 beds unused. Nevertheless, VA plans to increase its number of psychiatric beds from 587 to 877: the new VA hospital in West Palm Beach adds 60 psychiatric beds, and the proposed VA hospital in Brevard County will add 230 more.

VA’s 1994 Integrated Planning Model estimates that psychiatric bed use at its three facilities in Central Florida will decline over the next 15 years. Figure V.2 shows that the number of unused VA psychiatric beds is expected to increase; the increase is depicted as the gap between the estimated bed use and bed supply lines.

Figure V.2: Estimated Increase in Unused VA Psychiatric Beds in Central Florida (Fiscal Years 1995-2010)

In addition to out-of-state VA facilities, many veterans in Florida in need of long-term psychiatric care received this care at one of the four state psychiatric hospitals. In December 1992, VA reported that 414 veterans resided in the state facilities, representing 14 percent of the total population in Florida state hospitals. Florida pays for this care.

Current VA policy emphasizes rehabilitation of psychiatric patients, and VA’s medical practice is shifting away from the custodial role. Long-term psychiatry is no longer described as a level of VA care. Rehabilitative programs are offered as alternatives to long-term care, and outpatient, residential, and community-based treatment programs are also presented as alternatives to inpatient psychiatric care. VA’s policy states that a significant number of patients who now reside in long-term care facilities may be reintegrated into the community when a comprehensive, flexible case management policy is implemented.
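The psychiatric bed figures cited above can be cross-checked the same way; this short Python sketch (ours, not VA’s) uses only numbers from the report:

```python
# Cross-check of the VA psychiatric bed figures for Florida.
# All numbers come from the report; variable names are illustrative.
current_beds = 587        # psychiatric beds in VA's five Florida hospitals
avg_daily_census = 454    # average beds occupied daily in fiscal year 1994
unused_beds = current_beds - avg_daily_census

west_palm_beach_beds = 60   # added by the new West Palm Beach hospital
brevard_beds = 230          # planned for the proposed Brevard County hospital
planned_total = current_beds + west_palm_beach_beds + brevard_beds

print(unused_beds)     # -> 133
print(planned_total)   # -> 877
```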
Case management is used to provide veterans with an ongoing connection to VA so that medical, psychosocial, and vocational services can be planned and maintained for veterans whose symptoms affect their life management skills. The case management approach involves planned and systematic use of the full range of VA and community services and requires a dual focus on meeting veterans’ needs and conserving agency and community resources. VA’s policy also states that patients should be encouraged to receive their treatment near their homes and within one medical center. Although VA now focuses on rehabilitative care, it recognizes that some patients may require prolonged hospital treatment because they do not respond to current medications and behave in unpredictable and destructive ways.

In addition to focusing on rehabilitative psychiatric care, VA will provide more psychiatric services on an outpatient basis, through clinics and through residential and community-based care. For example, the Chief of Psychiatry at Bay Pines is planning to consolidate and reduce the current number of psychiatric beds from 149 to 120 in order to provide more outpatient psychiatric services. In another example, the Psychiatric Service at the VA hospital in Houston, Texas, adopted ambulatory care as the main mode of treatment and integrated inpatient and ambulatory care to provide a continuum of care.

The state psychiatric hospitals are also considering community programs as a viable alternative to inpatient care. One state hospital closed 112 beds to use the savings for community programs, a second has diverted budgeted funds toward developing community programs, and a third is considering closing beds to use the savings for community programs.
VA’s Florida network officials justify providing psychiatric beds at the hospital planned for Brevard County on the basis that Florida currently has a lower ratio of VA psychiatric beds to veterans than the national average. The hospital is intended to provide a statewide resource of long-term care psychiatric beds that are not currently available in VA’s Florida network. Generally, long-term psychiatric care requires lengths of stay longer than 12 months. However, VA’s psychiatric design consultant told us that no long-term psychiatric hospital beds are planned for the Brevard facility. Instead of long-term inpatient care, residential psychiatric treatment programs will be used when appropriate. Furthermore, most of the inpatient psychiatric services planned for the hospital in Brevard are comparable to services that exist or are planned at VA’s three facilities in Central Florida.

The psychiatric beds planned for Brevard consist of acute, intermediate, and long-term beds. The psychiatric treatment programs designed for long-term care patients generally have unspecified lengths of stay. Of the four such programs, the 15-bed sustained medical/psychiatric unit is the only inpatient program, and its defined length of stay is shorter than 12 months. The other three are residential (nonhospital) programs with a total of 80 beds: 20 beds for a substance abuse residential rehabilitation treatment program, 30 beds for a posttraumatic stress disorder residential rehabilitation program, and 30 beds for a psychiatric residential rehabilitation treatment program. VA’s manual for mental health programs states that residential programs may be on the VA medical center grounds or on VA-owned, -rented, or -donated property in the community.

The following provides a description of the psychiatric services planned for VA’s hospital in Brevard County, as defined in VA’s manual for mental health programs.
The general psychiatric unit offers psychiatric and psychosocial diagnosis and treatment in a hospital environment for new patients as well as for patients experiencing a recurrence of an illness who cannot be assessed or treated at a lesser level of care. The primary objective is to provide this treatment in a relatively short duration, such as 10 to 20 days, and occasionally 30 to 40 days, and then assist in locating the appropriate follow-up needed for successful treatment at a less intensive level of care. Length of stay: Fewer than 30 to 40 days.

This unit offers the same diagnosis and treatment described above but for patients with dual diagnoses of both psychiatric and medical problems. Length of stay: Fewer than 30 to 40 days.

The psychiatric intensive care unit (PICU) offers a smaller unit, increased staffing, security (safe quiet/seclusion rooms), and more specialized clinical expertise than a general psychiatric ward. A PICU may be within or adjacent to a 20- to 30-bed admitting or general psychiatric ward. Patients admitted to this level of care have the most severe behavioral problems, including high suicide risk, assaultive behavior, severe agitation, disorganized behavior secondary to psychosis, confusion, or other severe psychiatric disorders. Psychiatric patients with such symptoms may be rapidly stabilized in such a unit, obviating the need for transfer to a long-term or more secure facility often some distance away. Length of stay: Fewer than 30 to 40 days.

These programs are designed as part of a continuum of care for elderly patients with depressive, organic brain (for example, dementia), or other psychiatric disorders, including patients with medical comorbidities. The focus is on evaluation, stabilization, and a relatively brief stay. Programs may include respite beds to relieve caretakers and a brief-stay Alzheimer’s/dementia unit. Length of stay: Fewer than 30 to 40 days.
This program offers a short-term, high-quality setting in selected VA medical centers to veterans with combined medical and psychiatric problems who cannot be evaluated, treated, or managed appropriately in existing medical or psychiatric settings. The setting concentrates staff skilled in both medical and psychiatric areas. Length of stay: Fewer than 30 to 40 days.

The essence of the sustained treatment and rehabilitation (STAR I) level of care is its emphasis on sustained treatment and rehabilitation for varied groups of patients who have failed to achieve sufficient recovery in 90 days to be discharged to a nursing home, domiciliary, or community residential level of care. Patients in STAR I have medical, neurological, and psychiatric disorders that interact in such a way as to make care in traditional long-term psychiatric or medical programs (including traditional nursing homes) difficult or impossible. Length of stay: Fewer than 12 months.

This program offers patients with drug, alcohol, and other chemical abuse and dependency disorders an intense, brief treatment of withdrawal symptoms; evaluation of physical, psychological, social, and vocational problems; family interventions; and initiation of individual and group therapies and support groups that may be continued on an outpatient basis. Patients who require longer periods of inpatient treatment may be transferred to a less intensive level of care or to community Contract Half-Way House Programs. Length of stay: Fewer than 30 days.

These programs provide an inpatient rehabilitation setting for veterans with serious chemical dependency who require more than detoxification or a brief stay because they remain at significant risk of resuming their abuse problems on return to the community. Length of stay: Fewer than 90 days.
Residential programs are structured, supervised, 24-hour-a-day therapeutic settings that embody strong treatment values with peer and professional support to chronically mentally ill (CMI) veterans in need of extended rehabilitation and treatment. These veterans have mental disorders such as schizophrenia, depression, and anxiety. Residential programs may be on VA medical center grounds or rented or donated property in the community. Length of stay: Not specified.

A residential program that provides intense rehabilitation for drug and alcohol addictions. Length of stay: Not specified.

A residential program that provides treatment for patients with PTSD who are unable to be treated in an outpatient setting. Length of stay: Not specified.

For Central Florida, the VA hospitals have or plan to have psychiatric services similar to the proposed VA hospital in Brevard County. The VA hospitals in Central Florida discharge long-term care psychiatric patients to other facilities or programs. Table V.2 shows the psychiatric bed sections currently available to veterans in Central Florida.

In addition to those named above, the following individuals made important contributions to this report. Beverly Brooks-Hall provided the information on VA’s nursing home program. Bonnie Anderson provided information on VA’s psychiatric care.
Pursuant to a congressional request, GAO reviewed the Department of Veterans Affairs’ (VA) plans to provide accessible medical and other services to veterans in East Central Florida, focusing on: (1) VA acquisition of the former Orlando Naval Hospital; (2) whether the conversion of the hospital to a nursing home is the most economical use of VA resources; and (3) whether more prudent and economical options exist to meet VA service delivery goals for Florida veterans.

GAO found that: (1) VA conversion of the former Naval hospital to a nursing home and the construction of a hospital and nursing home in Brevard County are not a prudent and economical use of VA resources; (2) VA planning assumptions are questionable, particularly those regarding the availability of community nursing home beds and unused VA hospital beds, and the potential decrease in future demand for VA hospital beds; (3) VA could meet its service delivery goals by using existing capacity, which would result in lower costs and greater convenience for the veterans; (4) preserving the Orlando hospital as a hospital would improve the geographic accessibility of VA medical and psychiatric services at a lower cost; (5) the number of unused VA hospital beds is expected to increase because of the projected decline in the veteran population; (6) VA could convert some of the unused medical and surgical beds in the three central Florida hospitals to psychiatric beds to make those beds more geographically accessible to all Florida veterans rather than concentrating them at the new hospital in Brevard County; (7) construction of the Brevard hospital is not justified, since VA greatly overestimated veterans’ potential use of Florida VA facilities; and (8) VA needs to focus its strategy on the most prudent and economical use of its limited resources and avoid unnecessary expenditures while meeting its service delivery goals in a more timely manner.
To help respond quickly to crises overseas, the military services store, or preposition, military equipment and supplies on land and on ships near potential conflict areas. With these stocks prepositioned near danger spots, U.S. response times to a crisis are shortened, since only the troops and a relatively modest amount of materiel must be brought by air to an area where the stocks are located. With fewer troops stationed abroad today, prepositioning has become increasingly important. All four military services have programs to preposition a broad range of stocks to be used for various purposes. Some stocks are positioned afloat, which allows responsiveness nearly anywhere in the world, and other stocks are stored ashore near the likely areas of conflict in the Persian Gulf and Korea. This report focuses on the Army and Air Force prepositioning programs because of concerns that emerged about the sufficiency, condition, and management of their prepositioned stocks. The Navy and Marine Corps prepositioning programs are discussed in appendix I. The goal of prepositioning programs is to make military equipment and supplies available to deploying forces faster than would otherwise be possible. The U.S. military can deliver equipment and supplies in three ways: by air, by sea, or by prepositioning. While airplanes travel quickly, they are expensive to use and impractical for moving all the materiel needed for a large-scale deployment. And though ships can carry large loads, they are slow. Prepositioning lessens the strain of using expensive airlift and reduces reliance on relatively slow sealift deliveries. In its 1997 Annual Report to the President and the Congress, DOD noted that moving an Army brigade of soldiers and 20,000 tons of equipment from the United States to the conflict area (by sea and air) would take 20 to 30 days. 
By contrast, fully deploying a prepositioned brigade should take just 4 days because only the soldiers, with a small amount of equipment, would be flown to the location of the prepositioned stocks. Although the concept of prepositioning is not new, it has gained importance in the post-Cold War world. Since its 1993 Bottom-Up Review, DOD has focused on maintaining capabilities to fight and win major conflicts in the Persian Gulf region and on the Korean peninsula. Concerned about the reduction in U.S. forces overseas and their ability to move forces in the time required to resolve potential conflicts quickly, the services have expanded prepositioning ashore and on ships in those regions. In the Persian Gulf, where the United States has few permanent forces, prepositioned stocks would be the primary source of combat equipment for ground troops and would be critical in setting up air bases there. In Korea, the Army has prepositioned a brigade set to augment the combat capabilities of U.S. forces there. While prepositioning figured prominently in the previous mobility studies performed by DOD, the Quadrennial Defense Review completed in 1997 did not consider prepositioning as a major part of its scope. Instead, DOD officials told us that prepositioning was to be reconsidered as part of the planned update to the mobility studies, scheduled to begin in 1999. The services generally measure the readiness of prepositioned stocks by determining their inventory fill and maintenance condition—that is, do they have the required stocks on hand and are those stocks in condition to fulfill the mission. To assess inventory fill, the rate of fill is compared to the requirements. These requirements must be valid to achieve a reliable objective assessment. To assess both inventory levels and maintenance condition, the services must have reliable information about the on-hand stocks and their current condition. 
Other factors affecting the readiness of the prepositioned stocks are their location, that is, whether they are close to where they are needed, and the training of the units that will use them. Unless required stocks are available and in good condition, the purpose of prepositioning may be defeated because the deploying unit will lose valuable time repairing or replacing equipment.

To provide a context within which to assess the services’ programs, we used the Government Performance and Results Act of 1993 (GPRA), which suggests that agencies working toward results-oriented management should take three steps: (1) defining their mission and identifying desired outcomes; (2) measuring performance; and (3) using performance information to improve organizational processes, identify gaps, and set goals for improvement. We have published several reports about prepositioning, including three in 1997 about various aspects of the Army’s program. A list of related reports by GAO and other organizations is at the end of this report.

The Army prepositions materiel for three primary programs: prepositioned equipment sets, operational projects, and sustainment stocks. This materiel ranges from Abrams tanks to cold weather clothing. In 1992, the Army shifted responsibility for managing these stocks, except for medical items, from its theater commanders to the Army Materiel Command. The purpose of the shift was to establish a common stockpile of equipment to support worldwide requirements. According to the Army, the budget for operating and maintaining its prepositioning programs in fiscal year 1997 was about $536 million.

The Army’s goal for prepositioning is to establish eight brigade sets, seven of which are fully or partially in place. Each brigade set contains tanks, Bradley fighting vehicles, artillery pieces, trucks, and other rolling stock to support three or four battalions of Army combat troops, or about 3,000 to 5,000 soldiers.
A support battalion is placed with each brigade set to maintain it and provide other critical support unit equipment. In addition to the brigade sets, the Army also has a division base set planned for Southwest Asia, which would provide support equipment for aviation and other equipment, and an artillery battalion and ammunition in Norway. Of the seven established brigade sets, six are ashore and one is afloat. Three of the six ashore are in Europe; the other three are in Kuwait, Qatar, and Korea. The brigade set afloat is being placed on a fleet of ships being bought for prepositioning purposes. The eighth brigade set, approved in mid-1998 by DOD, is to be placed afloat in 2001. This brigade set will be smaller than the others and is designed to complement equipment already afloat. Table 1.1 shows the location of and major combat systems in each brigade set. Operational projects provide equipment and other items for specific missions. Prepositioned materiel for these projects includes equipment and supplies that are not usually maintained by units. For example, some projects provide petroleum distribution and water systems, aircraft landing mats, and bridges. Projects can contain a single type of materiel, such as aircraft landing mats, or hundreds of different items such as hot and cold weather clothing. Of the 15 operational projects authorized, 10 are prepositioned on ships or outside the United States. These projects are lower in priority for funding than the prepositioned brigade sets. Sustainment stocks are intended to provide consumable supplies and support troops by repairing and replacing equipment that is damaged or lost during a conflict until resupply lines are opened. They include items from almost all classes of supply, including meals, clothing, petroleum, barbed wire, ammunition, tanks, trucks, medical supplies, and repair parts. Major items such as tanks and trucks are authorized only to support operational plans for the Persian Gulf and Korea. 
Other stocks are stored afloat and on land and can be used to support any scenario. These stocks are among the lowest in priority for prepositioning funding.

The Army owns and controls reserve materiel that is excess to U.S. needs and may be turned over to allies during a crisis. This materiel is located primarily in Korea, Israel, and Thailand. The program in Korea, initiated in 1972, now includes over 550,000 short tons of ammunition and some older equipment that would normally be disposed of through foreign military sales or other means. The materiel in Thailand and Israel consists primarily of ammunition, but in much smaller amounts than that in Korea.

The Air Force prepositioning program includes bare base sets; vehicles; munitions; and a variety of consumable stocks such as rations, fuel support equipment, aircraft accessories, and medical supplies. These programs are intended to initiate and maintain flight operations until supply channels can be established. The Air Force manages its prepositioning programs regionally. According to the Air Force, the budget for operating and maintaining these programs in fiscal year 1997 was about $72 million.

The Air Force’s bare base program comprises air-transportable sets of equipment used to quickly establish or augment air bases worldwide in support of combat forces and aircraft. Each location must have minimal infrastructure, such as usable runways, taxiways, parking areas, and a source of water that can be made drinkable. Equipment in the sets includes tents for troops, latrines, kitchens, aircraft hangars, maintenance shops, generators, and environmental control systems. These sets are especially critical in austere environments, such as the Persian Gulf, where they would provide the bulk of living and working facilities at several planned operating locations. Figure 1.1 shows a bare base facility set up in Bahrain. The bare base program is authorized 109 prepositioned bare base equipment sets worldwide.
The bare base program for the Persian Gulf, called Harvest Falcon, includes 93 sets of prepositioned materiel. The Air Force said that for this program, it requires bare base facilities to house 55,000 personnel and support over 800 aircraft at 15 different locations in the Persian Gulf region. The Air Force established the number of sets in the late 1980s, and it has remained constant since then. The bare base program for Europe and Korea, called Harvest Eagle, includes 8 sets authorized for Europe and 8 for Korea. These sets are designed for more temperate climates and augment existing base facilities. Each set provides living facilities for 550 personnel.

The Air Force prepositions a wide variety of vehicles worldwide, including general purpose vehicles, such as trucks and buses, and special purpose vehicles, such as materiel-handling and fire-fighting vehicles. These vehicles, particularly special purpose vehicles, are critical to the Air Force’s ability to generate combat sorties and sustain flight operations. Requirements for the program are established based on the number of aircraft and personnel that will be deployed to each operating location. To establish the requirement, the Air Force reviews the operational plan for each location and calculates how many vehicles would be needed to support the plan. According to Air Force guidance, this requirement is then to be reduced by the number of vehicles that the Air Force can obtain from the host nation or through local purchases. Funding for the vehicles program has been a low priority in recent years, and the Air Force has been operating with some equipment purchased during the Cold War.

The Air Force prepositions a wide variety of other materiel at different locations worldwide. This materiel includes fuels; rations; medical equipment; and expendable aircraft equipment such as fuel tanks, racks, adapters, and pylons.
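The vehicle requirements process described above (a gross requirement derived from each location’s operational plan, reduced by vehicles obtainable from the host nation or through local purchase) can be sketched as follows; the function name and the example figures are hypothetical, not drawn from Air Force guidance:

```python
# Illustrative sketch of the net prepositioned-vehicle requirement for one
# operating location, per the process described above. The function name
# and all example numbers are hypothetical.
def net_vehicle_requirement(gross_requirement: int,
                            host_nation_vehicles: int,
                            local_purchase_vehicles: int) -> int:
    """Vehicles the Air Force must preposition after offsets are applied."""
    offsets = host_nation_vehicles + local_purchase_vehicles
    return max(gross_requirement - offsets, 0)

# Hypothetical location: the operational plan calls for 120 vehicles,
# with 35 available from the host nation and 10 through local purchase.
print(net_vehicle_requirement(120, 35, 10))  # -> 75
```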
The Air Force also prepositions munitions on land and on three ships, where they can provide maximum flexibility to support the two-war scenario in the Persian Gulf and Korea. Two of the ships are located in the Indian Ocean; the other is in the Mediterranean Sea. The Air Force used some of these stocks during Operation Desert Storm to support its requirements. In 1996, DOD’s Inspector General found that the Air Force munitions afloat program was well managed.

At the request of the Chairman, Subcommittee on Readiness, Senate Committee on Armed Services, we assessed the readiness of prepositioning programs. Specifically, we examined (1) the basis for program requirements and (2) the rates of inventory fill and maintenance condition of prepositioned stocks and the reliability of this readiness data. Our review included the prepositioning programs of the Army, the Navy, the Air Force, and the Marine Corps. We concentrated our efforts on the Army’s brigade set, operational projects, and sustainment programs and the Air Force’s bare base and vehicle programs because of concerns that emerged about the sufficiency, condition, and management of these programs. We describe the Navy and the Marine Corps prepositioning programs in appendix I. We gathered information on, but did not review, the programs of the Defense Logistics Agency, which manages food and bulk fuel to meet requirements of the services.

To determine the basis for program requirements, we reviewed requirements documents and processes for each program to see whether they reflected current war-fighting needs and were based on sound analysis. We discussed the validity of program requirements with officials from the services and the unified commanders and obtained the results of recent or ongoing reviews of requirements.
We reviewed the results of the Bottom-Up Review, the Quadrennial Defense Review, and recent mobility studies to determine the basis for the brigade sets, and we discussed the need for the European brigade sets with officials from the Army, Joint Staff, and U.S. European Command. For the Army’s operational projects program, we reviewed the documents authorizing each project, if available, and gathered information about the Army’s ongoing efforts to revalidate the projects. For the Army’s sustainment program, we discussed the models used by the Army to determine requirements and gathered information about the Army’s ongoing efforts to improve the inputs to these models. For the Air Force bare base and vehicle program, we reviewed requirements documents and discussed how required levels were determined with cognizant Air Force officials. We reviewed the process the Air Force uses to determine the gross requirements to support operational plans and obtained information about the Air Force’s efforts to determine what host nation support will be available at planned operating locations. To determine the rates of inventory fill, we compared inventory information from service managers to program requirements. To determine the condition of prepositioned material, we reviewed available maintenance reports used by the services to measure condition. We also examined the physical condition of stored materiel in prepositioning sites in Korea, Bahrain, Oman, Qatar, Kuwait, Italy, Belgium, Luxembourg, and the Netherlands. To selectively verify the maintenance condition reported by the services, we reviewed the maintenance records for judgmentally selected pieces of equipment, as well as summary reports, data, and maintenance plans available at the prepositioning sites we visited. We reviewed formal readiness reports from the Status of Resources and Training System, if available, to determine the readiness ratings assigned to the prepositioned stocks. 
We discussed reporting processes and data reliability with responsible officials in the services and with the unified commanders. To determine the impact of reported shortfalls and obtain a broad perspective on the readiness of prepositioned stocks, we reviewed joint monthly readiness reports provided by the services and unified commanders and recent quarterly reports to the Congress. We also interviewed officials of the Central Command, the European Command, the Pacific Command, and U.S. Forces, Korea to obtain their views regarding the sufficiency of prepositioned stocks to execute operational plans. We did not do a detailed assessment of medical stocks or munitions. We obtained information, documents, and perspectives from headquarters officials in the Office of the Secretary of Defense, the Joint Staff, and the four services. We obtained information from Army officials at the following locations: the U.S. Army Materiel Command; the U.S. Army War Reserve Support Command and its subordinate commands in the Netherlands, Korea, Qatar, and Charleston, South Carolina; the Army Materiel Support Analysis Activity; the U.S. Army, Central Command; the U.S. Army, Pacific Command; and the Eighth U.S. Army and selected subordinate commands in Korea. We obtained information from Air Force officials at the following locations: the Air Combat Command; the U.S. Air Force, Central Command; the U.S. Air Forces in Europe; and the U.S. Air Force, Pacific Command, and its subordinate command in Korea. We obtained information from Navy officials at the Naval Facilities Engineering Command, the Military Sealift Command, and the Naval Supply Command Fleet Hospital Program Organization. We also obtained information from Headquarters Marine Corps officials. 
To provide a context within which to assess the services’ programs, we used the three results-oriented management steps suggested by the Government Performance and Results Act of 1993 (GPRA) and described earlier: defining missions and identifying desired outcomes, measuring performance, and using performance information to improve processes and set goals. These steps are also suggested in our Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996). We built on our past reports and reviewed reports of the Congressional Budget Office, Congressional Research Service, and DOD and service auditors. We discussed these reports with responsible service officials but did not verify the findings of other organizations. We performed our review between September 1997 and October 1998 in accordance with generally accepted government auditing standards.

The Army has poorly defined, outdated, or otherwise questionable requirements that limited our ability to provide reliable composite readiness assessments for its three major prepositioning programs. Within the Army’s high-priority brigade set program, overall readiness was difficult to assess because of questions about European brigade set requirements. The Army’s brigade sets in Kuwait, Qatar, Korea, and afloat reflect the current two-war strategy, but Army officials have expressed a need to reevaluate the requirements for the three brigade sets in Europe. Despite concerns about overall brigade set program requirements, we were able to assess the readiness of the individual sets. The Kuwait set is currently at a high, stable level of readiness. The readiness of the afloat, Korea, and Qatar sets is improving, and despite present shortages these sets could provide a significant combat capability, if needed.
Readiness is declining in the European sets, and the Army has no immediate plans to fill equipment shortages caused by the transfer of equipment to support troops in, or returning from, Bosnia. The Army’s managers lack the critical information they need to effectively administer the operational projects and sustainment stock programs. Our readiness assessments of these programs were hindered by both requirements and inventory reporting problems. Requirements for most of the Army’s operational projects were outdated, and the Army was working to revalidate the requirements. However, revalidation dates for many projects had slipped, and requirements were still questionable for one project that had been revalidated. Inputs were being updated for one of the Army’s sustainment requirement models, but valid requirements were not expected until the spring of 1999. In addition, inventory summary reports for both the operational projects and sustainment stock programs were incomplete or unreliable. The Army has recognized these problems with its programs and has begun taking steps to correct them, but it may be several years before the problems are fully resolved and it can reliably assess the readiness of its prepositioning programs. The Army’s positioning of brigade sets in Kuwait, Qatar, Korea, and afloat reflects DOD’s current two-war strategy. However, the requirement for three brigade sets in Europe is questionable. We found wide variations in the readiness of the individual brigade sets, but each is intended to provide a fully outfitted combat brigade within a few days. The readiness of each brigade set reflects the Army’s priorities and ranges from high in Kuwait to low in Europe. The high readiness of the brigade set in Kuwait reflects its importance in the Persian Gulf region, whereas the sets in Europe are not combat ready, reflecting the Army’s low priority for those sets. 
Despite the differences in its sets, the Army has established standard issuance procedures, timelines, maintenance requirements, and readiness reporting requirements to ensure that its brigade sets are available and in good condition. It reports the readiness of these sets through the Status of Resources and Training System. Recently, the Army said that it has insufficient funds to properly maintain the seven brigade sets currently fielded and that as a result maintenance is being deferred and issue times could increase. In addition, the Army noted that it has insufficient funds to properly care for the eighth set, planned to be put afloat in 2001. We could not validate the Army’s statement due to questions about the requirements for the three sets in Europe. The brigade set in Kuwait has almost all its required equipment and spare parts on hand and maintenance condition levels are high, according to recent Army readiness reports. Thus, this set is at a high level of readiness. Three unique circumstances enhance the readiness of the Kuwait set. First, the Kuwait set is kept ready to be issued with only a few hours notice, and it is never placed in long-term storage like the other sets. Second, exercises that use approximately one-third of the brigade set equipment are conducted on an almost continual basis and result in well-defined procedures for issuing the equipment. Third, the Kuwaiti government pays for most of the costs of this set—approximately $60 million annually, according to Army officials in Kuwait. Kuwait pays for all maintenance and operation costs associated with this set, including lease costs at the storage site, repair part costs, and the salaries of over 700 contractor maintenance personnel and hundreds of other support personnel. The Kuwaiti government has also agreed to pay for extensive military construction projects. The Army pays the vast majority of costs for its other brigade sets. 
Recent exercises have confirmed the readiness of the Kuwait brigade set. For example, of the 1,700 pieces of prepositioned equipment issued to Army troops deployed to Kuwait in February 1998, only 4 pieces did not work properly, according to Army and maintenance contractor officials. And in May 1998, additional forces arriving in Kuwait unloaded their planes, drew materiel from the prepositioning site, and moved to the field within 16 hours; one unit made it in about 10 hours. During our May 1998 visit to Kuwait, unit personnel training with the brigade set told us that the equipment was in good condition when it was issued to them in February 1998 and that they had maintained 95 percent or more of the equipment in operational condition during each month of their deployment. They also said that the deployment had offered excellent, realistic training opportunities unavailable at their home bases. Figure 2.1 shows soldiers using brigade set equipment for training in Kuwait in May 1998. Until recently, the brigade set on ships lacked equipment, and some of the equipment on hand was in poor condition. However, among its combat brigade sets, the Army views the afloat set as its most important due to its ability to quickly deploy to any conflict area. The Army has been steadily filling equipment shortages and repairing equipment within this set. By the end of 1998 the Army expects the set to have over 99 percent of its principal weapons systems and critical equipment on hand. The set will still have small shortages of some support equipment, but the equipment on hand should remain in good condition. As a result, this set should report a high readiness level by the end of the year. In 1995, the Army Inspector General reported that maintenance standards had not been enforced when the brigade set equipment was initially loaded on Ready Reserve Force ships. 
We reported in 1997 that about one-quarter of the set’s reportable units were not capable of fully performing their missions according to the Army’s standards. Since then, the Army has taken equipment off the Ready Reserve Force ships; repaired, maintained, or replaced the equipment; and then loaded it on large, medium-speed, roll-on, roll-off ships specifically designed to carry prepositioned equipment. In May 1998, the last Ready Reserve Force ship containing brigade set equipment unloaded its cargo. The Army plans to repair or replace this equipment and load it, with additional parts and equipment, onto the U.S. naval ship Watson, a new large, medium-speed, roll-on, roll-off ship. According to Army managers, when the Watson is fully loaded in fall 1998, all of the afloat brigade set equipment will have completed its initial repair cycle, and most major equipment shortages will be filled. However, fill rates for repair parts are expected to remain below 60 percent and will be the largest remaining concern for this brigade set. Although the brigade set is expected to be fully capable before the end of the year, the five ships carrying the brigade set are just a portion of the Army’s afloat prepositioning program. Seven additional ships are currently in the fleet, and the Army plans to eventually use 15 ships to carry prepositioned materiel. The additional ships carry operational projects and sustainment items. The brigade set in Korea has most of its required equipment, and maintenance conditions are high, according to recent Army readiness reports. Thus, the Army reports that this set is at a high level of readiness. However, the Commander of the U.S. Forces, Korea, described the set as “not fightable” because it has materiel shortages and has never been issued and exercised. Equipment on hand has increased dramatically in the set, from 8.5 percent in August 1996 to 88 percent in January 1998. 
Since January, inventory levels have continued to climb, and critical shortages of armored vehicle-launched bridges and fuel trucks have been filled. By the summer of 1998, the Korea brigade set had about 96 percent of its principal weapons systems and critical equipment on hand. However, the set had some support equipment shortages and only had about 40 percent of its required repair parts. Because the Army accelerated filling the brigade set by about 18 months, most equipment arrived in Korea before storage facilities were completed or plans were developed for storing, issuing, and maintaining the equipment. In May 1995, the Army completed its first and largest controlled-humidity warehouse for prepositioned equipment in Korea. Two more warehouses were completed in September 1997. These three warehouses were able to hold the brigade set’s tracked vehicles. However, many of the set’s wheeled vehicles were stored outside for over a year, awaiting the scheduled completion of the final two warehouses in the fall of 1998. Storing vehicles outside results in increased maintenance costs or reduced maintenance conditions because equipment that is exposed to the elements must undergo maintenance every 6 months (versus every 4 years for equipment in controlled-humidity warehouses). When the last two warehouses are finished, maintenance personnel will move equipment from outside into the warehouses. While the Army reports the brigade set in Korea at a high state of readiness, the Commander in Chief of U.S. Forces, Korea, said that he will not consider this set ready to fight until it is as ready as the brigade set in Kuwait. In the fall of 1998, a portion of the set will be issued for the first time for an exercise called Foal Eagle. This exercise should provide the Commander in Chief some measure of the set’s capabilities and limitations as well as a measure of the time required to issue the set. 
An officer from the unit scheduled to use the equipment during Foal Eagle said that he had inspected the equipment and was pleased with the condition of the tracked equipment but was concerned about the condition of the vehicles that had been stored outside. As of August 1998, the Army had not completed the prepositioning of its brigade set in Qatar, and the set still had significant shortages of both equipment and spare parts. According to recent Army readiness reports, maintenance conditions are generally high for equipment on hand, but shortages of major equipment still exist. In addition, dead batteries in some of the on-hand equipment may delay its issuance. In January 1996, the Army began fielding the Qatar brigade set. It shipped equipment for the first battalion task force to Qatar and placed it in temporary storage facilities. In the fall of 1997, a second battalion task force was added. These two shipments provided about two-thirds of the combat capability of the brigade, but they did not include the equipment for the forward support battalion, the engineer battalion, or other support equipment. Consequently, the overall equipment fill rates for the brigade remained low. In March 1998, fill rates declined somewhat when 105 vehicles and 24 other major pieces of equipment were transferred from Qatar to Kuwait to support Operation Desert Thunder. In June 1998, the set’s overall fill rate was about 28 percent, and repair parts were filled to about 53 percent. Although the brigade set is incomplete, Army officials said it could provide some limited combat capability if needed. On hand are 700 vehicles, including 88 Abrams tanks and 98 Bradley fighting vehicles. The next major shipment of equipment, scheduled to arrive in September 1998, includes major equipment for the brigade’s forward support battalion as well as repair parts and supplies. 
According to Army officials, this shipment will increase the overall fill rate for equipment to 38 percent, and the fill rate for repair parts will increase to 69 percent. Additional equipment is scheduled to arrive as facilities are constructed to house the equipment, and the Army plans to have the entire brigade set in Qatar by September 1999. The Army is constructing facilities to store prepositioned equipment on a 262-acre site outside Doha, Qatar. This three-phase project will eventually provide 2.1 million square feet of storage facilities, consisting mainly of humidity-controlled warehouses, at a cost of $149 million. The first phase of the project, which included six warehouses and a maintenance building, was nearing completion during our visit in June 1998, and the Army expects to begin storing equipment in the warehouses by the fall of 1998. While warehouse facilities are being constructed, the on-hand equipment for the two battalion task forces is being stored in humidity-controlled bags and tunnels, which protect the equipment and slow deterioration (see fig. 2.2). The condition of the equipment stored in bags and tunnels was generally good based on our observations and review of equipment records. However, the bags and tunnels are not air-conditioned, and outside temperatures of 120 degrees and above had caused the batteries in some equipment to fail. According to an Army maintenance official in Qatar, batteries would have to be replaced in at least 75 percent of the first battalion task force equipment before the equipment could be issued. Although Army storage procedures call for the removal of batteries, officials in Qatar were leaving the batteries in the equipment and exploring maintenance alternatives because, they contended, the removal of batteries increases the amount of time necessary to issue the equipment. The Army currently has batteries in storage in Qatar to replace the batteries that have failed. 
Despite concerns about the batteries, Army leaders in Qatar said the brigade set could be issued and moved where needed within the time envisioned by current operational plans. However, these officials doubted that the equipment could be issued within the standard brigade set requirement of 4 days, given its storage conditions and the dead batteries. The Army faces a difficult set of circumstances in Europe. It has used equipment from the three brigade sets in Europe to support operations in Bosnia and other higher priority brigade sets. At the same time, it has considerable potential excesses—over 50,000 pieces of equipment for which it has no identified need anywhere in the Army. To complicate matters, the Army is trying to reduce infrastructure and personnel in Europe. By 2000, after infrastructure is reduced, the budget requirements to operate and maintain the three brigade sets in Europe will be about $65 million, of which $48 million is funded. If a major conflict breaks out in Korea or the Persian Gulf region, the European brigade sets are likely to be used much later than those sets on ships or already in the regions. The European brigade set equipment has been used extensively to support ongoing operations in Bosnia, but the equipment has never been deployed in brigade, or even company, sets. As a result, some senior officials from the Army Materiel Command favor reconfiguring the European brigade sets into stocks tailored for contingencies. U.S. European Command and other Army officials told us that these sets are important as a sign of commitment to allies in the North Atlantic Treaty Organization, but none of these officials could produce a document formalizing this commitment. In our discussions, Army logistics and operations officials told us that they need to reevaluate the requirements for the European brigade sets, but they had not begun any formal process to do so by September 1998. 
Because the Army has transferred much of its European brigade set equipment to troops in or coming from Bosnia, these sets can no longer be issued as complete brigade sets and have relatively low readiness ratings. The Army has no plans to fill equipment shortages until after the return of equipment from Bosnia, reflecting the relatively low priority given to the sets. Between the beginning of Operation Joint Endeavor in 1995 and June of 1998, the Army lent over 7,900 pieces of prepositioned equipment to units deployed to Bosnia. This equipment included Abrams tanks, Bradley fighting vehicles, and armored personnel carriers, among other items. Although this equipment was hundreds of miles away and its condition was unknown, the Army continued to report the European brigade set readiness as high because Army policy allowed lent equipment to be reported as if it were on hand and serviceable. However, in 1998, the Army changed its policy and required that equipment be transferred, not lent, to a gaining unit if the equipment was expected to be issued for more than 6 months. One of the reasons the Army changed its policy was that it was losing accountability of the equipment in Bosnia. The Army Audit Agency reported that records did not accurately show the locations and units having physical custody of lent assets valued at about $165 million. When we visited Europe in June 1998, 37 percent of the lent equipment had been returned, but about half of the equipment that was still lent out had not been properly accounted for. Army officials said they would not issue additional equipment until the status of all the lent equipment was resolved, and they expected resolution soon. The Army expects equipment shortages in the three European brigade sets to increase in the fall of 1998 because equipment is scheduled to be transferred to units in Bosnia and units returning to Germany from Bosnia. 
The returning units will receive brigade set equipment because they are leaving their equipment in Bosnia for follow-on forces deploying from the United States. The brigade set in Italy will provide much of this equipment, and its inventory levels are projected to drop dramatically by late 1998. The two central region sets are being tapped as well and are also projected to have lower inventory levels by the end of 1998. Because the Army is issuing European brigade set equipment piece by piece, the sets can have significant shortages of some critical items such as ambulances and little or no shortages of other items. In June 1998, the equipment proposed to be transferred in support of operations in Bosnia included over 16,700 items—62 tracked vehicles, 1,365 wheeled vehicles, 398 trailers, and almost 15,000 pieces of other equipment such as telephones, antennas, and tool kits. Although much of this equipment has come, and will continue to come, from the three brigade sets, some requirements for Bosnia have been met with undesignated equipment that has been located at the European prepositioning sites since the end of Operation Desert Storm and the drawdown of U.S. forces in Europe. When the Army Materiel Command became responsible for managing the Army’s prepositioned stocks in Europe beginning in 1993, much of the equipment was in poor condition. After the Gulf War and during the rapid drawdown of U.S. forces in Europe, equipment was left in poor condition and transferred “as is” by units departing Europe (see an example in fig. 2.3). A primary mission became repairing the equipment and redistributing it to the brigade sets on ships and in Kuwait, Korea, and Qatar. The maintenance condition of some equipment in Europe is still a concern today. 
For example, when we visited a prepositioning site in Italy in February 1998, recently repaired Abrams tanks were stored outside with very little protection from the weather, even though controlled-humidity storage bags were available (see fig. 2.4). Officials in Italy acknowledged that equipment may suffer significant deterioration as a result of exposure to the weather. Our analysis of their inspection data for several months between January 1997 and January 1998 supported this: inspectors found that roughly 30 percent of the equipment stored outside had problems that would limit the equipment’s ability to perform its intended missions. In addition, officials from the Combat Equipment Group-Europe told us that virtually all the prepositioning sites in Europe have deferred periodic maintenance on at least some of their equipment to give priority to maintenance and repair of equipment to be redistributed. The readiness of the Army’s operational projects and sustainment stock programs cannot be reliably measured because managers lack critical information on them, including valid requirements for equipment. In implementing GPRA, agencies must clearly define their goals and objectives and use reliable data to measure performance against those goals and objectives. Because the Army does not have validated requirements for its operational projects and sustainment programs, it does not have objective goals to measure readiness within these programs. The Army is taking steps to revalidate requirements for these programs, but it has not yet finished. Even after the Army develops valid requirements, however, unreliable and missing data concerning inventory fill rates and maintenance conditions will prevent Army managers from measuring the readiness of the operational projects and sustainment programs. 
The Army recognizes these problems and has begun taking steps to correct them, but it may be several years before it can reliably assess the readiness of these prepositioning programs. When the Army centralized management of prepositioned materiel at the Army Materiel Command, 54 worldwide operational projects were consolidated by functional purpose and mission, and the Army now has 15 operational projects. Ten of these projects have all or a portion of their stocks prepositioned on ships or at overseas locations (see table 2.1); the five remaining projects are stored at locations throughout the Continental United States. Officials at the Army Materiel Command and the Office of the Deputy Chief of Staff for Logistics have incomplete records documenting the consolidation of operational projects and could not say whether requirements were reviewed when the projects were consolidated. However, in 1995, an Army Inspector General report described the operational projects requirements as “old and now potentially invalid.” In a 1997 report, the Institute for Defense Analysis criticized the Army requirements, stating that the requirements process appeared to be designed more to build a stockpile requirement than to solve problems or minimize risks. The Institute estimated that requirements for two operational projects were overstated by $280 million. The Army was reviewing requirements for both these projects but had not finished by August 1998. Army managers are revalidating all operational projects requirements by asking proponents throughout the world to justify projects; the justifications are to be reviewed and approved or disapproved at the Department of the Army staff level. This process was to take place between July 1997 and September 1998. However, by July 31, 1998, the Army had completed revalidations for only 5 of the 15 projects, and revalidation dates had slipped for 7 of the other projects. 
Under Army regulation, commands must review their projects yearly and completely revalidate them, updating equipment lists, at least once every 5 years. Our analysis of the $10.3 billion aircraft matting project, which consists of metal landing mats used to construct airfields, indicates that the Army may not have valid requirements even after completing its project revalidations. The aircraft matting project accounts for 83 percent of the Army’s total operational projects requirement of $12.4 billion and 87 percent of the Army’s reported $10.4 billion shortage in operational projects. This was one of the five projects reported as completely revalidated in 1998. However, we question the project’s requirements for two reasons. First, most of the matting sets (160 of 230) are required to support contingencies in Europe and Africa, not major wars in Korea or the Persian Gulf. While stocks may be used to support regional contingency plans under the Army’s policies and procedures, the overarching DOD instruction requires the services to size their prepositioning programs to meet the demand of the two-war strategy. Second, officials responsible for the Army’s operational projects could not supply documentation showing that the combat commanders in Europe had analyzed or otherwise validated this requirement. Based on our analysis of the aircraft matting requirement, the Army has considerable work to do to ensure that requirements are valid, and this may take years. Although a draft Army regulation calls for operational projects to be made visible through the Total Asset Visibility database, Army managers cannot use this system to effectively oversee the operational projects program. The system’s summary reports do not capture data on the condition of operational projects stocks, the inventory data that is captured is incomplete, and requirements and shortages are quantified only in dollars or in tons. 
Thus, neither field personnel nor the Army’s central managers can use the system to manage operational projects, and they do not know what inventory is on hand or its condition. In November 1997, the Army Audit Agency reported that Army managers did not generally use the Total Asset Visibility system to manage operational projects stocks. The Agency found that at three locations, asset balances were not properly reported for stocks worth over $390 million. Furthermore, users of the system said that they could not rely on the system’s summary reports to manage operational projects because both requirements and inventory data are unreliable. Our review of system data confirmed the users’ statements: we found wide discrepancies between system data and figures provided by the Army Materiel Command. For example, in March 1998, the system showed total operational project authorizations as $1 billion, or only about 8 percent of the $12.4 billion reported by the Command. Likewise, the on-hand inventory for operational projects was only $367 million according to the system but about $1.966 billion according to the Command. Table 2.2 shows the differences between the Command’s figures and the figures in the Total Asset Visibility system. Another problem with the Total Asset Visibility summary reports is that they list shortages and on-hand quantities in terms of dollars or tons, rather than numbers of items. This type of reporting emphasizes heavy and expensive items. However, small, inexpensive items can be just as critical in a war. For example, gas masks, which are relatively light, may be just as or more critical than aircraft landing mats, which weigh over 600,000 pounds per set. Because the Total Asset Visibility system’s summary reports are suspect and the system lacks information on the condition of on-hand assets, the Army’s central managers do not have sufficient information to oversee operational projects effectively. 
One manager told us that requirements for three of the operational projects (petroleum distribution, water distribution, and collective support) are not in the system and must be obtained directly from the program managers. Army procedures require personnel at the operational projects storage locations to provide quarterly reports on the maintenance condition and fill rates of each operational project. However, at the time of our review, reporting had not yet begun. During our visits to the Army’s prepositioning sites, we examined some operational project equipment and supplies. In Belgium, for example, we saw clothing and bridging stocks transferred from a U.S. Army, Europe, facility in Kaiserslautern, Germany. Maintenance personnel at the site were sorting the stocks and repairing them, but they did not have authorization documents and did not know whether the stocks were part of a validated operational project. In Italy, we found operational project stocks such as hot and cold weather clothing, vehicles, tents, and parachutes but no recent authorization documents for them. Site personnel did not know the maintenance condition of the stocks and said they were doing no maintenance on them. In Korea, Command personnel expressed frustration at the poor reporting procedures for the operational projects program and were concerned about shortages in chemical defensive equipment within their operational projects. The Army could not provide reliable requirements for the sustainment program during our review. The Army is working to resolve requirements problems but has not yet developed valid requirements for sustainment stocks. Army managers have set a goal to have justifiable requirements for the overall sustainment program by the spring of 1999, when they submit the Army’s next budget request. Developing sustainment requirements is complicated and involves two different processes. 
Using one process and set of computer models, the Army determines requirements for ammunition and major items such as tanks and trucks. Using another process and a different set of computer models, the Army determines requirements for secondary items, including repair parts and other classes of supply. Sustainment stock requirements are further complicated because they rely on inputs from entities outside the Army’s control, namely industrial base companies and foreign host nations. If industrial base companies or host nations can produce and deliver equipment and supplies within the Army’s required timelines, the Army can reduce the amount of sustainment stocks it is required to hold. Until recently, the sustainment program received relatively little attention because it was among the Army’s lowest funding priorities. However, the Army claimed in 1997 that shortages in secondary items created a significant war-fighting risk. Concerned about this assertion, the Office of the Secretary of Defense brought in outside contractors to analyze the Army’s requirements process. In April 1997, the Institute for Defense Analysis reported that the Army’s requirements appeared to be significantly overstated because planning factors used in Army models were inappropriate, obsolete, or incorrect. It estimated that industrial base and host nation contributions were understated by almost $1 billion. In 1998, Coopers & Lybrand reported that although the Army had progressed since the Institute’s 1997 report, questions remained about model inputs and requirement offsets based on industrial base and host nation capabilities. The report also identified some shortages that were likely to be critical in the early phase of a conflict. These shortages included spare parts for the brigade sets outside Europe and medical items. 
Recent Army efforts to refine requirements have reduced the reported war reserve secondary item shortages to $1.8 billion—a significant reduction from the $3.1 billion reported in September 1996. The Army has reworked portions of the process to determine requirements for secondary items, replacing model inputs that were questioned in the two contractors’ reports. The Army is also trying to update its industrial base information, but a senior official said that response rates to industrial base surveys are still below 50 percent. The Army disagreed with the contractors’ conclusions that host nation support was understated. Army officials contend that guidance from the Office of the Secretary of Defense and the Unified Commanders does not require them to offset sustainment requirements unless a formal, signed host nation support agreement is in place. Host nation support is a known concern within both the Army and the Department of Defense. In fiscal year 1997, the U.S. Central Command identified the lack of host nation support agreements in the Persian Gulf and host nation support planning as material weaknesses under the reporting requirements of the Federal Managers’ Financial Integrity Act of 1982, as amended. The Army could not provide us with reliable data on the inventory fill and maintenance condition of sustainment stocks from its reporting systems; therefore, we could not reliably assess what stocks it had or their condition. As with its operational projects, the Army is trying to use its Total Asset Visibility system to manage sustainment stocks. It uses summary reports generated from the system, but these reports do not include all stocks on hand or provide centralized managers with information about the maintenance condition of stocks. For example, in March 1998, the system showed that for major equipment, the Army had on hand sustainment stocks worth $11,000. However, documentation from U.S. 
Forces, Korea, showed that its command alone had major equipment sustainment stocks worth almost $50 million. In addition, the Army Audit Agency recently reported that prepositioning sites in Europe had $258 million worth of major equipment that was unneeded in Europe and could be redistributed to offset reported shortages in the sustainment program. The Air Force does not have precise requirements established for its prepositioned bare base and vehicle programs. Without this foundation, it is impossible to reliably assess the impact of reported shortfalls and maintenance concerns and, thus, the overall readiness of the programs. The bare base program provides items critical in the Persian Gulf, but requirements for this program have not been thoroughly updated since the late 1980s. Because the Air Force has not assessed the infrastructure available in the region, current requirements are based on worst-case scenarios that assume the Air Force must provide virtually all of the living and operating facilities required by deploying forces and will not have any other sources of supply for housing, food, or laundry requirements. Similarly, the Air Force has not determined the number of vehicles it can obtain from host nation sources, a prerequisite for determining precise requirements. The Air Force is likely overstating requirements, since some host nation facilities and vehicles will probably be available. In addition, the Air Force is storing over 900 general purpose and specialty vehicles in Europe but has no current requirements for these vehicles to be stored there. The Air Force used bare base sets heavily during the Gulf War and has continued that use since the war; however, its efforts to reconstitute the sets have not kept pace. The Air Force reported in August 1998 that it had less than one-third of the sets it would need if a major conflict erupted in the Gulf. The Air Force and U.S. 
Central Command have expressed concern about the shortfalls they perceive in the bare base program. In the vehicle program, the Air Force does not require readiness reporting and has little comprehensive readiness data. However, the Air Force’s vehicle fleet is aging, and much of it is in poor maintenance condition. We found that significant numbers of the vehicles at major storage locations we visited were not mission capable. The Air Force recognizes that it needs to reevaluate its prepositioning strategy and improve inventory visibility and has begun a broad-based study to accomplish this. The results of this study were not available when we concluded our work, and Air Force officials told us that it will likely take several years to address the many issues facing the program. The Air Force has not precisely defined requirements for the bare base program in the Persian Gulf. Currently, the Air Force plans for a worst-case scenario in which it must provide virtually all of the facilities it will need to operate in the Persian Gulf. Current requirements were set in the late 1980s and have not changed substantially since. Some infrastructure, such as barracks and operating facilities, is available in the Persian Gulf region. According to its guidance, the Air Force is required to determine what infrastructure and resources are available at its planned operating locations, a process called base support planning. Base support plans cover virtually all functions required to support an air base. These plans are intended to provide detailed information about air base locations, including overall layout, aircraft parking plans, host nation support, available equipment, and prepositioned assets. The Air Force’s plan for addressing shortfalls in the bare base program notes that base support plans “must be completed to determine true requirements.” It set June 1997 as a target date for completion of these plans. 
However, as of August 1998, none of the 18 required base support plans had been completed, though 6 were partially completed, according to U.S. Air Force, Central Command, officials. These base support plans are essential in determining precise requirements, according to Air Force guidance. With the information from these plans, the Air Force can tailor bare base equipment to meet the needs at each location. Without them, however, planners have assumed a worst-case scenario that may provide too much or the wrong type of capability. Planning for a worst-case scenario may result in significantly overstated requirements. For example, at some planned operating locations in the Gulf, the Air Force has bought commercially available substitutes to replace some Harvest Falcon capabilities, according to program managers. In Bahrain, we saw trailers that were outfitted with showers and laundry equipment (that is, washers and dryers); these semipermanent facilities have been left in place and obviate the need for similar bare base capabilities. In addition, new housing and other facilities are being built at Prince Sultan Air Base in Saudi Arabia. Air Force officials pointed out that they did not consider base support plans a panacea for determining requirements but acknowledged the need to complete them. Officials told us that their efforts to complete base support plans in the Gulf region had been hampered by access restrictions imposed by host nations. As noted in chapter 2, host nation support is a general concern throughout the Gulf. This concern was demonstrated in fiscal year 1997 when the U.S. Central Command identified host nation support planning as a material weakness under the reporting requirements of the Federal Managers’ Financial Integrity Act. To effectively implement GPRA, a results-oriented organization should determine what its programs are intended to accomplish. 
The Air Force’s bare base program is generally intended to provide housing for personnel and equipment to support flight operations in austere locations. The Air Force mission has changed considerably since the late 1980s, and the Air Force must also consider how it will operate in the future when determining its bare base requirements and configurations. According to U.S. Central Command, the operational plan for the Persian Gulf region is soon to be revised, which may change the Air Force’s planned operating locations and thus its bare base requirements. For example, the Air Force plans to use a large new air base being built by the Qatar government. This air base will have many permanent facilities, such as barracks, shops, and hangars, that would normally be taken from bare base stocks, according to the Air Force. Moreover, the bare base sets were configured during the Cold War and do not reflect the Air Force’s emerging war-fighting approach, which involves smaller, more customized air expeditionary forces. These forces do not deploy with as many aircraft or personnel and thus may require less support equipment. When measured against the Air Force’s existing requirements, the bare base program shows significant shortages of equipment, particularly the Harvest Falcon sets designated for the Persian Gulf region. Although the number of on-hand prepositioned bare base sets has improved considerably since 1996, the Air Force has less than one-third (29 of 93) of the sets it currently says are required in the Persian Gulf region for a worst-case scenario. Generally, bare base sets are intended to be stored until needed for a major conflict; however, many have been used for contingencies and exercises, and many equipment items from these sets are being replaced or repaired. Since late 1996, the Air Force has made a concerted effort to increase the number of bare base sets in storage through implementation of its bare base “get well” plan. 
According to the Air Force, only 2 sets were available in late 1996 when the plan was established, versus 29 sets in storage today. The plan focused on improving on-hand levels through reconstitution and acquisition as well as through revision and enforcement of peacetime use policies, to include consideration of alternative means of supporting peacetime needs. The Department of Defense recently approved an additional $71 million, to be allocated over the next five years, to fix some of the bare base program’s immediate shortfalls. The Air Force estimates that it will still take roughly 9 years and cost about $223 million to rebuild the Harvest Falcon sets, assuming that peacetime use is stopped. Harvest Falcon includes three types of equipment sets: housekeeping (for example, tents, showers, and latrines), used to house and sustain Air Force personnel; industrial operations (for example, utility equipment and civil engineering shop facilities), used to create and sustain air base infrastructure; and flightline (for example, aircraft maintenance shops and hangars), used to support flight operations. In total, the Air Force projects that it needs 93 sets of housekeeping, industrial operations, and flightline equipment to be prepositioned in theater. All three segments of the Harvest Falcon program have significant shortages, as shown in table 3.1. The number of sets in storage is somewhat overstated because the Air Force considers a set complete if missing components can be airlifted to the region. For example, the Air Force has counted 16 housekeeping sets in storage as complete, even though water distribution systems—an essential housekeeping capability, particularly in the arid Gulf region—are reported to be stored in the United States. Some Air Force officials questioned whether the airlift needed to move these systems to the region would be available during the initial phases of a large-scale conflict. 
If not, bare base operations could be delayed significantly. Other critical systems such as airfield lighting systems and runway repair kits are also in short supply and would either need to be airlifted or bought locally. The U.S. Central Command has raised concerns about these shortfalls in the Joint Monthly Readiness Review, a monthly report in which the command assesses its preparedness. Central Command views bare base shortages as a predominant prepositioning concern in the theater. These concerns have been mentioned in the Quarterly Readiness Report to the Congress, a process that we reported on earlier this year. The impact of these reported shortages proved difficult to pinpoint. Although of concern to the combat commanders, these shortages are not viewed as insurmountable because alternate means for housing personnel may be available. According to Air Force and U.S. Central Command officials, shortages would likely force the Air Force to house personnel outside of bare base facilities, raising force protection concerns. Since the 1996 terrorist bombing of Khobar Towers in Dhahran, Saudi Arabia, force protection issues have been a paramount concern in the region. Without sufficient bare base equipment, airmen could be housed at host nation facilities (for example, hotels, barracks, and apartments), rather than at bare base locations where special security precautions can be provided. The Air Force and U.S. Central Command could not be specific about the impact of the shortages mainly because base support plans have not been completed to determine what infrastructure is available, and they have no plan to mitigate the impact of these shortages should a large-scale contingency arise. Although shortages also exist in the Harvest Eagle program, the Air Force does not consider these to be as severe as the shortfalls in the Harvest Falcon program. The Harvest Eagle program is authorized 16 prepositioned sets to be split equally in Korea and Europe. 
In Europe, three of eight sets are not mission capable. However, these shortages do not have a severe impact, according to the Air Force, because the European sets would likely be used much later in a major conflict than the sets located in Korea or the Persian Gulf. In Korea, all eight sets are considered mission capable. The U.S. Air Forces, Pacific, has raised concerns, however, about the advancing age of those sets and low funding. Since the Persian Gulf War, the Air Force has repeatedly used its bare base sets to support numerous contingencies and exercises in that region. The heavy use of these sets during the last few years has outpaced efforts to repair and rebuild the sets. Efforts to restrict use of the bare base assets have been frustrated by continuing activities in the region. As of August 1998, approximately 14 sets were in use at locations throughout the Gulf region. Nine of these sets are in use at Prince Sultan Air Base, where the Air Force relocated its forces following the bombing of Khobar Towers. Since the Gulf War, items have been taken from the bare base sets to support a large number of contingencies and exercises. In 1992, bare base equipment was used to support two operations—Joint Endeavor in Bosnia and Provide Comfort in Iraq. In 1996, it was used to support 22 exercises and contingencies, ranging from the Dhahran bombing to Operation Desert Strike. Certain key items, such as tents, generators, and air conditioners, have been used the most and replaced most frequently. For example, between January 1996 and April 1998, more than 3,000 tents and nearly 4,500 air conditioning units—about the number required for 27 and 30 complete housekeeping sets, respectively—were deployed from storage locations in Oman and Bahrain to locations throughout the theater. At Prince Sultan Air Base alone, approximately 3,000 air-conditioning units are currently either in use or have been designated as backup units. 
Equipment from these operations has often been returned in poor condition and has required significant repairs, according to program managers. In a recently issued report, the Air Force Inspector General noted that prepositioned equipment was generally treated as a disposable, one-time use commodity, and that user attitudes had often led to equipment abuses. The contractor conducting reconstitution of Air Force equipment in the Gulf region told us that efforts to reconstitute assets and move them into storage to meet prepositioning objectives have been frustrated by the Air Force’s continuing heavy use of these assets. Figure 3.1 shows Harvest Falcon equipment before shipment compared to similar containers of equipment returned from a deployment. Bare base equipment was originally intended to be used as temporary facilities for short durations; however, much of this equipment has been used repeatedly and for long periods of time. For example, at Prince Sultan Air Base, bare base equipment has been in use for nearly 2 years. In the fall of 1998, the Air Force is planning to move its personnel from tents into permanent buildings. According to the contractor responsible for reconstituting the assets at this location, many of the tents will be condemned. During a preliminary inspection in April 1998, the contractor estimated that over 530 tents (about 68 percent) could not be reconstituted due to dry rot and general deterioration. According to Air Force Instruction 25-101, bare base sets are to be held in reserve for war and used only as a last resort for exercises and contingencies. This instruction encourages Air Force managers to identify and use alternative sources for bare base equipment to help ensure that it will be available should a major contingency arise. The instruction further states that the use of bare base equipment should be severely limited, since extended use reduces life expectancy and these assets need to be available to support operational plans. 
Concerned about the heavy use and degraded inventories, the Commander in Chief, U.S. Central Command, wrote a February 1997 message urging that the “use of these assets move from an option of first choice for exercises and peacetime operations to an option of last resort.” The Vice Chief of Staff of the Air Force issued a similar message in December 1997 stating that “bare base assets should be reserved for major theater wars” and that “alternative sources should be used to meet peacetime requirements.” The Air Force has recently begun to explore the use of commercial sources to support future exercises in the region. In the past, this option was dismissed because it was perceived that using commercial sources would be more expensive and less responsive than using existing bare base assets. In 1997, for example, the Air Force considered using contracted equipment for an exercise called Bright Star. Officials concluded that it would be much more expensive to use commercial sources than existing bare base equipment. The Air Force calculated that it would cost approximately $1.7 million to use existing bare base equipment compared to approximately $10.5 million to obtain this equipment through a commercial source. The Air Force is currently examining the use of commercial sources to provide support for the next Bright Star exercise, which is scheduled for 2000. No agreement has been reached, but officials are considering several options that would make commercial sources more attractive. These options include purchasing or leasing equipment such as tents, latrines, and showers that could be reused to support future exercises. The Air Force has not precisely defined requirements for its prepositioned vehicle program. Requirements in the Persian Gulf do not factor in host nation support, and requirements for Europe are based on outdated Cold War plans. The Air Force recognizes that it needs to refine its requirements for the vehicle program and has been working toward this. 
By late 1997, the Air Combat Command had determined the gross number of prepositioned vehicles it believes will be needed to support a major war in the Persian Gulf region. This worst-case assessment assumes no host nation support. The Air Force has not yet determined how many vehicles would be available from host nation sources, which will offset the number of vehicles that the Air Force must supply. This is part of the base support planning process. This information would be helpful in determining what vehicle requirements could be met by host nation sources. Like the bare base program, the Air Force needs to consider changes to operational plans and the move toward smaller expeditionary force deployments because these will likely change the number of vehicles required in the prepositioning program. The Air Force also has not defined requirements for prepositioned vehicles in Europe. The current requirement in Europe is outdated and is based on Cold War plans. As a result, at one location in Europe, the Air Force is storing and maintaining over 900 vehicles that may no longer be needed or that could be used elsewhere in the Air Force. Since no major conflict is envisioned in Europe, Air Force officials do not believe they will need a large number of vehicles there. Air Force officials told us that some vehicles may be needed to augment vehicle stocks elsewhere or to help move personnel, equipment, and supplies through European air bases to potential conflict areas. However, many of these vehicles, especially general purpose vehicles such as trucks and buses, are old and some are obsolete. Figure 3.2 shows vehicles in Europe awaiting disposition decisions. In recent years, the Air Force has sought in some locations to obtain vehicles from host nation sources or to lease vehicles when possible. In Korea, for example, the U.S. Air Forces, Pacific, is relying heavily on host nation support to provide general purpose vehicles for the prepositioning program. 
Air Force managers are concerned, however, that leasing vehicles will not solve the problems within the prepositioning program, since even general purpose vehicles may not be readily available in some areas outside of the United States. This is particularly the case in the Persian Gulf, where Air Force managers are concerned that leasing vehicles could be significantly more expensive than purchasing them. Based on experience, some support by host nations is likely. During the Gulf War, allies provided thousands of general purpose vehicles for use by U.S. forces, according to the Air Force. In addition to our concerns about the requirements underpinning the program, we found that the Air Force has little reliable data with which to measure the readiness of its vehicle program. The Air Force could not tell us precisely how many vehicles it had on hand worldwide or what condition these vehicles were in, and readiness is not routinely reported. In implementing GPRA, the Air Force must have reliable data—like inventory fill and maintenance condition—with which to measure the performance of its prepositioned vehicle program. In 1996, the Air Force Inspector General reported that the Air Force did not have an accurate accounting of the prepositioned vehicles in the Persian Gulf. In June 1998, officials from the Air Combat Command conducted physical inventories to determine how many and what type of vehicles were actually on hand. One Air Force manager estimated that it might take as much as a year to manually load this information into Air Force systems. Until that time, the Air Force will be unable to accurately assess its inventory levels in the Persian Gulf. The Air Force could not provide detailed information on the condition of its prepositioned vehicles. Thus, it is difficult to assess readiness comprehensively. However, much of the Air Force’s vehicle fleet is aging and in poor condition. 
In July 1996, the Air Force Inspector General reported that prepositioned vehicles were aging and that a high number of them were not mission capable at some locations. Our examination of vehicles at storage locations we visited indicated that the condition of the vehicles is similar today. During our field visits, we found large numbers of vehicles that were not mission capable. Air Force managers noted that many vehicles are old, have surpassed the end of their projected service life, and are difficult to maintain. Furthermore, Air Force officials told us that vehicles have received a relatively low priority for funding due to a concern that considerable excesses existed throughout the Air Force after the Cold War. In the Gulf region, the Air Force’s contractor reported that 977 of the 2,414 vehicles (40 percent) at major storage locations in Oman and Bahrain were not mission capable. About 13 percent were in use. (See fig. 3.3.) These figures represent the vehicles managed by the Air Force’s contractor in Oman and Bahrain but do not represent all vehicles in the Gulf region. The Air Force’s largest storage area in the Persian Gulf region is in Thumrait, Oman. At Thumrait, the Air Force stores over 1,700 of the vehicles depicted in figure 3.3. Over 40 percent of these vehicles were not mission capable as of July 1998. Officials estimated that it would take 2 to 3 years to repair the vehicles if there are no further deployments. Most vehicles are stored outside because the site does not have covered storage facilities. This exposes vehicles to the extreme heat and blowing sand of the Omani desert. During our visit, we found numerous vehicles with heat-related damage, including damaged windshields and blown tires. Figure 3.4 shows an example of damage caused by lack of storage facilities combined with extreme desert conditions. 
Even if the vehicles were mission capable, the storage site is several hours from the nearest port, and it would likely take weeks to move the vehicles from Thumrait to operating locations throughout the region. This may defeat the basic purpose of the program, which is to locate this equipment where it can be drawn quickly when needed. When vehicles are stored in centralized storage locations like Thumrait and not at the locations where they will be used, a plan for moving the vehicles quickly to their operating locations is needed. The Air Force, however, has not developed plans for moving the vehicles it has in theater to their planned operating locations. Moving these assets to their final operating locations is likely to be chaotic and prolonged, even with a plan, according to Air Force managers. Thumrait is located in a remote area of Oman that presents considerable challenges to moving vehicles to their eventual operating locations. The site is about 4 hours from the nearest port by mountainous roads. During the 3- to 4-month monsoon season, this road can be nearly impassable, and transporting vehicles could take even longer. In the event of a major conflict, quickly moving over 1,700 vehicles from this site would pose a significant challenge and is a concern to Air Force officials. Vehicles stored in other locations in the Gulf are also in poor condition. In an open storage location near Prince Sultan Air Base in Saudi Arabia, the Air Force has stored about 840 vehicles for several years without conducting maintenance, sheltering them from the elements, or establishing accountability. Many of the vehicles were left at this location in 1995 and are in poor condition, according to Air Force officials. In mid-1998, the Air Force estimated that about 600 of these vehicles could be salvaged. The Air Force is currently working to have these vehicles repaired and moved into storage at other locations in the region. 
The cost or time required to repair these vehicles has not been fully determined. The remaining vehicles, about 240, are not salvageable and have been, or will be, discarded. In some cases, maintenance problems have hampered deploying unit operations. For example, a unit that deployed in mid-1997 to an operating location in Bahrain found that 37 of the 130 vehicles (28 percent) it was issued from prepositioned stocks were not mission capable. Some of these vehicles were critical to generating combat sorties, for example, refueling trucks and aircraft towing vehicles, and needed immediate repair before they were used. Problems ranged from damaged tires to bad brakes and other major mechanical defects. These vehicles had been reported as mission capable when issued to the deploying unit. According to the inspection report of the incident, operations were hampered while unit maintenance personnel repaired the vehicles. Heavy peacetime use of war reserve vehicles to support operations has also contributed to condition problems. According to the Air Force’s contractor in the Gulf region, Airwork Vinnell, keeping pace with the constant requests for prepositioned vehicles is extremely difficult. Representatives told us that once they repair vehicles, many of them are shipped elsewhere in the theater to support ongoing operations. They ship vehicles that are in the best working condition, leaving non-mission-capable vehicles behind at the storage locations. This frustrates their efforts to improve mission-capable rates. Also, vehicles that are returned after deployment are often in poor condition and require significant repairs before they can be restored to mission-capable status. Sometimes, vehicles are cannibalized and are returned missing significant parts, like the high-mobility multipurpose wheeled vehicles shown in figure 3.5. 
During a recent review of the prepositioning storage sites, Air Force vehicle managers noted that some vehicles had been returned to the storage locations in unrepairable condition. In the Pacific, the Air Force reports that it has over 2,500 vehicles prepositioned. Officials from the U.S. Air Forces, Pacific, told us that their vehicle program had experienced significant maintenance problems during the early 1990s but was improving due to concerted efforts throughout the theater. Vehicle storage and maintenance problems currently exist in some locations in the Pacific. Air Force officials reviewed operations at each base in the Pacific region in November 1997 and in a report of this visit cited improvements but also significant storage and maintenance problems at some locations. For example, at Osan Air Base, Korea, problems with the 350-vehicle fleet included (1) delayed maintenance on some vehicles due to lack of orders to initiate the work, (2) improper storage practices and unreported damage, and (3) heavy use of the prepositioned vehicles to augment the peacetime fleet at this location. The report also raised vehicle maintenance as a problem area at Kunsan Air Base, Korea. During our review, no maintenance contract had been secured for the site, and the lack of local, trained mechanics as well as extensive peacetime use of these vehicles were noted as negatively affecting the program. A contractor is scheduled to begin maintenance at this site starting in October 1998, according to the Air Force. In Europe, the Air Force has stored most of its war reserve vehicles at a warehouse facility in Sanem, Luxembourg. This location holds the majority of the vehicles stored in Europe and provides humidity-controlled storage. According to Air Force officials at the site, many of these vehicles were brought to Sanem from other locations in Europe. Many are in poor condition or had not been inspected when they arrived. 
As of July 1998, 523 (56 percent) of the 926 vehicles stored at this location were not mission capable or had not been inspected. Our guidance for implementing GPRA provides a framework for moving toward a results-oriented organization. The first step is to determine what an agency’s programs are intended to accomplish. For the Air Force’s prepositioning programs, this would address the strategy and requirements concerns. The second step is to measure performance, which for the Air Force would require sound data on its inventories and maintenance conditions. Only after the Air Force has taken these fundamental steps can it move on to the third step in implementing GPRA—using performance information to improve the program. In September 1997, the Air Force tasked its Logistics Management Agency to assess its prepositioning programs. This study resulted from Air Force concerns that its strategy governing its prepositioning program had not been implemented as well as concerns over the visibility of its inventory. Officials cited long-standing problems in the prepositioning program, and one program manager indicated that concerns about the Air Force’s prepositioning program had been raised as early as 1993. The Air Force formed a working group of senior program managers to conduct the study; the results were not available when we concluded our work. Air Force officials admitted that it will likely take several years to address the many issues facing the programs. To operate and maintain the services’ prepositioning programs, DOD is making a significant annual investment—more than $1 billion. Despite this investment, these programs are not being managed efficiently. The Army and the Air Force have not validated requirements for these programs and determined what they need to support DOD’s strategy to fight and win conflicts in Korea and the Persian Gulf. 
Valid requirements that reflect this strategy should be the foundation of the programs, and such requirements are imperative for DOD to objectively assess the programs. As suggested, the first step for any agency is to determine what it is trying to accomplish and its desired outcomes. Even if the Army and the Air Force had valid requirements, they could not assess the on-hand inventories of prepositioned materiel or its condition because the two services have little reliable data for some programs. Without such data, they cannot measure performance of these programs. Such measurement requires complete, accurate, and consistent data. While the Army and the Air Force report readiness on brigade sets and bare base sets, reporting on their operational projects, sustainment, and vehicle programs is limited and unreliable. Today, these combined requirements and inventory reporting problems prevent us—and DOD—from comprehensively assessing the readiness of prepositioned stocks. This is a problem because the military envisions heavy reliance on prepositioned stocks in future conflicts. Service claims that the programs are underfunded or that shortfalls affect war-fighting ability are difficult to validate. Only after fundamental requirements and reporting problems are addressed can DOD begin to reliably assess the performance of the programs. Then it can move to the third and final step in implementing GPRA—using performance information to improve organizational processes, identify gaps, and set improvement goals. The services, Joint Staff, and DOD recognize the concerns raised in this report. The update to the mobility requirements study planned to begin in 1999 provides an excellent opportunity for the services and other stakeholders to work together to determine the future of these programs. 
We recommend that the Secretary of Defense direct the Secretaries of the Army and the Air Force to reassess their prepositioning programs with the goal of establishing sound requirements based on the two-war strategy and develop reliable inventory information to measure the readiness of all programs. Specifically, we recommend that the Secretary of Defense direct the Secretary of the Army to reevaluate the requirements for European prepositioning, including whether the current brigade set configurations best meet the envisioned missions; take steps to ensure that the operational projects requirements meet operational needs and are prioritized in accordance with DOD’s current wartime strategy; complete ongoing efforts to improve the processes used to determine sustainment requirements and work with other DOD stakeholders to determine what stocks will be available from the industrial base and host nations; develop reliable reports of inventory fill and maintenance conditions for the operational projects and sustainment programs so that their readiness can be reliably measured; and dispose of unneeded stocks. We recommend that the Secretary of Defense direct the Secretary of the Air Force to determine current requirements for European prepositioning; develop precise bare base requirements by assessing the infrastructure available in the Persian Gulf region; complete efforts to determine worldwide vehicle requirements, considering what is or will be available from the host nations; develop reliable reports of inventory levels and maintenance conditions for the vehicle program so that its readiness can be reliably measured; maintain needed prepositioned vehicles in good condition; and dispose of unneeded stocks. 
To reliably assess DOD’s readiness status and evaluate its future budget requests, the Congress may wish to consider having the Secretary of Defense periodically report on (1) the progress by DOD, the Army, and the Air Force to address the recommendations made in this report and (2) the impact of any shortages that remain after requirements and reporting problems are addressed, including how DOD and the services would mitigate shortages in the event of a major conflict. In commenting on a draft of this report, DOD concurred with the report’s recommendations and agreed that Army and Air Force prepositioning programs need to be reviewed with an emphasis on validating requirements based on a two-war strategy, streamlining maintenance, and improving readiness. DOD stated that the Joint Staff and the respective services are examining many of the issues raised in this report. Specifically, the Army is (1) reviewing its prepositioning requirements for Europe to assess whether, in light of projected missions, European stocks should be configured in brigade sets; (2) refining its sustainment requirements with the intent of redistributing or disposing of any excess war reserve stocks; and (3) resolving data accuracy problems for its operational project and sustainment programs to assist in management and readiness assessments. DOD said that the Air Force plans to complete its ongoing war reserve materiel study within a year. This study is expected to verify and validate European prepositioning requirements, develop base support plans for Southwest Asia, and address vehicle requirements determination problems. DOD also said that the Air Force would redistribute or dispose of any excess vehicles identified through its reassessment of this program. DOD did not agree with our observation that the Air Force has not updated its bare base requirements since the late 1980s. 
The Air Force indicated that it has reviewed this requirement biennially in conjunction with its updating of commanders-in-chief operational plans. However, the Air Force was unable to produce documentation to show it had conducted any rigorous, methodologically sound reviews of its Persian Gulf bare base requirements. We found that the bare base requirements established in the late 1980s far exceeded the number of Air Force troops that were actually housed in bare base sets during the Persian Gulf War. Also, despite the fact that Iraq’s military is substantially smaller than it was during the Persian Gulf War, the Air Force’s bare base requirements have remained substantially unchanged since the late 1980s. In addition, base support plans that would identify available infrastructure within the region have not yet been completed. DOD’s comments appear in their entirety in appendix II. DOD also provided technical comments, which we have incorporated as appropriate.

Pursuant to a congressional request, GAO reviewed the readiness of the Department of Defense (DOD) prepositioning programs, focusing on: (1) the basis for the program requirements; and (2) the rates of inventory fill and maintenance condition of prepositioned stocks and the reliability of this readiness data.
GAO noted that: (1) the Army and Air Force have poorly defined, outdated, or otherwise questionable requirements in the major programs that GAO reviewed; (2) the Army and Air Force have reported significant shortages and poor maintenance conditions in their prepositioning programs; (3) reliable data to assess inventory fill and maintenance condition was unavailable; (4) while the services are taking steps to address the requirements and reporting problems, it may be several years before these problems are resolved and readiness can be reliably assessed; (5) the positioning of the Army's brigade sets in Kuwait, Qatar, Korea, and afloat supports the current two-war strategy; (6) the three brigade sets in Europe are in a state of flux, and the Army recognizes the need to revisit and evaluate the requirements for those sets; (7) the Kuwait set is at a high level of readiness, and the sets afloat, in Korea, and in Qatar are improving as additional equipment is added to these sets; (8) the readiness of the European sets is declining and the Army has no immediate plans to fill equipment shortages caused by the transfer of equipment to units in, or returning from, Bosnia; (9) the Army has not determined valid requirements for its operational projects and sustainment programs; (10) the Army is reviewing these programs to establish requirements; (11) until the Army establishes valid requirements and improves inventory reporting, their readiness cannot be reliably and comprehensively assessed; (12) the Air Force has not determined precise requirements for its bare base and vehicle programs; (13) in the Persian Gulf, the Air Force has not completed the detailed planning at each of its planned operating locations to determine what infrastructure and vehicles would be available to deploying forces; (14) current requirements are based on a worst-case scenario that assumes the Air Force must provide virtually all the facilities and vehicles it would need should a major war occur; 
(15) in Europe, the Air Force is storing over 900 vehicles but has no current requirements for the vehicles to be stored there; (16) in the vehicle program, the Air Force does not have reliable, comprehensive reports of inventories on hand or their maintenance condition; (17) at one location visited, GAO found that over 40 percent of the Air Force’s aging vehicles were in poor condition and would require repair before being used; and (18) until the Air Force determines requirements for these programs and improves reporting, the impact of shortfalls and poor maintenance conditions will be difficult to discern.
SSA provides financial assistance to eligible individuals through three major benefit programs: Old-Age and Survivors Insurance (OASI)—provides retirement benefits to older workers and their families and to survivors of deceased workers. Disability Insurance (DI)—provides benefits to eligible workers who have qualifying disabilities, and their eligible family members. Supplemental Security Income (SSI)—provides income for aged, blind, or disabled individuals with limited income and resources. SSA projects that the number of beneficiaries and benefit payments for the three programs will increase over the next several years. DI and SSI are the nation’s largest federal disability programs, and applications for benefits have grown significantly over the last 5 years, due in part to baby boomers reaching their disability-prone years, as well as a sustained economic downturn and high unemployment. Retirement claims have also steadily increased in recent years. Although SSA’s disability programs account for only about 23 percent of its total benefit outlays, they represent 66 percent of the administrative expenses for these three programs. Complex eligibility rules and many layers of review with multiple handoffs from one person to another make the disability programs complicated, and therefore costly, to administer. Both OASI and DI face long-term financial challenges. In 2012, SSA’s Office of the Chief Actuary projected that the DI and OASI Trust Funds would be exhausted in 2016 and 2035, respectively. If the trust funds are depleted before legislative changes are made to restore long-term solvency, the agency projects that it will be able to pay benefits only to the extent that funds are available.
In support of its mission and programs, SSA’s basic functions include maintaining earnings information, making initial eligibility determinations for program payments, making changes to beneficiaries’ accounts that affect their benefit payments, and issuing Social Security numbers. SSA has over 80,000 state and federal employees and about 1,700 facilities nationwide. Almost 182,000 people visit one of the nearly 1,300 SSA field offices daily and more than 445,000 people call the offices daily to file applications, ask questions, or update their information. Social Security numbers have become the universal identifier of choice for government agencies and are currently used for many non-Social Security purposes. SSA’s funding has been used to keep up with increases in expenses such as personnel costs, rent, and security. Over the next decade, SSA will experience management challenges in four key areas: (1) human capital, (2) disability program issues, (3) information technology, and (4) physical infrastructure. SSA’s ongoing retirement wave, coupled with a hiring freeze that has been in place since 2010, represents a significant challenge for the agency in meeting the projected growth in work demands. Although not all employees will necessarily retire when eligible, nearly 7,000 headquarters employees and more than 24,000 field employees will be retirement eligible between 2011 and 2020. The agency projects that it could lose nearly 22,500 employees, or nearly one-third of its workforce, during this time due to retirement—its primary source of attrition. The Commissioner stated in SSA’s fiscal year 2012 budget overview that as a result of attrition, some offices could become understaffed, and that without a sufficient number of skilled employees, backlogs and wait times could significantly increase and improper payments could grow.
As SSA’s workforce decreases and its workload increases, our preliminary work suggests that the agency’s strategies for preventing a loss of leadership and skills may prove insufficient for a variety of reasons. Retaining institutional knowledge and developing new leaders. SSA could face a significant loss of institutional knowledge and expertise in the coming years. An estimated 43 percent of SSA’s non-supervisory employees and 60 percent of its supervisors will be eligible to retire by 2020. Regional and district managers told us they have already lost staff experienced in handling the most complex disability cases. SSA officials and DDS managers told us that it typically takes 2 to 3 years for new employees to become fully proficient and that new hires benefit from mentoring by veteran employees. Because of budget cutbacks, SSA has also curtailed its leadership development programs, which have historically been used to establish a pipeline of future leaders. Succession planning. SSA’s most recent succession plan was issued in 2006, even though the agency has experienced significant changes since that time, including a hiring freeze and greater movement toward online services. The most recent succession plan established a target of evaluating and updating the plan by the end of 2007. Internal control standards state that management should ensure that skill needs are continually assessed and that the organization is able to obtain a workforce with those required skills to achieve organizational goals. Our prior work also indicates that leading organizations use succession planning to help prepare for an anticipated loss of leadership. SSA’s 2006 succession plan states that without sound succession planning, SSA’s loss of leadership would result in a drain on institutional knowledge and expertise at a time when workloads are growing. This loss of knowledge and expertise could result in increasing workloads, backlogs, and improper payments.
Several SSA officials told us individuals with less experience and training are beginning to assume supervisory roles and some have made poor decisions related to such things as providing reasonable accommodations to employees with disabilities. Some officials also told us that inexperienced managers are less proficient at supervising others, which leads to inefficiencies in managing increasing workloads. Forecasting workforce needs. Findings from OIG reports raise additional concerns about SSA’s ability to accurately forecast workload demands and workforce needs. These reports found methodological flaws in the workload and work year data SSA uses to formulate and execute its budget. For example, the reports concluded that the internal controls and main processes related to work sampling—which SSA uses to measure work and assign direct and indirect costs to workloads—did not ensure the completeness and reliability of data in SSA’s Cost Analysis System. The reports found that work samples were not consistently performed. Furthermore, they noted no instances of peer or management review, which could improve the accuracy of the workload data collected. SSA continues to face challenges in modernizing its disability programs, while seeking a balance between reducing initial claims and hearings backlogs and conducting oversight activities to ensure program integrity. Modernizing disability programs.
We designated federal disability programs as a high-risk area in 2003, in part because these programs emphasize medical conditions in assessing an individual’s work incapacity without adequate consideration of the work opportunities afforded by advances in medicine, technology, and job demands. Concerns have been raised that the medical listings being used lack current and relevant criteria to evaluate disability applicants’ inability to work, and that by failing to consider the role of assistive devices and workplace accommodations, SSA may be missing opportunities to help individuals with disabilities return to work. SSA has recently taken steps toward comprehensively updating the medical and labor market information that underlies its disability criteria. As of March 2013, SSA had completed comprehensive revisions of its medical criteria for 10 of the 14 adult body systems and initiated targeted reviews of certain conditions under these systems, as appropriate, according to SSA officials. SSA has recently made progress toward replacing its outdated occupational information system, including signing an interagency agreement with the Department of Labor’s Bureau of Labor Statistics to design, develop, and carry out pilot testing of an approach to collect data for an updated system. According to SSA officials, the agency still needs to determine exactly how many occupations it will include in its new system and the extent to which it might leverage aspects of the Department of Labor’s existing occupational database, the Occupational Information Network (O*NET). The agency also needs to determine the extent to which the new system will include cognitive information, according to agency officials. In addition, officials told us the agency has not yet formalized a cost estimate and lacks a research and development plan.
SSA has also taken steps to more fully consider individuals’ ability to function with medical impairments in their work or other environments, which is consistent with modern views of disability. However, SSA disagreed with our prior recommendation to conduct limited, focused studies on how to more fully consider factors such as assistive devices and workplace accommodations in its disability determinations, stating that such studies would be inconsistent with Congress’ intentions. We noted, however, that Congress has not explicitly prohibited SSA from considering these factors and we believe that conducting these studies would put SSA in a better position to thoughtfully weigh various policy options before deciding on a course of action. SSA has two initiatives to expedite cases for the most severely disabled individuals: Quick Disability Determination and Compassionate Allowances. Using predictive modeling and computer-based screening tools to screen initial applicants, the Quick Disability Determination process identifies cases where a favorable disability determination is highly likely and medical evidence is readily available, such as with certain cancers and end-stage renal disease. With Compassionate Allowances, SSA targets the most obviously disabled applicants based on available medical information and generally awards benefits if there is objective medical evidence to confirm the diagnosis and the applicant also meets SSA’s non-disability criteria. Regional commissioners told us that simplifying disability policy, such as by streamlining work incentive and work reporting rules, could also help staff better manage disability workloads. SSA is processing more initial claims annually, but claims denied at the initial level can be appealed and often result in a request for a hearing by an administrative law judge.
To reduce its hearings backlog, SSA has used strategies such as hiring additional administrative law judges and support staff, opening more hearings offices, and conducting more hearings via video conference. Our preliminary results indicate that, although SSA completed more hearing requests in fiscal year 2012 than in previous years, the agency fell short of its hearings completion target by more than 54,000 hearings, and at 321 days, the average wait time for hearings exceeded the agency’s target by 41 days. At the same time, the agency eliminated most of its oldest pending hearing requests. Ensuring disability program integrity. SSA also faces disability program integrity challenges due to budget decisions and the way it prioritizes competing workload demands such as processing initial claims. Continuing disability reviews (CDR) are periodic reviews that the agency is required to perform to verify that certain recipients still meet SSA disability rules. SSA reported that in fiscal year 2010, the agency did not conduct 1.4 million CDRs that were due for review, in part because of competing workloads. In June 2012, we also found that the number of childhood CDRs conducted fell from more than 150,000 in fiscal year 2000 to about 45,000 in fiscal year 2011 (a 70 percent decrease). During this time, the number of adult CDRs fell from 584,000 to 179,000. While CDRs help ensure that only recipients who remain eligible for benefits continue to receive them, SSA officials reported that resource constraints have made it more difficult to balance competing workloads and remain current on the millions of CDRs it is required to conduct each year.
SSA has taken steps to modernize its information technology (IT) systems to help keep pace with workload demands, but some entities have identified additional areas that could be improved. In addition, increased risk exists that sensitive information could be exposed because of internal security weaknesses. IT modernization efforts. SSA has begun to take action on several of our prior recommendations to improve the way it modernizes its IT systems. For example, in May 2012, SSA released its Capital Planning and Investment Control guide. The guide describes the roles and responsibilities of staff under the agency’s realigned IT organizational structure. SSA also issued an updated IT strategic plan that covers 2012-2016 and supports the updated agencywide strategic plan. Furthermore, SSA officials told us that they intend to revisit the IT strategic plan annually and refresh it as appropriate. Our prior work indicates that SSA has not always had an updated IT strategic plan to guide its modernization efforts. In the absence of regular updates, SSA based its IT modernization efforts on program activities that were tied to short-term budget cycles and not developed in the context of a long-term strategic plan. While we are encouraged that SSA issued an updated IT strategic plan, at present, it is too soon to assess the extent to which SSA will adhere to the plan and annual reevaluation cycle. SSA is modernizing its IT systems, in part, to support a shift toward offering more online services. However, SSA’s OIG has expressed concerns that the agency is continuing to rely on its legacy applications. Many of its programs are written in COBOL, which is one of the oldest computer programming languages and is difficult to modify and update. The OIG also noted that the agency risks losing key institutional knowledge relating to COBOL programming and its increasingly complex information systems.
According to the OIG, SSA has indicated that modernizing its legacy applications will ultimately reduce operating costs and improve service delivery. However, agency officials told us they have conducted analyses that show the costs of moving away from using COBOL currently outweigh the benefits. Accordingly, the OIG found that SSA has developed an approach to gradually reduce its reliance on COBOL for core processing of program transactions but has not yet articulated a formal strategy for converting its legacy programs to a more modern programming language. SSA officials disagree that such a strategy is needed because they consider this programming language to be sufficient for their needs and point out that it is still used by other businesses. Information security weaknesses. SSA uses and stores a great deal of sensitive information, but has been challenged to effectively protect its computer systems and networks in recent years. Our prior work states that it is essential for agencies to have information security controls that ensure sensitive information is adequately protected from inadvertent or deliberate misuse, fraudulent use, and improper disclosure, modification, or destruction. However, in fiscal year 2012, several concerns were raised about SSA’s information security program. SSA’s OIG identified weaknesses in some of the agency’s information security program components that limited SSA’s overall effectiveness in protecting the agency’s information and information systems, constituting a significant deficiency in the agency’s information security program under the Federal Information Security Management Act of 2002 (FISMA). The OIG has also noted that weaknesses in certain elements of the agency’s information security program may challenge SSA’s ability to use its IT infrastructure to support current and future workloads. 
The agency’s independent financial auditor also identified a material weakness in information systems controls over financial management statements based on several concerns, many of which have been longstanding. SSA is implementing a multi-year plan to address many of these weaknesses. However, the OIG stated that one of the underlying causes for these weaknesses is that SSA needed to strategically allocate sufficient resources to resolve or prevent high-risk security weaknesses in a more timely fashion. Though SSA officials emphasized that the information security risks identified were internal, access to or misuse of sensitive information can have a significant impact. For example, according to the OIG, in 2012, a former SSA employee was found to have used her position to provide personally identifiable information to a person outside the agency, who is accused of using the information for criminal purposes. SSA is taking steps to centralize its facilities management, which may standardize facilities decisions, but our preliminary results show that the agency lacks a proactive approach to evaluate its physical infrastructure and identify potential efficiencies. Centralizing facilities management. SSA is beginning to centralize its facilities management, but officials indicated it may lead to a trade-off between efficiency and flexibility. The agency administers its programs and services through a network of over 1,700 facilities, at an annual cost of approximately $1 billion. The agency has had a more decentralized facilities management process, but officials told us they are currently moving all facilities management under SSA’s Office of Facilities and Supply Management (OFSM). Some officials said that centralization can lead to greater efficiencies and standardization, but cautioned that there may be less flexibility and awareness of local circumstances—such as the layout of specific buildings—at the regional level. Limited facilities planning efforts.
A contractor hired by the General Services Administration (GSA) is currently working on a long-term plan for SSA’s headquarters facilities, called the Master Housing Plan. An SSA official told us that the contractor has solicited input and feedback from the agency on the draft plan. However, an SSA official told us the agency lacks a comprehensive planning effort that encompasses all of the agency’s facilities, including nearly 1,300 field offices. Efforts to reduce office space. SSA officials told us the agency is engaged in ongoing efforts to reduce the footprint of its headquarters facilities. According to an SSA official, vacant space in headquarters facilities has increased during the past few years as a result of the shrinking workforce. SSA officials told us that OFSM is analyzing the space needs of all offices in the headquarters area and will reassign space according to staffing levels and other criteria. According to an official, SSA’s efforts were motivated by several factors, including an OMB directive to make more efficient use of federal office space; the agency’s ultimate goal of terminating all commercial leasing in the headquarters area; and, to a lesser degree, reducing current vacancies in headquarters. In addition to these headquarters-focused efforts, SSA is reducing office space in the field as opportunities arise, but our preliminary work shows that it lacks a proactive plan to assess field facilities for potential space reductions. When OFSM reviews a field-based space action (e.g., lease renewal, move, renovation), an SSA official told us that the proposed action is assessed to identify if there are opportunities to reduce or otherwise change the facility’s space allocation. However, OFSM’s standards do not call for wholesale reductions or reconfigurations of existing space.
SSA has established a workgroup that is developing guidance to help identify opportunities to reduce space by co-locating certain field-based facilities, such as field offices and video-based disability hearing offices, but the workgroup’s proposals need to be reviewed and it is not yet clear if the agency will adopt them. Considerations for realigning the facilities structure. SSA has been advised to consider aligning its facilities structure with its changing methods of providing services. For example, the OIG reported in 2011 that SSA’s long-term planning efforts should assess whether the agency’s existing office structure will align with future methods of providing customer service. In 2011, the Social Security Advisory Board suggested that as SSA continues to increase electronic service delivery, it adapt its organizational structure to maximize the effectiveness of the agency’s transformation. In prior work, we have also reported that federal agencies may be able to increase efficiency and effectiveness by consolidating physical infrastructure or management functions. Several agencies—including the Internal Revenue Service, the U.S. Postal Service, and the Census Bureau—plan to or have already undertaken consolidation efforts to achieve efficiencies and save money. At the same time, SSA has long considered face-to-face interaction to be the gold standard of customer service, and an official has told us that any changes away from that model would represent a major cultural shift for the agency. SSA has begun to take advantage of opportunities to consolidate or co-locate offices in the regions. SSA regional commissioners told us that field offices have been consolidated in most of its 10 regions and several regions have co-located with the Office of Disability Adjudication and Review to provide space to hold disability hearings within field offices. These consolidations and co-locations can save money on rent and guard services.
Regional commissioners told us that a single office consolidation can save up to $3 million over a 10-year period. Despite these actions, our preliminary work indicates that SSA has not engaged in a systematic analysis of potential approaches for consolidating its facilities or realigning its facilities with the agency’s evolving service delivery model. The National Research Council recommends that federal agencies use their organizational mission to guide facilities investment decisions and then integrate these investments into their strategic planning processes. We previously reported that agencies should consider the potential costs and benefits of any widespread efforts to consolidate physical infrastructure before embarking on such an action. To support its rationale for consolidation and assess the potential impact and challenges of consolidation, we suggested that agencies consider issues such as how to fund up-front costs associated with consolidation and the effect on various stakeholders. SSA has ongoing planning efforts, but we have identified two major areas in which these efforts may fall short in addressing the long-term nature of the agency’s management challenges: (1) its planning efforts are short-term and do not adequately address emerging issues, and (2) it lacks continuity in its strategic planning leadership. Need for longer-term efforts to address emerging issues. SSA’s planning efforts, from an overall strategic plan to its service delivery plan, typically look no more than 5 years out. For example, SSA is finalizing a service delivery plan, but the draft document primarily contains detailed plans for the next 5 years and focuses on existing initiatives rather than articulating specific long-term strategies for the agency’s service delivery model.
While the draft service delivery plan acknowledges the need to assess the agency’s workforce structure, it stops short of providing a vision for how the workforce structure should best make use of expanded virtual and automated service delivery channels. The plan also states that issues such as the need to strategically develop and place self-service options and to determine whether the Internet should be the primary service delivery mechanism for certain services will need to be considered over the next 6 to 10 years, but it does not provide a specific strategy for how to resolve these issues. Further, the plan does not articulate SSA’s long-term costs and benefits for its investments, such as the specific impact that moving to online services is expected to have on backlogs and workforce needs. For many years, we have recommended that SSA develop a comprehensive service delivery plan that outlines how it will deliver quality service while managing growing work demands within a constrained budget. Similarly, our preliminary work shows that SSA’s current strategic plan largely describes the continuation, expansion, or enhancement of existing activities, rather than proposing new initiatives or broad changes to address emerging issues. One of the goals of the agency’s strategic plan is to increase the public’s use of online services, but several SSA officials and representatives of one SSA management group told us that this shift will not be sufficient to address growing service demands. For example, as discussed earlier, to meet service challenges, some officials said the agency will also need to simplify its disability policy and develop a strategy for meeting the needs of individuals who may not have access to computers at home or who may not be computer literate. At the same time, however, some SSA officials noted that the agency may need to limit the number of days per week that field offices are open to the public to contain costs.
Various groups have called on SSA to acknowledge emerging long-term issues by articulating them in a longer-term strategy. In 2011, the Social Security Advisory Board called for SSA to develop a strategy for service delivery through 2020 that will serve as the cornerstone for its IT, human capital, policy review, and organizational restructuring plans. The SSA OIG also called on SSA to prepare a longer-term vision to ensure that it has the programs, processes, staff, and infrastructure necessary to provide service in the future. Several SSA officials we spoke with told us that developing a long-term service delivery plan should be the next Commissioner’s top priority. Moreover, regional commissioners and field managers said that such a plan could help to clarify issues such as what services will be available online in the future, how these services will be implemented, how IT modernization will support service delivery, and which offices will have responsibility for different workloads. SSA prepared its last long-term agency vision—which covered a 10-year period—in 2000, motivated by many conditions which remain true today, such as increasing workloads, advances in technology, and employee retirements. Senior agency officials told us that as an agency, SSA generally views long-term planning as a secondary responsibility and is more focused on addressing short-term, tactical issues. Several officials also noted that uncertainty about budget resources has made it difficult for SSA to engage in multi-year planning. One official commented that as a result of its budget situation, the agency has been reactive and failed to consider big picture issues. Strategic planning literature notes the success of organizations that are flexible and adaptive; these organizations plan for different scenarios and consider strategic options. Need for continuity in strategic planning leadership. 
The GPRA Modernization Act of 2010 charges top agency leadership with improving agency management and performance. SSA previously had an Office of the Chief Strategic Officer, which was responsible for overseeing strategic planning. This office worked with all SSA components to prioritize initiatives that would help the agency meet its goals and determined how to link these initiatives to the agency’s budget. However, the office was dissolved in May 2008, and since that time the agency has not had an office dedicated to strategic planning. Senior officials said that SSA should dedicate a position, such as a chief strategic officer, that will report directly to the Commissioner and be solely responsible for strategic planning in order to bring sustained, focused attention to long-term management challenges. In conclusion, the challenges SSA faces will substantially affect its ability to address critical concerns in the coming years. SSA’s efforts to meet many of its management challenges have been complicated by budgetary constraints and continued uncertainty about the current and future fiscal environment. Despite these constraints, the agency will need to balance competing demands for resources—both in terms of managing day-to-day budget decisions and planning for emerging and long-term budget issues. SSA already manages a substantial and diverse workload and the demands on SSA from new retirees and individuals with disabilities will continue to grow. SSA’s new Commissioner will face wide-ranging challenges that will require a comprehensive, long-range strategy that current planning efforts do not adequately address.
In the absence of a long-term strategy for service delivery, the agency will be poorly positioned to make well-informed decisions about its critical functions, including how many and what type of employees SSA will need for its future workforce, how the agency will address disability claims backlogs while ensuring program integrity, and how the agency will more strategically use its information technology and physical infrastructure to best deliver services. Chairman Johnson, Ranking Member Becerra, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information about this testimony, please contact me at (202) 512-7215. Michael Alexander, James Bennett, Jeremy Cox, Larry Crosland, Alex Galuten, Isabella Johnson, Kristen Jones, Anjalique Lawrence, Sheila McCoy, Christie Motley, Walter Vance, Kathleen Van Gelder, and Jill Yost also made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | SSA is responsible for providing benefits and services that affect the lives of nearly every American. In calendar year 2012, SSA paid over 62 million people more than $826 billion in Social Security retirement and disability benefits and Supplemental Security Income payments. However, SSA faces increased workloads and large numbers of potential employee retirements in the long term. A new Commissioner will soon be leading the agency and will face many complicated issues confronting the agency.
In this statement, GAO discusses initial observations from its ongoing review and describes (1) key management challenges SSA faces in meeting its mission-related objectives and (2) the extent to which SSA's planning efforts address these challenges. To examine these issues, GAO reviewed relevant planning documents and reports from SSA and others as well as SSA management information and data on workload and staffing projections, applicable federal laws and regulations, and interviewed SSA headquarters and regional officials, representatives of employee groups, and other experts. This work is ongoing and GAO has no recommendations at this time. GAO plans to issue its final report in June 2013. The Social Security Administration (SSA) will experience management challenges in four key areas over the next decade. Human capital. SSA has not updated its succession plan since 2006, although the agency faces an ongoing retirement wave and hiring freeze, which will make it difficult to respond to growing workload demands. Disability program issues. SSA faces ongoing challenges incorporating a more modern concept of disability into its programs, while balancing competing needs to reduce backlogs of initial and appealed claims and ensure program integrity. Information technology (IT). SSA has made strides in modernizing its IT systems to address growing workload demands, but faces challenges with these modernization efforts--such as an ongoing need to refresh and adhere to its IT strategic plan and a continued reliance on legacy applications--and correcting internal weaknesses in information security. Physical infrastructure. SSA is moving toward centralized facilities management, but the agency lacks a proactive approach to evaluating its office structure that will identify potential efficiencies, such as consolidating offices. SSA has ongoing planning efforts, but they do not address the long-term nature of these management challenges.
For example, SSA is finalizing a service delivery plan, but it only includes detailed plans for the next 5 years and focuses on existing initiatives rather than articulating specific long-term strategies for the agency's service delivery model. Its current strategic plan also largely describes the continuation, expansion, or enhancement of ongoing activities, rather than proposing broad changes to address emerging issues. Since 2008, SSA has not had an entity or individual dedicated to strategic planning. Various groups have called on SSA to articulate a longer-term strategy, which it last did in 2000, motivated by many conditions which remain true today, such as increasing workloads, advances in technology, and employee retirements, and which will need to be addressed in the future. |
The Forest Service, created in 1905, manages about 192 million acres of land that include about one-fifth of the nation’s forest lands. The Organic Administration Act of 1897 and the Multiple Use-Sustained Yield Act of 1960 guide the management of these lands. The Forest Service is to manage its lands under the principles of multiple use and sustained yield to meet people’s diverse needs. The Congress mandated forest plans in the Forest and Rangeland Renewable Resources Planning Act of 1974, as amended by the National Forest Management Act of 1976 (NFMA). NFMA provides guidance for forest planning by delineating a procedure to be followed in developing and periodically revising or amending forest plans. Under this act and its implementing regulations, the Forest Service is to, among other things, (1) involve the public in the planning process, (2) recognize wilderness as a use of the forests, (3) maintain biological diversity, (4) monitor and assess the effects of its management practices on the lands’ productivity, and (5) ensure a sustained yield of timber. The last of the 123 forest plans covering all 155 forests in the National Forest System was approved in 1995, and the first plans, approved in the early 1980s, are due for revision. The plans identify (1) different management areas or “zones” within a forest where one or more uses will be permitted for up to 15 years and (2) requirements and limitations for protecting the environment, such as those to protect species listed as endangered or threatened under the Endangered Species Act. Forest plans are implemented by identifying, analyzing, and undertaking specific projects, which must be consistent with the requirements and limitations in the plans. In developing forest plans and reaching project-level decisions, the Forest Service must comply with the requirements of the National Environmental Policy Act (NEPA). 
NEPA and its implementing regulations specify the procedures for integrating environmental considerations into an agency’s decisionmaking. Forest plans and projects must also comply with the requirements and implementing regulations of numerous environmental statutes, including the Endangered Species Act, the Clean Water Act, and the Clean Air Act. In a 1992 report, the Office of Technology Assessment (OTA) stated that, to improve forest planning under NFMA, the Congress could require the Forest Service to specify objectives (targets) for all uses in its forest plans. However, some Forest Service officials believe that if the agency is to achieve the objectives in its forest plans, other changes may be needed to reduce the influence of many variables that affect the outcomes of its decisions. These variables include changing natural conditions, such as drought, insects and disease, and wildfires, as well as changes in annual funding for the National Forest System. They also include information and events that occur after forest plans have been approved. In addition, Forest Service policy and planning officials believe that differences among the requirements and limitations in laws and regulations can sometimes be difficult to reconcile, and that reconciliation is further complicated by the fragmentation of authority for implementing these laws and regulations among several federal agencies and the states. As we stated in our January 25, 1996, testimony, because the Forest Service’s decisionmaking process is extremely complex and the issues surrounding it are interrelated, there are no quick fixes or simple solutions. Rather, a systematic and comprehensive approach will be needed to address them. Some options that may be considered in developing such an approach may help the Forest Service to achieve the objectives in its forest plans. 
Some of these options could be implemented by the Forest Service within the existing statutory framework, while others would require changes in law. Forest plans generally take from 3 to 10 years to develop and explain how forests will be managed for 10 to 15 years. Much can change over such extended periods of time. As a result, forest plans can be outdated by the time they are approved, and schedules for implementing the plans’ objectives cannot be established for 10 to 15 years. Options that have been suggested include shortening both (1) the time required to develop the plans and (2) the periods covered by the plans to 3 to 5 years. One drawback to shortening the periods covered by forest plans may be that 3 to 5 years might not provide companies and communities dependent on Forest Service lands with enough time to plan or develop long-range investment strategies. In addition, according to some Forest Service officials, events that occur after forest plans have been approved can significantly affect the agency’s ability to provide a high degree of confidence concerning the future availability of uses on national forest lands. These events can include listing a species as endangered or threatened or designating land as habitat under the Endangered Species Act, changing timber harvesting methods in response to increased environmental restrictions, and evolving judicial interpretations of procedural requirements in environmental statutes. For example, Forest Service officials note that recent federal court decisions have required the agency to re-initiate lengthy, formal consultations on several approved forest plans because a species of salmon was listed as threatened in the Pacific Northwest and the Mexican Spotted Owl was listed as threatened in the Southwest. 
These rulings have prohibited the agency from implementing projects under these plans until the new round of consultations has been completed, even though the Forest Service believes that some of the projects would have no effect on these species. These Forest Service officials believe that the Congress should provide legislative clarification so that projects unaffected by a subsequent event would not have to be delayed by the lengthy process to amend or revise forest plans. Forest Service officials also believe that annual appropriations have not always matched the funding assumptions incorporated in forest plans. This lack of connection has occurred, in part, because some forest plans have been developed without reference to likely funding levels. Options that have been suggested include linking forest plans more closely to budgeting and including objectives for commodity and noncommodity uses at various funding levels in forest plans. According to these officials, a possible complementary statutory option would be to appropriate funds for the duration of a shortened planning period. The process currently used to reach project-level decisions for implementing forest plans may also have to be shortened. For example, preparing timber sales usually takes 3 to 8 years. One option that might shorten the time required to reach project-level decisions would be to obtain better data to use in developing forest plans. Prior GAO reports have shown that the goals and objectives in some forest plans were developed using inadequate data and inaccurate estimating techniques. Information subsequently gathered at the project level showed that certain objectives in the plans could not be met. In addition, the Forest Service established a re-engineering team, consisting primarily of regional and forest-level personnel, and tasked the team with designing a new process for conducting project-level environmental analyses. 
According to this team, the agency is currently gathering and analyzing information at the project level that should have been analyzed at the forest plan level. Gathering and analyzing information in this manner is both time-consuming and costly and can result in delayed, modified, or withdrawn projects. The re-engineering team has made several recommendations whose implementation, it believes, would produce more timely and adequate information. These recommendations include (1) identifying issues that should be analyzed and resolved in forest plans or other broader-scale studies, (2) maintaining a centralized system of comparable environmental information, and (3) eliminating redundant analyses by focusing on what is new and using existing analyses to support new decisions when possible. In addition, Forest Service officials have told us that some effects cannot be adequately determined in advance of a project-level decision because of scientific uncertainty and/or the prohibitive costs of obtaining the necessary data. Therefore, they believe that, for some projects, monitoring and evaluation could be more efficient and effective than attempting to predict the projects’ outcomes. The Forest Service is currently evaluating the findings and recommendations that the re-engineering team believes could improve timeliness and reduce costs by 10 to 15 percent initially and by 30 to 40 percent over time. The agency is also considering or testing other actions that it believes could make its project-level environmental analysis process more efficient, including improving the monitoring and evaluation of decisions. According to Forest Service officials with whom we spoke, another difficulty at both the forest plan and project levels is that the authority to implement various environmental laws and regulations is fragmented among several federal agencies and the states. 
In developing forest plans and reaching project-level decisions, the Forest Service often must consult with other federal agencies, including the Department of the Interior’s Fish and Wildlife Service, the Department of Commerce’s National Marine Fisheries Service, the Environmental Protection Agency, and/or the U.S. Army Corps of Engineers. These agencies sometimes disagree on how environmental requirements can best be met in a forest plan or project, and they have difficulty resolving their disagreements, thereby delaying decisionmaking. According to federal officials with whom we spoke, these disagreements often stem from differences in the agencies’ evaluations of environmental effects that tend to reflect the agencies’ disparate missions and responsibilities. The officials believe that, to resolve these disagreements more quickly, they would need to place greater reliance on monitoring and evaluating the effects of prior decisions to derive guidance for future decisions on similar projects. Additionally, the Forest Service and other federal agencies recently have signed various memoranda of agreement to improve coordination. However, not enough time has passed to evaluate the effects of these agreements. The Forest Service receives over 1,200 administrative appeals to project-level decisions annually by parties seeking to delay, modify, or stop projects with which they disagree. While believing that appeals and litigation are legitimate ways for the Forest Service to resolve substantive conflicts and support its NEPA policy, the re-engineering team tasked with designing a new process for project-level environmental analyses recommended amending the current law and regulations to limit such appeals to the parties who participate in the decisionmaking process and to the concerns that are raised in reaching a decision. 
By establishing participation as a condition for appealing a decision, this change might increase public participation in the Forest Service’s project-level decisionmaking process. While these options may improve the ability of the Forest Service to provide a higher degree of confidence concerning the future availability of forest uses on national forest lands, they are unlikely to resolve the increasing difficulty the Forest Service is experiencing in reconciling conflicts among competing uses. For example, in its 1992 report, OTA stated that “Congressional efforts to change the judicial review process seem to be attempts to resolve substantive issues without appearing to take sides. However, such changes are unlikely to improve forest planning or plan implementation, or reduce conflict over national forest management.” In the past, the Forest Service was able to meet the diverse needs of the American people because it could avoid, resolve, or mitigate conflicts between commodity and noncommodity uses by separating them among areas and over time. For example, while timber harvesting was forbidden in wilderness areas and was secondary to other uses, such as recreation and wildlife, in some other areas, it was the dominant use in still other areas. Alternatively, the Forest Service sometimes avoided conflicts by using the same land for different commodity and noncommodity uses, but at different times. For example, it sometimes used harvested timberlands as browsing and hiding habitat for game animals while the lands were being reforested for subsequent harvests. 
However, according to Forest Service officials, the interaction of legislation, regulation, case law, and administrative direction, coupled with growing demands for commodity and noncommodity uses on Forest Service lands and activities occurring outside forest boundaries—such as harvesting timber on state timberlands and converting private timberlands to agricultural and urban uses—have made simultaneously meeting all of these needs increasingly difficult. According to the Chief of the Forest Service, the agency has placed increasing emphasis on maintaining or restoring noncommodity uses, especially biological diversity, on national forest lands, and this emphasis has significantly affected the agency’s ability to meet the demands for commodity uses. For example, increasing amounts of national forest land are being managed primarily for conservation, as wilderness, wild and scenic rivers, and recreation. In 1964, less than 9 percent (16 million acres) of national forest land was managed for conservation. By 1994, this figure had increased to 26 percent (almost 50 million acres). Most of the federal acreage set aside for conservation purposes is located in 12 western states. For example, of the 24.5 million acres of federal land in western Washington, Oregon, and California that were available for commercial timber harvest, about 11.4 million acres, or 47 percent of these lands, have been set aside by the Congress or administratively withdrawn under the original forest plans for such uses as wilderness, wild and scenic rivers, national monuments, and recreation. These figures do not take into account additional environmental restrictions that have reduced the amount of federal land available for commodity uses.
For example, another 7.6 million acres, or 31 percent, of federal land in western Washington, Oregon, and California that were available for commercial timber harvest have been set aside or withdrawn as habitat for species that live in old-growth forests, including the threatened northern spotted owl, and for riparian reserves to protect watersheds. Limited timber harvesting and salvage are allowed in some of these areas for forest health. In total, 77 percent of the 24.5 million acres of federal land in western Washington, Oregon, and California that were available for commercial timber harvest have been set aside or withdrawn primarily for noncommodity uses. In addition, while the remaining 5.5 million acres, or 22 percent, are available for regulated harvest, minimum requirements for maintaining biological diversity under NFMA as well as air and water quality under the Clean Air and Clean Water acts, respectively, may limit the timing, location, and amount of harvesting that can occur. Moreover, harvests from these lands could be further reduced by plans to protect threatened and endangered salmon. Timber sold from Forest Service lands in the three states declined from 4.3 billion board feet in 1989 to 0.9 billion board feet in 1994, a decrease of about 80 percent. However, as we noted in an August 1994 report, many agency officials, scientists, and natural resource policy analysts believe that maintaining or restoring wildlife and their physical environment is critical to sustaining other uses on Forest Service lands. As the Forest Service noted in October 1995, demands for forest uses, both commodity and noncommodity, will increase substantially in the future. Thus, as we noted in our January 25, 1996, testimony, some Forest Service officials do not believe that the conflicts among competing uses will lessen substantially. 
As a result, some Forest Service officials have suggested that the Congress needs to provide greater guidance on how the agency is to balance competing uses. In particular, the Chief has stated that (1) the maintenance and restoration of noncommodity uses, especially biological diversity, needs to be explicitly accepted or rejected and (2) if accepted, its effects on the availability of commodity uses should be acknowledged. In summary, Mr. Chairman, I would like to offer the following observation. As indicated by the GAO products referred to in this statement, we have over the last several years looked at the Forest Service from several different perspectives and at several organizational levels. What is becoming more apparent is that, regardless of the organizational level and the perspective from which the agency is viewed, many of the issues appear to be the same. These issues include the lack of (1) adequate scientific and socioeconomic data to make necessary or desired trade-offs among various values and concerns, (2) adequate coordination within the Forest Service and among federal agencies to address issues and concerns that transcend the boundaries of ownership and jurisdiction, and (3) incentives for federal and nonfederal stakeholders to work together cooperatively to resolve their differences. We will, in the coming months, more fully evaluate these and other issues. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. | Pursuant to a congressional request, GAO provided information on the Forest Service's management of national forests, focusing on the issues related to multiple use of forest land. GAO noted that: (1) the Forest Service's decisions are affected by changing natural land conditions, funding, and new information and events; (2) laws concerning forest land use are complicated by fragmented authority between federal agencies and states; (3) the Forest Service should consider shortening the periods covered under forest plans, reducing the influence of subsequent events, improving the data on which decisions are based, increasing coordination among the Forest Service and other federal agencies, and limiting administrative appeals; (4) some Forest Service officials believe that Congress should provide guidance on how to balance competing uses of forest land; and (5) the Chief of the Forest Service believes that the maintenance and restoration of noncommodity uses should be explicitly accepted or rejected, and if accepted, the effects should be acknowledged. |
After the Peru accident, a joint investigation by the United States and Peru reviewed the circumstances of the accident and reached several conclusions. The investigation team was composed of U.S. representatives from the Departments of State and Defense, as well as Peruvian officials from Peru’s Ministries of Foreign Affairs and Defense. According to U.S. officials, Colombian President Alvaro Uribe requested the restart of the ABD program to help combat drug trafficking. After initial discussions, the United States decided to assist Colombia in restarting a revised ABD program that would be managed by the Colombian Air Force and overseen by State/INL. In addition, U.S. officials said that Colombia had existing infrastructure that facilitated the program’s restart, such as air bases and a national air command center. Prior to the restart of Colombia’s ABD program, a committee of representatives from the Departments of State, Defense, and Justice, as well as the Colombian government developed a Letter of Agreement outlining the program’s goal, safety requirements, operational procedures, and each country’s responsibilities. The Letter of Agreement, which was signed in April 2003, states that the program’s goal is to increase the Colombian government’s ability to stop aerial drug trafficking over Colombia. Colombia is required to provide personnel and interceptor aircraft, designate the areas where the program can operate in Colombia’s airspace, and manage the daily operations. The United States is primarily responsible for providing the program with surveillance aircraft, personnel, training, and funds to maintain and operate the equipment, while ensuring that the program is regularly reviewed. Finally, a U.S. Presidential determination was necessary to restart the program, and is required annually to allow U.S.
employees and its agents to assist Colombia in the use of force against civilian aircraft reasonably suspected to be primarily engaged in illicit drug trafficking. Since 2002, the United States has provided $68.4 million and surveillance aircraft for the ABD program (see table 1) and plans to provide an additional $25.9 million in fiscal year 2006. State/INL provided $57.2 million to support ARINC operations, including the U.S. personnel involved in the program and maintenance of the surveillance aircraft. State’s Narcotics Affairs Section (NAS) in the U.S. Embassy in Bogotá provided $11.2 million for training, aircraft operations, logistics support, and construction at ABD air bases. At no cost to State, the department also transferred five U.S.-owned Cessna Citation surveillance aircraft used in the prior ABD programs, which are equipped with tracking systems. Also, Defense gave Colombia two additional surveillance aircraft, which have yet to be used for surveillance. The fiscal year 2006 request for approximately $26 million for the program is planned to continue existing operations and construct additional infrastructure. U.S. and Colombian officials said they do not intend for Colombia to fully finance the program in the near future. In response to the findings of the Peru investigation and other determinations, the United States and Colombia developed new safeguards for the renewed ABD program which were implemented consistently in the interdiction missions we observed. The program also undergoes multiple evaluations to ensure that all elements of the Letter of Agreement are followed. The renewed ABD program has new safeguards in place, which were prompted by the review of the Peru accident. The investigation of the Peru incident found that crewmembers did not fully perform some procedures, neglected the safety of the mission, had limited foreign language skills, and were talking on a congested communication system.
In response, safeguards were developed to reinforce and clarify procedures, bolster safety monitoring, enhance language skills of ABD personnel, and improve communication channels. Another factor of the accident—the civilian pilot’s unawareness of the ABD program’s procedures—was addressed by teaching civilian pilots about the program. In the Letter of Agreement for the renewed program, U.S. and Colombian officials require ABD personnel to follow specific procedures summarized in a safety checklist to ensure safe operations and clarify the roles of the United States and Colombia (see app. I). According to the Peru investigative report, the aircrews involved in the accident did not fully execute the program’s procedures, particularly aircraft identification. Also, the documentation of ABD procedures had become less specific over time. The procedural checklist for the renewed program lists the specific steps that must be taken to execute an ABD mission, including visually identifying the suspicious aircraft; calling to the aircraft over the radio; and, if necessary, requesting permission from the Commander of the Colombian Air Force to fire at the aircraft. In particular, the list provides the exact wording for crewmembers to use when confirming the completion of checklist steps to avoid confusion among the participants. While at the Colombian Air Force’s ABD command center in Bogotá, we observed ABD personnel following the checklist during the start of an ABD mission. Although the program did not have U.S. personnel dedicated to monitoring safety at the time of the accident, three U.S. personnel are now responsible for safety and implementation of safeguards for each mission. The Peru investigation concluded that the interceptor crewmembers were focused on forcing aircraft to land and not on the overall safety of the mission. Under the renewed program, the three U.S. 
safety monitors are located at the Colombian Air Force’s command center in Bogotá; at JIATF-South in Key West, Florida; and onboard the surveillance aircraft. The safety monitor onboard the aircraft observes the completion of checklist procedures and alerts the U.S. monitors on the ground when each procedure is accomplished. If any of the three monitors objects to an action taken during the mission, the monitor immediately stops the mission. For example, if a monitor objects to the firing of warning shots because he is not confident that the suspicious aircraft is involved in drug trafficking, he will immediately pull all U.S. resources out of the mission, including the surveillance aircraft and U.S. personnel. However, if the problem is resolved, the mission can resume. As of June 2005, the U.S. monitors had objected to actions in ten missions, two of which were due to incorrect implementation of the checklist. The Letter of Agreement with Colombia requires all U.S. safety monitors to be fluent in Spanish and the Colombian crewmembers, besides the weapons controllers, to be proficient in English because language barriers among the aircrew members contributed to the Peru accident. The aircrews involved in the Peru accident had flown on previous operational missions together and had some foreign language skills; however, they were not proficient enough to communicate clearly during the high stress of an interception. For example, one of the crewmembers in the Peru incident obtained the suspicious aircraft’s tail number so that ground personnel could determine the owners of the aircraft. However, other participants in the mission did not understand the message, and never learned that the plane was a legitimate civilian aircraft. Under the renewed program, the U.S. safety monitors are native Spanish speakers or were formerly trained as linguists with the U.S. military. Annually, ARINC tests the U.S. 
monitors’ fluency in Spanish and the NAS tests the Colombians’ proficiency in English. Additionally, the renewed ABD program has incorporated improved communications systems and procedures to ensure the communications mishaps that contributed to the Peru accident do not recur. An ABD interception requires the coordination of many individuals, including ground personnel in the ABD command center in Bogotá and JIATF-South in Key West, Florida, and aircrews in the surveillance and interceptor aircraft. During the Peru accident, the aircrews were talking simultaneously over the same radio channels and were not able to hear each other clearly. For example, the crew of the suspicious aircraft called the closest air control tower when they saw the interceptor aircraft following them, but the control tower did not receive the call because it was too far away. Although the ABD aircrews were monitoring the tower’s radio frequency, they did not hear the call because other communication was occurring at the same time. For the new program, a satellite radio channel is dedicated to the U.S. monitors, and the Colombian crewmembers communicate on various radio channels. A satellite phone is also available if radio communication is impaired. The U.S. monitors must suspend the mission if communication among them is lost, which has happened three times. Although not a finding of the Peru investigation, the Letter of Agreement with Colombia recognizes that a community of civilian pilots unaware of the ABD program can threaten the safety of the program’s operations. Pilots who do not know how to respond to an ABD interception may put themselves and the ABD aircrews at undue risk. The pilot of the suspicious aircraft in Peru was unaware that he was being called over the radio by the ABD aircrew. 
The Letter of Agreement with Colombia requires the Colombian government to inform the public of the program’s operating procedures, which the government does through publicly posted notices and required training courses for pilots and air traffic controllers. The government also teaches the civilian pilots how to respond to an ABD interception, as well as the consequences of not complying with the Colombian Air Force’s commands. The program managers we interviewed said that the program’s operations are reviewed and evaluated regularly by the program managers and cognizant U.S. agency officials. For example:

The United States recertifies the program once a year, a process that serves as the basis for the President’s annual decision on whether to continue supporting the program. The program was first certified in May 2003 before it could restart and again in July 2004 by a team of representatives from the Departments of State, Defense, Homeland Security, Justice, and Transportation. The 2004 certification team ensured that the major components of the Letter of Agreement—operational procedures, training requirements, logistics support, and information to civilian pilots—were in place by reviewing official documents, performing interviews, and observing training and operational exercises.

The crew of the surveillance aircraft records a video of each ABD mission once the suspicious aircraft is located. Both Colombian and U.S. program managers review the video to determine if the checklist was followed. We reviewed recordings of some ABD missions and observed the implementation of the safety procedures.

U.S. and Colombian managers of the program meet every six months to plan for the future and discuss Colombia’s needs for the program. After the U.S. managers consider Colombia’s needs, they determine whether to fund the requests for such items as repaving runways at ABD operating locations. 
In addition, the Congress requires an annual report from the President on the program’s resources and procedures used to intercept drug trafficking aircraft. The report certifies that the procedures agreed to in the Letter of Agreement were followed in the interception of aircraft engaged in illegal drug trafficking during the preceding year. Our analysis of available data—the number of suspicious tracks and law enforcement activities—indicates that the ABD program’s progress to date is mixed. The Colombian Air Force surveillance aircraft pursued less than half of the almost 900 suspicious tracks identified since October 2003, and few of those pursued were located. But the number of suspicious tracks appears to have declined, although the consistency of the suspicious track data is unknown. The program’s primary objective—to safely force suspicious aircraft to land so that law enforcement authorities can gain control of them—has seldom been achieved. In addition, the location of most suspicious tracks has changed, making them more difficult to locate and intercept. State/INL does not have clear performance measures that can be used to help assess the ABD program’s progress toward eliminating aerial drug trafficking in Colombia. State/INL officials told us that they began developing measures in early 2005, but the measures do not contain benchmarks or timeframes. These same officials said they will meet with Defense shortly to review data related to these measures for the first time. Since October 2003, the Colombians pursued about 390 out of approximately 880 suspicious tracks, and located 48 of them (see fig. 1). Tracks are often difficult to locate because the relocatable over-the-horizon radar does not provide the suspicious aircraft track’s altitude or the exact location. Furthermore, according to Colombian Air Force officials, drug traffickers use hundreds of clandestine airstrips in Colombia. Often the airstrips are camouflaged and suspicious aircraft are hidden from view. 
Therefore, when surveillance aircraft do not interdict suspicious aircraft in the air, they are unlikely to find them once they land and disappear from radar. Recently, the Colombian Air Force asked the United States to permanently increase the number of flight hours (from 180 to 300 per month) for the surveillance aircraft to spend more time training and locating clandestine airstrips. According to U.S. and Colombian officials, once these airstrips are located, the ABD aircrews can focus on them when searching for a suspicious aircraft in the area. To date, based on our review of ABD documents, identification of clandestine airstrips has not assisted in locating any suspicious aircraft. Some State/INL officials are skeptical that more flying time, which would increase maintenance and fuel costs, would produce greater results for the program. The number of suspicious aircraft tracks over Colombia has apparently declined from approximately 49 to 30 a month, a decrease of about 40 percent, between the first 11 months of the program and the last 11 months (see fig. 1). According to U.S. and Colombian officials, the reduction in tracks indicates that the program is deterring drug traffickers from transporting drugs by air, and is allowing the Colombian government to meet its goal of regaining control of its airspace, but performance measures with benchmarks and timeframes linking the reduction of suspicious tracks to the overall goal have not been developed. Moreover, the accuracy of the suspicious track data over time is suspect due to the nature of the process, which relies on some subjective criteria. The process for determining whether a track is suspicious or legitimate is partly based on the judgment of JIATF-South and Colombian Air Force personnel, who screen thousands of tracks every month looking for suspicious movements. 
For example, personnel consider whether the aircraft has inexplicably deviated from its planned flight path or the aircraft’s altitude is unusually low. However, no definitive criteria exist for such factors, which can therefore be interpreted differently among operators. The National Police seldom take control of suspicious aircraft, arrest individuals suspected of drug trafficking, or seize drugs. Since October 2003, law enforcement was involved in 14 instances (see fig. 2), arresting four suspects and impounding several aircraft. However, in four of these instances, the aircraft was already on the ground. Only one ABD mission resulted in a drug seizure—0.6 metric tons (about 1,300 pounds) of cocaine. Because State/INL has not established performance measures regarding law enforcement activities, determining whether the objectives of the ABD program are being met is difficult. However, State/INL officials said they would like law enforcement authorities to more often take control of suspicious aircraft, which may contain drugs, weapons, or cash that goes unaccounted for if the military or police do not arrive. As steps on the procedural checklist, ABD personnel contact the Colombian National Police in Bogotá at the start of an interdiction mission and prior to firing at the aircraft. But, besides these calls, the Colombian Air Force usually does not involve the police in ABD missions. Further, the police face various challenges in reaching the locations where suspicious aircraft land—often in remote areas of Colombia that are not accessible by road. The police cannot travel to some locations without additional resources, including security and transportation, according to INL officials. But, according to U.S. officials, greater planning and coordination between the Colombian Air Force and National Police could enable law enforcement to more frequently take control of suspicious aircraft. 
For example, in the February 2005 ABD mission that yielded 1,300 pounds of cocaine, one armed Colombian Navy helicopter provided cover for another Navy helicopter to land and seize as much cocaine from the aircraft as it could carry, while the Colombian Army and National Police arrived later and confiscated the remaining cocaine. In recent months, the majority of suspicious tracks have been concentrated along Colombia’s borders (see fig. 3), making it more difficult for surveillance and interceptor aircraft to reach the aircraft before it lands or leaves Colombian airspace. Once a suspicious aircraft lands, surveillance aircraft often have difficulty locating it. From November 2003 to July 2005, 141 suspicious tracks were not pursued by surveillance aircraft because they were near borders or too far away. In particular, suspicious tracks along Colombia’s southeastern border with Brazil and Venezuela are at least an hour’s flight from the closest ABD air base in Apiay, leaving surveillance and interceptor aircraft little time to find and intercept a suspicious aircraft before having to refuel (see fig. 4). For example, one ABD interceptor aircraft, the A-37B Dragonfly, takes 50 minutes to fly to Caruru (about 250 miles) in southeastern Colombia, but it can remain in that area for only 10 minutes before leaving to refuel. ABD program managers and safety monitors said it usually takes about 25 minutes to complete an ABD mission when all resources are in place, from visually locating the aircraft to forcing it to land. However, some ABD missions have lasted as long as six or seven hours and utilized multiple surveillance and interceptor aircraft. The United States agreed to help Colombia restart the ABD program at the request of Colombian President Alvaro Uribe. U.S. funding provided for the program through fiscal year 2005 totals over $68 million; almost $26 million is proposed for 2006. We found the results of the ABD program mixed. A primary U.S. 
concern regarding the program—the safety of the ABD aircrews and innocent civilians—is addressed in detail in the Letter of Agreement between the United States and Colombia. However, the agreement does not include performance measures with benchmarks and timeframes to help assess results. Although State has begun developing such measures, assessing the program’s effectiveness and determining whether additional resources might contribute to increased results is difficult. Although the number of suspicious aircraft tracks has apparently declined—from about 49 to 30 per month—the Colombian Air Force seldom locates the suspicious aircraft. Moreover, drug traffickers do not face a great risk of being arrested even if they are detected. Out of about 390 suspicious tracks pursued since the start of the program, the Colombian Air Force located 48; law enforcement or military authorities went to the scene of 14, including four that were already on the ground. Besides calling the Colombian National Police during a mission, as required by the procedural checklist, the Colombian Air Force rarely involves the police. Without sufficient notice, the police cannot respond quickly and safely to suspicious aircraft on the ground because much of Colombia is not controlled by the government. Moreover, many of the suspicious aircraft tracks detected in recent months are along Colombia’s border regions with Brazil and Venezuela, where ABD aircraft are often too far away to threaten the trafficking mission. First, to help in assessing whether the ABD program is making progress toward meeting its overall goal of reducing illegal drug trafficking in Colombia’s airspace, we recommend that the Secretary of State work with Colombia to define performance measures with benchmarks and timeframes. These performance measures, as well as results, should be included in the annual report to the Congress regarding the ABD program. 
Second, because the police are seldom involved in ABD missions, we recommend that the Secretary of State encourage Colombia to seek ways to more actively involve the National Police. Finally, because many of the suspicious aircraft tracks are difficult for Colombia to locate and interdict given the current location of its ABD air bases, we recommend that the Secretary of State encourage Colombia to establish ABD air bases closer to the current activity of suspicious aircraft tracks. State provided written comments on a draft of this report (see appendix III). Overall, it found the report to be an accurate assessment of the intent and execution of the program and noted that it is developing benchmarks and timeframes for its performance measures. Defense did not provide written comments. However, neither department commented on our recommendations. In addition, State and Defense officials provided technical comments and updates that we incorporated throughout the report, as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees and the Secretaries of State and Defense. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me at (202) 512-4268 or FordJ@gao.gov. Key contributors to this report were Al Huntington, Hynek Kalkus, Summer Pachman, and Kerry Lipsitz. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
An ABD mission begins when relocatable over-the-horizon radar or other intelligence sources provide information about a suspicious flight to personnel at the ABD command center, who then consider various questions to determine whether the track is involved in illegal drug trafficking activities. Once a track is determined to be suspicious (checklist step 1), a surveillance aircraft, with a crew of Colombian Air Force personnel and one U.S. government contractor, tries to visually locate it (step 5). The length of time it takes to find the suspicious aircraft can vary. Once located, the ABD aircrews try to determine its identity and order it to land in English and Spanish over the radio or through the use of visual signals (step 7). If the suspicious aircraft continually fails to respond, an ABD aircrew member gives a verbal warning that deadly force will be used (step 9). An interceptor aircraft called to the scene, with a Colombian Air Force crew, fires warning shots and direct shots if the suspect fails to follow orders (steps 10 and 14). All use of force by the interceptor is pre-approved by the Commander of the Colombian Air Force.

Abbreviations: TOI-Track of interest; COL-Colombia; CNP-Colombian National Police; I-Initiates action; N-Noted, but no action required; RR-Response required; SAR-Search and rescue; UAS-Unidentified assumed suspect; USG-U.S. government.

To review the changes made to the ABD program to address safety concerns and to determine whether these safeguards were followed, we reviewed ABD program documentation provided by the Department of State’s Bureau for International Narcotics and Law Enforcement Affairs (State/INL) and the Narcotics Affairs Section (NAS) at the U.S. Embassy, Bogotá, and the Department of Defense’s Joint Interagency Task Force-South (JIATF-South). We also discussed the new safeguards with knowledgeable U.S. 
officials at State and Defense in Washington, D.C.; Bogotá, Colombia; and Key West, Florida; officials from the Colombian government and Colombian Air Force in Bogotá; and ARINC contractors located in Bogotá and an ABD air base in Apiay, Colombia. We also interviewed officials involved in the negotiation of the agreement between the United States and Colombia prior to restarting the program. To evaluate the program’s progress in attaining U.S. and Colombian objectives, we examined data regarding law enforcement activities; suspicious tracks; aircraft pursued, located, and fired at. This data was documented by NAS in reports prepared monthly for State/INL containing the number of missions flown that month and their results. However, complete data for August and September 2003 was unavailable, and the data for October 2003 to March 2004 did not include the number of tracks pursued and identified. By using an event log prepared by NAS with narratives of ABD missions beginning in August 2003, we extrapolated some of the data and corroborated all law enforcement activities and aircraft fired on. We also compared the number of suspicious tracks recorded by NAS with JIATF-South’s count. The numbers reported were not the same for most months. We used the NAS count of suspicious tracks because it is more inclusive of tracks detected by Colombia and does not include tracks that were later determined to be friendly aircraft. We interviewed cognizant officials at State/INL, NAS, and JIATF-South about the data. Further, we discussed the numbers of suspicious tracks with the Colombian Air Force. Based on our ability to corroborate most of the data with multiple sources, we determined that it was sufficiently reliable for our purposes. We traveled to Colombia in April 2005 and met with NAS officials and other cognizant U.S. Embassy, Bogotá, officials, and with ARINC managers at their offices at the Government of Colombia’s Ministry of Defense in Bogotá. 
We also visited an ABD air base in Apiay where we met with ARINC contractors serving as monitors onboard the surveillance aircraft. Additionally, we visited the Colombian Air Force’s ABD command center in Bogotá where we interviewed both ARINC contractors and Colombian Air Force officials. We witnessed the start of an ABD mission at the command center and reviewed video recordings of previous ABD missions at the U.S. Embassy.

The following are GAO’s comments on the Department of State’s letter dated August 23, 2005.

1. State officials provided us draft performance measures for the ABD program that they told us were developed in April 2005 with representatives from Defense and the Colombian government.
2. We recognized the U.S. role of program oversight and the Colombian government’s role of operating and managing the program throughout the report.
3. We made no substantive changes. This section addressed only the initial identification of a suspect track. Our analysis found that three of the six criteria used to determine if a track is suspicious were subjective. The checklist and other steps taken to further identify suspect aircraft are discussed elsewhere in the report.
4. We gave examples of the criteria used to determine if an aircraft is suspected of drug trafficking.
5. A discussion of U.S. government intelligence support to the ABD program was not within the scope of our report.

In the 1990s, the United States operated a program in Colombia and Peru called Air Bridge Denial (ABD). The ABD program targeted drug traffickers that transport illicit drugs through the air by forcing down suspicious aircraft, using lethal force if necessary. The program was suspended in April 2001 when a legitimate civilian aircraft was shot down in Peru and two U.S. citizens were killed. The program was restarted in Colombia in August 2003 after additional safeguards were established. 
To date, the United States has provided about $68 million in support and plans to provide about $26 million in fiscal year 2006. We examined whether the ABD program's new safeguards were being implemented and its progress in attaining U.S. and Colombian objectives. The United States and Colombia developed additional safeguards for the renewed ABD program to avoid the problems that led to the accidental shoot down in Peru. The safety measures aim to reinforce and clarify procedures, bolster safety monitoring, enhance language skills of ABD personnel, and improve communication channels. We found the safeguards were being implemented by the Colombians and U.S. safety monitors. In addition, the program managers perform periodic reviews and evaluations, including an annual recertification of the program, and have made efforts to improve civilian pilots' awareness of the ABD program's procedures. Our analysis of available data indicates that the ABD program's results are mixed, but the program's progress cannot be readily assessed because performance measures with benchmarks and timeframes do not exist. The stated objective for the program--for the Colombian National Police to take control of suspicious aircraft--seldom happens. During October 2003 through July 2005, the Colombian Air Force located only 48 aircraft out of about 390 suspicious tracks pursued; and the military or police took control of just 14 aircraft--four were already on the ground. Only one resulted in a drug seizure. However, many of the suspicious aircraft land in remote locations controlled by insurgent groups that require time to enter safely. Yet, the air force rarely involves the police besides calling them at the start of a mission and before firing at the suspicious aircraft. In addition, many of the suspicious tracks are near border areas with Brazil and Venezuela, which is too far from an ABD air base for aircraft to intercept without refueling. 
Nevertheless, the number of suspicious tracks has apparently declined from 49 to 30 per month, but the track counts may not be consistent over time because they are based on subjective criteria, such as whether an aircraft has inexplicably deviated from its planned flight path. According to U.S. and Colombian officials, the reduction in suspicious tracks indicates that Colombia is deterring traffickers and regaining control of its airspace.
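The percentages implied by the counts in this summary can be verified with simple arithmetic. The following sketch only reproduces figures already reported above; it introduces no new data.

```python
# Back-of-the-envelope check of the percentages cited in this report.
tracks_early, tracks_late = 49, 30      # suspicious tracks per month, first vs. last period
decline = (tracks_early - tracks_late) / tracks_early
print(f"monthly-track decline: {decline:.0%}")   # roughly the "about 40 percent" cited

pursued, located, reached = 390, 48, 14  # reported totals, Oct 2003 - Jul 2005
print(f"pursued tracks located: {located / pursued:.0%}")
print(f"located aircraft reached by authorities: {reached / located:.0%}")
```

The decline works out to about 39 percent, consistent with the report's "about 40 percent"; only about 12 percent of pursued tracks were located, and authorities reached the scene for about 29 percent of those.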
VHA provides a broad range of primary and specialized health care, as well as related medical and social support services through a network of more than 1,200 medical facilities. In carrying out its responsibilities, VHA uses “miscellaneous obligations” to obligate (or administratively reserve) estimated funds against appropriations for the procurement of a variety of goods and services when specific quantities and time frames are uncertain. According to VA policy, miscellaneous obligations can be used to record estimated obligations to facilitate the procurement of goods and services, such as fee-based medical and nursing services and beneficiary travel. In fiscal year 2007, VHA recorded over $6.9 billion of miscellaneous obligations for the procurement of mission-related goods and services. According to VHA fiscal year 2007 data, almost $3.8 billion (55.1 percent) of VHA’s miscellaneous obligations was for fee-based medical services and another $1.4 billion (20.4 percent) was for drugs and medicines. The remainder funded, among other things, state homes for the care of disabled veterans, transportation of veterans to and from medical centers for treatment, and logistical support and facility maintenance for VHA medical centers nationwide. In September 2008, we reported that VA policies and procedures were not designed to provide adequate controls over the authorization and use of miscellaneous obligations with respect to (1) oversight by contracting officials, (2) segregation of duties, and (3) supporting documentation for the obligation of funds. Collectively, these flaws increased the risk of fraud, waste, and abuse. Our case studies at three medical centers showed, for example, that VA did not have procedures in place to document any review by contracting officials, and none of the 42 obligations we reviewed had such documented approval. 
Effective oversight and review by trained, qualified officials is a key factor in helping to ensure that funds are used for their intended purposes. Without control procedures to help ensure that contracting personnel review and approve miscellaneous obligations prior to their creation, VHA is at risk that procurements do not have the necessary safeguards. In addition, our analysis of VA data identified 145 miscellaneous obligations, amounting to over $30.2 million, that appeared to have been used in the procurement of such items as passenger vehicles; furniture and fixtures; office equipment; and medical, dental and scientific equipment. VA officials told us, however, that the acquisition of such assets should be done by contracting rather than through miscellaneous obligations. Our 2008 report also cited inadequate segregation of duties. Federal internal control standards provide that for an effectively designed control system, key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. These controls should include separating the responsibilities for authorizing transactions, processing and recording them, reviewing the transactions, and accepting any acquired assets. In 30 of the 42 obligations reviewed, one official performed two or more of the following functions: requesting, approving, or recording the miscellaneous obligation of funds, or certifying delivery of goods and services and approving payment. In two instances involving employee grievance settlements, one official performed all four of these functions. In 2007, the VA OIG noted a similar problem in its review of alleged mismanagement of funds at the VA Boston Healthcare System. For example, according to OIG officials, they obtained documents showing that a miscellaneous obligation was used to obligate $200,000. This miscellaneous obligation was requested, approved, and obligated by one fiscal official. 
The OIG concluded that the Chief of the Purchasing and Contracting Section and four other contracting officers executed contract modifications outside the scope of original contracts, and that the Chief of the Fiscal Service allowed the obligation of $5.4 million in expired funds. In response to the OIG recommendations, VA officials notified contracting officers that the practice of placing money on a miscellaneous obligation for use in a subsequent fiscal year to fund new work was a violation of appropriations law, and that money could no longer be “banked” on a miscellaneous obligation absent a contract to back it up. Similarly, an independent public accountant’s July 2007 report found, among other things, that the segregation of duties for VA’s miscellaneous obligation process was inadequate. Without the proper segregation of duties, the risk of errors, improper transactions, and fraud increases. Our 2008 case studies also identified a lack of adequate supporting documentation at the three medical centers we visited. Specifically, VA policies and procedures were not sufficiently detailed to require the type of information needed—such as purpose, vendor, and contract number—that would provide crucial supporting documentation for the obligation. In 8 of 42 instances, we could not determine the nature, timing, or the extent of the goods or services being procured from the description in the purpose field. As a result, we could not confirm that the miscellaneous obligations were for bona fide needs or that the invoices reflected a legitimate use of federal funds. Our report concluded that without basic controls in place over billions of dollars in miscellaneous obligations, VA is at significant risk of fraud, waste, and abuse. In the absence of effectively designed key funds and acquisition controls, VA has limited assurance that its use of miscellaneous obligations is kept to a minimum, for bona fide needs, and in the correct amounts. 
We made four recommendations concerning review by contracting officials, segregation of duties, supporting documentation, and oversight mechanisms, aimed at reducing the risks associated with the use of miscellaneous obligations. In response to our recommendations, in January 2009, VA issued Volume II, Chapter 6, of VA Financial Policies and Procedures—Miscellaneous Obligations, which outlines detailed policies and procedures aimed at addressing the control deficiencies identified in our September 2008 report. Key aspects of the policies and procedures VA developed in response to our four recommendations included: Review of miscellaneous obligations by contracting officials—The request and approval of miscellaneous obligations are to be reviewed by contracting officials, and the contracting reviews are to be documented. Segregation of duties—No one official is to perform more than one of the following key functions: requesting the miscellaneous obligation; approving the miscellaneous obligation; recording the obligation of funds; or certifying the delivery of goods and services or approving payment. Supporting documentation for miscellaneous obligations—New procedures require completing the purpose, vendor, and contract number fields before processing obligation transactions, including specific references, the period of performance, and the vendor name and address. Oversight mechanism to ensure control policies and procedures are fully and effectively implemented—Each facility is now responsible for performing independent quarterly oversight reviews of the authorization and use of miscellaneous obligations. Further, the results of the independent reviews are to be documented and recommendations tracked by facility officials. The policies and procedures also note that the Office of Financial Policy is to conduct quarterly reviews of VA miscellaneous obligation usage to ensure compliance with the new requirements. 
As part of its fiscal year 2009 review activities, VA’s Office of Business Oversight (OBO) Management Quality Assurance Service (MQAS) evaluated VA compliance with new VA policies and procedures concerning the use of miscellaneous obligations—Financial Policies and Procedures, Volume II, Chapter 6, Miscellaneous Obligations. According to its executive summary report, the MQAS reviewed 476 miscellaneous obligations at 39 different medical centers, health care systems, and regional offices in fiscal year 2009. The MQAS found 379 instances of noncompliance with the new policies and procedures. Examples include: Inadequate oversight of miscellaneous obligations by contracting officials—Many miscellaneous obligations were not submitted for the required approval by the Head of Contracting Activity. Further, some miscellaneous obligations were used for invalid purposes, including employee tuition, utilities, general post, lab tests, and blood products. Segregation of duties—Many miscellaneous obligations had inadequate segregation of duties concerning requesting, approving, and recording miscellaneous obligations, and certifying receipt of goods and services and approving payment. For example, the MQAS identified 48 instances where two individuals performed all four of these functions. Supporting documentation for miscellaneous obligations—Some miscellaneous obligations also lacked adequate supporting documentation concerning the vendor name, performance period, and contract number. These noncompliance issues were similar to those we identified in our September 2008 report on VHA miscellaneous obligations. Overall, MQAS found that there was a lack of timely dissemination of the new miscellaneous obligation policy, and issued 34 recommendations to VA facility officials. 
Fiscal year 2010 facility-level recommendations included the need to develop standard operating procedures for implementing the policy, to provide training for new accounting personnel, to require documentation establishing segregation of duties, and to institute facility-level quarterly reviews. According to the MQAS Associate Director, VHA facilities are in the process of taking corrective actions to address the MQAS recommendations. In November 2009, we reported that VA had three long-outstanding material weaknesses in internal control over financial reporting identified during VA’s annual financial audits. Financial management oversight—reported as a material weakness since fiscal year 2005. This issue was also identified as a significant deficiency in fiscal years 2000 through 2004. This weakness stemmed from a variety of control deficiencies, including the recording of financial data without sufficient review and monitoring, a lack of sufficient human resources with the appropriate skills, and a lack of capacity to effectively process a significant volume of transactions. Financial management system functionality—reported since fiscal year 2000—is linked to VA’s outdated legacy financial systems affecting VA’s ability to prepare, process, and analyze financial information that is timely, reliable, and consistent. Legacy system deficiencies necessitated significant manual processing of financial data and a large number of adjustments to the balances in the system. IT security controls—also reported since fiscal year 2000—resulted from the lack of effective implementation and enforcement of an agencywide information security program. Security weaknesses were identified in the areas of access control, segregation of duties, change control, and service continuity. 
We also found that while VA had corrective action plans in place intended to result in near-term remediation of its significant deficiencies, many corrective action plans did not contain the detail needed to provide VA officials with assurance that the plans could be effectively implemented on schedule. Eight of the 13 plans we reviewed lacked key information regarding milestones for completion of specific action steps and/or validation activities. Consequently, VA managers could not readily identify and address slippage in remediation activities, exposing VA to continued risk of errors in financial information and reporting. VA recognized the need to better oversee and coordinate agencywide oversight activities for financial reporting material weaknesses, and began to staff a new office responsible, in part, for assisting VA and the three administrations and staff offices in executing and monitoring corrective action plans. Our report concluded that actions to provide a rigorous framework for the design and oversight of corrective action plans will be essential to ensuring the timely remediation of VA’s internal control weaknesses, and that continued support from senior VA officials and administration CFOs would be critical to ensure that key corrective actions are developed and implemented on schedule. We made three recommendations to help improve corrective action plan development and oversight. VA concurred with the recommendations and said that it took some actions to address them, including developing a manual with guidance on corrective action planning and monitoring, creating a corrective action plan repository, and establishing a Senior Assessment Team of senior VA officials as the coordinating body for corrective action planning, monitoring, reporting, and validation of deficiencies identified during financial audits. 
VA’s independent auditor’s fiscal year 2009 financial audit report included the three material weaknesses that have been reported as deficiencies since 2000. It also included a new material weakness concerning compensation, pension, and burial liabilities. Furthermore, VA’s reporting indicated that remediation timetables for the previously reported material weaknesses appeared to be slipping. In the fiscal year 2009 Performance and Accountability Report, VA officials noted that in fiscal year 2009 they had closed 10 of the underlying significant deficiencies reported in fiscal year 2008, but that their timetables for remediating the IT security controls and financial management oversight material weaknesses had slipped to 2010 and 2012, respectively. In addition, milestones for remediating the new material weakness—compensation, pension, and burial liabilities—had yet to be determined. According to the independent auditor, the causes for the fiscal year 2009 material weaknesses related to challenges in implementing security policies and procedures, a lack of sufficient personnel with the appropriate knowledge and skills, a significant volume of transactions, and decentralization. These findings are consistent with those we identified in our 2009 report and are all long-standing issues at VA. The auditor noted that VA did not consistently monitor, identify, and detect control deficiencies. The auditor recommended that VA assess the resource and control challenges associated with operating a highly decentralized accounting function, and develop an immediate interim review and monitoring plan to detect and resolve deficiencies. 
In summary, while we have not independently validated the status of VA’s actions to address our 2008 and 2009 reports’ findings concerning VA’s controls over miscellaneous obligations and financial reporting, VA’s recent inspections and financial audit report indicate that the serious, long-standing deficiencies we identified are continuing. Effective remediation will require well-designed plans and diligent and focused oversight by senior VA officials. Further, the extent to which such serious weaknesses continue raises questions concerning whether VA management has established an appropriate “tone at the top” necessary to ensure that these matters receive the full, sustained attention needed to bring about their full and effective resolution. Until VA’s management fully addresses our previous recommendations, VA will continue to be at risk of improper payments, waste, and mismanagement. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other members of the committee may have at this time. For further information about this testimony, please contact Susan Ragland, Director, Financial Management and Assurance at (202) 512-9095, or raglands@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Major contributors to this testimony included Glenn Slocum, Assistant Director; Richard Cambosos; Debra Cottrell; Daniel Egan; Patrick Frey; W. Stephen Lowrey; David Ramirez; Robert Sharpe; and George Warnock. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
| In September 2008, GAO reported internal control weaknesses over the Veterans Health Administration's (VHA) use of $6.9 billion in miscellaneous obligations in fiscal year 2007. In November 2009, GAO reported on deficiencies in corrective action plans to remediate financial reporting control deficiencies. This testimony is based on these previous reports that focused on (1) VHA miscellaneous obligation control deficiencies and (2) Department of Veterans Affairs (VA) financial reporting control deficiencies and VA plans to correct them. For its review of VHA miscellaneous obligations, GAO evaluated VA's policies, procedures, and documentation, interviewed cognizant agency officials, and conducted case studies at three VHA medical centers. For its review of financial reporting control deficiencies, GAO evaluated VA financial audit reports from fiscal years 2000 to 2008 and analyzed related corrective action plans. In September 2008, we reported that VHA recorded over $6.9 billion of miscellaneous obligations for the procurement of mission-related goods and services in fiscal year 2007. We also reported that VA policies and procedures were not designed to provide adequate controls over the authorization and use of miscellaneous obligations, placing VA at significant risk of fraud, waste, and abuse. We made four recommendations with respect to (1) oversight by contracting officials, (2) segregation of duties, (3) supporting documentation for the obligation of funds, and (4) oversight mechanisms. In January 2009, VA issued new policies and procedures aimed at addressing the deficiencies identified in GAO's September 2008 report. In November 2009, we reported that VA's independent public auditor had identified two of VA's three fiscal year 2008 material weaknesses--in financial management system functionality and IT security controls--every year since fiscal year 2000 and the third--financial management oversight--each year since fiscal year 2005. 
While VA had corrective action plans in place that were intended to result in near-term remediation of its internal control deficiencies, many of these plans did not contain the detail needed to provide VA officials with assurance that the plans could be effectively implemented on schedule. For example, 8 of 13 plans lacked key information about milestones for steps to achieve the corrective action and how VA would validate that the steps taken had actually corrected the deficiency. While VA began to staff a new office responsible, in part, for assisting VA and the three administrations in executing and monitoring corrective action plans, we made three recommendations to improve corrective action plan development and oversight. VA concurred with our recommendations and took some steps to address them. In fiscal year 2009, VA's own internal inspections and financial statement audit determined that the internal control deficiencies identified in our prior reports on miscellaneous obligations and the material weaknesses identified in prior financial audits continued to exist. VA conducted 39 inspections, which identified problems with how VHA facilities had implemented VA's new miscellaneous obligation policies and procedures. Similarly, VA's independent auditor reported that VA continued to have material weaknesses in financial management system functionality, IT security controls, and financial management oversight in fiscal year 2009. To the extent that the deficiencies we identified continue, it will be critical that VA have an effective "tone at the top" and mechanisms to monitor corrective actions related to deficient internal controls. In its September 2008 report, GAO made four recommendations to improve VA's internal controls over miscellaneous obligations. In its November 2009 report, GAO made three recommendations to improve VA corrective action plans to remediate financial reporting control deficiencies. 
VA generally concurred with these recommendations and has since reported taking actions to address the recommendations. |
Federal agencies reported improper payments of an estimated $125.4 billion in fiscal year 2010. This estimate represents about 5.5 percent of the $2.3 trillion of reported outlays for the related programs in fiscal year 2010. The $125.4 billion estimate is an increase of $16.2 billion from federal agencies’ prior year reported estimate of $109.2 billion. Estimated improper payment amounts for both of these years may include estimates based on prior years’ data, if current reporting year data were not available, as allowed by OMB guidance. The $125.4 billion in estimated federal improper payments reported for fiscal year 2010 was attributable to over 70 programs spread among 20 agencies. As shown in table 1, the highest reported improper payment estimated amounts were associated with 10 programs. Specifically, the 10 programs accounted for about $118 billion or 94 percent of the total estimated improper payments reported for fiscal year 2010. It is important to recognize that the $125.4 billion in improper payments federal agencies reported in fiscal year 2010 is not intended to be an estimate of fraud in federal agencies’ programs and activities. Rather, reported improper payment estimates include many types of overpayments, underpayments, and payments that were not adequately documented. Agencies cited a number of causes for the estimated $125.4 billion in reported improper payments, including insufficient documentation, incorrect computations, changes in program requirements, and in some cases fraud. Increases in the estimated amounts of improper payments reported for fiscal year 2010 were primarily attributable to increases in estimated improper payments related to four major programs: (1) Department of Labor’s Unemployment Insurance program, (2) Department of the Treasury’s Earned Income Tax Credit program, (3) Department of Health and Human Services’ (HHS) Medicaid program, and (4) HHS’ Medicare Advantage program. 
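The top-line figures above can be checked with simple arithmetic. The following sketch uses only the rounded amounts stated in the testimony (in billions of dollars) to reproduce the year-over-year increase and the share of related outlays:

```python
# Sanity check of the reported improper payment figures.
# All dollar amounts are in billions, as stated in the testimony.

fy2010_estimate = 125.4   # estimated improper payments, fiscal year 2010
fy2009_estimate = 109.2   # estimated improper payments, fiscal year 2009
related_outlays = 2300.0  # reported outlays for the related programs ($2.3 trillion)

increase = fy2010_estimate - fy2009_estimate
outlay_share = fy2010_estimate / related_outlays * 100

print(f"Year-over-year increase: ${increase:.1f} billion")  # $16.2 billion
print(f"Share of related outlays: {outlay_share:.1f}%")     # about 5.5%
```

The computed share (about 5.45 percent) rounds to the "about 5.5 percent" figure reported above.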
Agencies reported that the increases in the estimates for these programs were primarily attributable to an increase in program outlays. That was the case for the Medicaid and Medicare Advantage programs even though these two programs reported lower error rates. Both Unemployment Insurance and Earned Income Tax Credit programs reported higher program outlays and higher error rates for fiscal year 2010 when compared to fiscal year 2009. Since the implementation of IPIA in 2004, federal agencies have consistently identified new programs or activities as risk-susceptible and reported estimated improper payment amounts: fiscal year 2005—17 new programs or activities; fiscal year 2006—15 new programs or activities; fiscal year 2007—19 new programs or activities; fiscal year 2008—10 new programs or activities; fiscal year 2009—5 new programs or activities; and fiscal year 2010—2 new programs or activities. In addition, federal agencies have reported progress since 2004 in reducing improper payment amounts and payment error rates in some programs and activities. From the initial implementation of IPIA in 2004 through 2010, 28 programs have consistently reported estimated improper payment error rates for each year. Of these 28, 17 agency programs reported reduced error rates in comparison with their initial or baseline error rates reported in fiscal year 2004. Following are examples of agencies reporting reductions in program error rates and estimated improper payment amounts (along with corrective actions to reduce improper payments) in their fiscal year 2010 PARs, AFRs, or annual reports. HHS reported that the fiscal year 2010 Head Start program’s estimated improper payment amount decreased from the fiscal year 2009 amount of $213 million to $123 million, which represented a decrease in the error rate of 1.3 percentage points to a 1.7 percent error rate. 
HHS reported that it reduced payment errors by issuing additional guidance for employees on verifying income eligibility and a standard template form to help guide grantees in the enrollment process. The U.S. Department of Agriculture (USDA) reported that the fiscal year 2010 estimated improper payment amount for the Marketing Assistance Loan program decreased from the fiscal year 2009 reported amount of $85 million to $35 million, which represented a decrease in the error rate of 1.75 percentage points to a 0.81 percent error rate. USDA reported that corrective actions taken to reduce improper payments included providing additional training and instruction on improper payment control procedures, and integrating employees’ individual performance results related to reducing improper payments into annual performance ratings. Despite reported progress in reducing estimated improper payment amounts and error rates for some programs and activities during fiscal year 2010, federal agencies’ reporting indicates the federal government still faces challenges in this area. Agency reporting highlighted challenges that remain in meeting the requirements of IPIA, including determining the full extent of improper payments across the federal government and in reasonably assuring that effective actions are taken to reduce improper payments. Specifically, some federal agencies’ fiscal year 2010 reporting did not include information demonstrating that (1) risk assessments were conducted on all of their programs and activities, and (2) improper payment estimates were developed and reported for all risk-susceptible programs. IPIA required agencies to annually review all of their programs and activities to identify their risk of susceptibility to significant improper payments. However, two agencies—the United States Postal Service and the Department of Transportation—did not report on risk assessments of their programs and activities. 
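Because the examples above report each program's fiscal year 2010 error rate together with its percentage-point decrease, the prior-year rates they imply can be recovered by simple addition. A minimal sketch, using only the rates stated in the testimony:

```python
# Implied fiscal year 2009 error rates, derived from the figures above:
# FY2009 rate = FY2010 rate + percentage-point decrease.

programs = {
    # program name: (FY2010 error rate in percent, decrease in percentage points)
    "Head Start (HHS)": (1.7, 1.3),
    "Marketing Assistance Loan (USDA)": (0.81, 1.75),
}

for name, (fy2010_rate, decrease) in programs.items():
    fy2009_rate = fy2010_rate + decrease
    print(f"{name}: implied FY2009 error rate about {fy2009_rate:.2f}%")
# Head Start (HHS): implied FY2009 error rate about 3.00%
# Marketing Assistance Loan (USDA): implied FY2009 error rate about 2.56%
```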
The agencies either did not report any information on risk assessments in their PARs, AFRs, or annual reports, or included some risk assessment-related information (such as listing risk factors), but not the results of any assessments of risk for all of their programs and activities. Further, IPIA required agencies to estimate improper payments for each program identified as susceptible to significant improper payments during the risk assessment process. However, three agencies did not report estimated improper payment amounts for fiscal year 2010 for seven risk-susceptible programs with significant amounts of outlays. Most notably, HHS has yet to report a comprehensive improper payment estimate for the Medicare Prescription Drug Benefit program, which had about $59 billion in outlays in fiscal year 2010. However, HHS expects to report a comprehensive estimate for this program in fiscal year 2011. While none of the seven risk-susceptible programs reported an estimated improper payment amount in fiscal year 2010 or 2009, all but one—the Medicare Prescription Drug Benefit program—reported an estimated improper payment amount for fiscal year 2008. During fiscal year 2010, a number of actions were taken to strengthen the framework for reducing and reporting improper payments. First, in November 2009, the President issued Executive Order 13520, Reducing Improper Payments. This order was intended to focus on increasing transparency and accountability for reducing improper payments and creating incentives for reducing improper payments. Under the Executive Order, OMB established a Web site (www.PaymentAccuracy.gov) and designated 14 high-error programs to focus attention on the programs that significantly contribute to the federal government’s improper payments. 
The Web site provides information on (1) the programs’ senior accountable officials responsible for efforts to reduce improper payments; (2) current, targeted, and historical estimated rates of improper payments; (3) why improper payments occur in the programs; and (4) what federal agencies are doing to reduce improper payments and recover overpayments. The President also issued two memoranda, in March and June 2010, intended respectively to expand agency efforts to recapture improper overpayments using recapture audits and to direct the establishment of a Do Not Pay List to help prevent improper payments to ineligible recipients. In addition, in 2010, the President set goals, as part of the Accountable Government Initiative, for federal agencies to reduce overall improper payments by $50 billion, and recapture at least $2 billion in improper contract payments and overpayments to health providers, by the end of fiscal year 2012. In July 2010, Congress passed and the President signed IPERA. This legislation was intended to enhance the reporting and reduction of improper payments. In addition to amending the IPIA improper payment estimation requirements, IPERA established additional requirements related to (1) federal agency management accountability; (2) recovery auditing aimed at identifying and reclaiming payments made in error; (3) compliance and noncompliance determinations based on an inspector general’s assessment of an agency’s adherence to IPERA requirements and reporting of that determination; and (4) an opinion on internal controls over improper payments. For example, regarding management accountability, IPERA requires agency managers, programs, and, where appropriate, states and localities to be held accountable for achieving the law’s goals. This includes management’s use of the annual performance appraisal process to assess whether improper payment reduction targets were met and whether sufficient internal controls were established and maintained. 
In addition, IPERA included a new, broader requirement for agencies to conduct recovery audits, where cost-effective, for each program and activity with at least $1 million in annual program outlays. This IPERA provision significantly reduces the threshold for conducting recovery audits from $500 million to $1 million and expands the scope of required recovery audits to all programs and activities. Previously, recovery audits were required only for agencies whose annual contract obligations exceeded the threshold. Another new IPERA provision calls for federal agencies’ inspectors general to annually determine whether their respective agencies are in compliance with key IPERA requirements and to report on their determinations. In closing, given the pressures resulting from today’s fiscal environment, the need to ensure that federal dollars are spent as intended is critical. While the increase in governmentwide improper payment estimates is alarming, federal agencies’ efforts to more comprehensively report estimated improper payments represent a positive step toward improving transparency over the full magnitude of federal improper payments for which corrective actions are necessary. With more federal dollars flowing into risk-susceptible programs, establishing effective accountability measures to prevent and reduce improper payments, and to recover overpayments, becomes an even higher priority. However, measuring improper payments and designing and implementing actions to reduce and prevent them are not simple tasks. Nonetheless, the ultimate success of the governmentwide effort to prevent and reduce improper payments hinges on the level of sustained commitment the agencies and the administration place on implementing the requirements established by IPERA, the Executive Order, and other guidance. We view the recent actions taken by Congress and the administration as positive steps toward improving transparency over and reducing improper payments. 
However, it is too soon to determine whether the activities called for in the Executive Order, Presidential memoranda, and IPERA will achieve their goal of reducing improper payments while continuing to ensure that federal programs serve and provide access to intended beneficiaries. Moreover, congressional efforts to oversee agencies will be essential to ensure that agencies are taking the appropriate actions to fully implement these administrative and legislative requirements to improve accountability, achieve targeted goals, and reduce overall improper payments. Chairman Platts, Ranking Member Towns, this completes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For more information regarding this testimony, please contact Kay L. Daly, Director, Financial Management and Assurance, at (202) 512-9312 or by e-mail at dalykl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony included Shirley Abel, Assistant Director; Sabrina Springfield, Assistant Director; Liliam Coronado; Nicole Dow; Vanessa Estevez; Crystal Lazcano; Chelsea Lounsbury; Kerry Porter; Debra Rucker; and Danietta Williams. Medicare and Medicaid Fraud, Waste, and Abuse: Effective Implementation of Recent Laws and Agency Actions Could Help Reduce Improper Payments. GAO-11-409T. Washington, D.C.: March 9, 2011. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011. For our report on the U.S. government’s consolidated financial statements for fiscal year 2010, see U.S. Department of the Treasury. 2010 Financial Report of the United States Government. Washington, D.C.: December 21, 2010, pp. 221-249. 
Improper Payments: Progress Made but Challenges Remain in Estimating and Reducing Improper Payments. GAO-09-628T. Washington, D.C.: April 22, 2009. | GAO's work over the past several years has highlighted long-standing, widespread, and significant problems with improper payments in the federal government. Fiscal year 2010 marked the 7th year of implementation of the Improper Payments Information Act of 2002 (IPIA). IPIA requires executive-branch agencies to identify programs and activities susceptible to significant improper payments, estimate annual amounts improperly paid, and report these estimates and actions taken to reduce them. On July 22, 2010, the Improper Payments Elimination and Recovery Act of 2010 (IPERA) was enacted. IPERA amended IPIA and expanded requirements for recovering overpayments across a broad range of federal programs. This testimony addresses (1) progress federal agencies have reported in estimating and reducing improper payments in fiscal year 2010, (2) challenges that continue to hinder full reporting of improper payment information, and (3) recent efforts by Congress and the executive branch intended to improve transparency and accountability for reporting, reducing, and recovering improper payments. This testimony is primarily based on prior GAO reports. GAO summarized available fiscal year 2010 improper payment information reported by federal executive-branch agencies and actions taken by the executive branch and Congress intended to improve transparency over, accountability for, and reduction of improper payments. 
Federal agencies reported an estimated $125.4 billion in improper payments for fiscal year 2010. The $125.4 billion estimate of improper payments federal agencies reported in fiscal year 2010 was attributable to over 70 programs spread among 20 agencies. Federal agencies' fiscal year 2010 estimated improper payment amount is an increase of $16.2 billion from federal agencies' prior year reported estimate of $109.2 billion. (1) Progress Reported in Estimating and Reducing Improper Payments. Since the initial implementation of IPIA in fiscal year 2004, federal agencies have consistently identified new programs or activities as risk-susceptible and reported estimated improper payment amounts. In addition, federal agencies have reported progress in reducing improper payments and payment error rates in some programs and activities. From fiscal years 2004 through 2010, 28 programs have consistently reported estimated improper payment error rates for each year. Of these 28, 17 agency programs reported reduced error rates in comparison with their initial or baseline error rates reported in fiscal year 2004. (2) Challenges Remain in Meeting Legislative Requirements to Fully Report Improper Payments Information. Agency reporting highlighted challenges that remain in meeting the requirements of IPIA, including determining the full extent of improper payments across the federal government and reasonably assuring that effective actions are taken to reduce improper payments. Specifically, two agencies did not report on risk assessments of their programs and activities, and three agencies did not develop and report improper payment estimates for seven risk-susceptible programs with significant amounts of outlays. (3) Recent Efforts to Address Improper Payments. During fiscal year 2010, a number of changes and initiatives were put in place that are intended to strengthen the framework for reducing and reporting improper payments. 
For example, the President issued Executive Order 13520, Reducing Improper Payments. The President also issued two memoranda intended to expand agency efforts to recapture overpayments and directed that a Do Not Pay List be established to help prevent improper payments. Further, IPERA was enacted. In addition to amending IPIA's existing requirements, IPERA establishes additional requirements related to, among others, (1) federal agency management accountability and (2) recovery auditing aimed at identifying and reclaiming payments made in error. We view these actions as positive steps; however, it is too soon to determine whether these activities will achieve their goal of reducing improper payments while continuing to ensure that federal programs serve and provide access to intended beneficiaries. |
The Veterans’ Education Assistance Act of 1984 created the current Montgomery GI bill program. Under the GI bill program (Title 38 United States Code Chapter 30), veterans who have met their duty obligation and were honorably discharged are eligible to receive GI benefits. Active-duty personnel are also eligible to use GI benefits. In fiscal year 2000, approximately $923 million was paid to veterans through the GI bill program. The GI bill program is a contributory program, which requires service members, upon enlistment, to contribute $100 per month for their first 12 months of service. Although service members have the option to decline participation at enlistment, almost all enroll in the GI bill program. In fiscal year 1999, 96 percent of all active-duty eligible enlistees enrolled in the GI bill program. GI benefits end 10 years from last discharge, and if veterans do not use their GI benefits within that time period, their contributions are not refunded. Veterans may use GI benefits for a variety of postsecondary education programs, including an undergraduate or graduate degree; courses that lead to a certificate or diploma from a business, technical, or vocational school; correspondence training; on-the-job training or apprenticeships; and vocational flight training. The monthly benefit amount varies depending on the type of coursework in which the veteran enrolls and whether the student attends part-time or full-time. For veterans who enroll full-time in an undergraduate degree program in academic year 2001-02, the monthly GI benefit is $672 per month for a maximum of 36 months—a total of $6,048 for each of 4 academic years. The Veterans Education and Benefits Expansion Act of 2001 (P.L. 107-103) increases the monthly GI benefit to $800 per month as of January 1, 2002. 
Additionally, the law increases the GI benefit to $900 per month on October 1, 2002 and to $985 per month beginning on October 1, 2003—a total of $8,100 in 2002-03 and $8,865 in 2003-04. Most veterans use their GI benefits to pursue an undergraduate degree. For example, in fiscal year 2001, 88 percent of GI bill beneficiaries were pursuing an undergraduate degree. In addition, 62 percent of GI bill beneficiaries were enrolled in school full-time. Title IV of HEA of 1965, as amended in 1998, authorizes the federal government’s financial aid programs for postsecondary education. Title IV programs include the following: Pell grant—grants to undergraduate students who are enrolled in a degree or certificate program and have financial need. Subsidized Stafford loan—loans made to students enrolled at least half- time in an eligible program of study and have demonstrated financial need. The federal government pays the interest costs on the loan while the student is in school. Unsubsidized Stafford loan—loans made to students enrolled at least half-time in an eligible program of study. Although the terms and conditions of the loan (e.g., interest rates) are the same as those for subsidized loans, the federal government does not pay the interest costs on the loan while the student is in school. Students are therefore responsible for all interest costs. PLUS loans—loans made to parents of dependent undergraduate students enrolled at least half-time in an eligible program of study. Borrowers are responsible for paying all interest on the loan. Campus-based aid—allocated to participating institutions by the U.S. Department of Education. The institutions then award the following aid to students: Supplemental Educational Opportunity Grant (SEOG)—grants for undergraduate students with financial need. Priority for this aid is given to Pell grant recipients. Perkins loans—low-interest (5 percent) loans to undergraduate and graduate students. 
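The monthly-to-annual arithmetic above can be sketched in a few lines. This is an illustrative calculation only; it assumes a 9-month academic year (36 months of benefit spread over 4 academic years), which is consistent with the dollar figures cited in the text.

```python
# Hedged sketch of the GI bill benefit arithmetic described above.
# Assumption: 36 months of benefit over 4 academic years implies a
# 9-month academic year, matching the cited totals.

MONTHS_PER_ACADEMIC_YEAR = 9

def annual_gi_benefit(monthly_rate: int) -> int:
    """Total GI benefit for one academic year at a given monthly rate."""
    return monthly_rate * MONTHS_PER_ACADEMIC_YEAR

# Rates cited in the text:
print(annual_gi_benefit(672))   # 2001-02 rate -> 6048
print(annual_gi_benefit(900))   # rate from October 1, 2002 -> 8100
print(annual_gi_benefit(985))   # rate from October 1, 2003 -> 8865
```

Each result reproduces a total stated in the text ($6,048, $8,100, and $8,865, respectively).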
Interest does not accrue while the student is enrolled at least half-time in an eligible program. Priority is given to students who have exceptional financial need. Work-study—students are provided on- or off-campus jobs in which they earn at least the current federal minimum wage. The institution or off-campus employer pays a portion of their wages. The Department of Education is responsible for administering $46 billion a year through these programs. As shown in figure 1, most of these funds are subsidized and unsubsidized Stafford loans. In addition to the Title IV programs, there are tax provisions that students and their families may be eligible to use for postsecondary education. The Taxpayer Relief Act of 1997 (P.L. 105-34) created several provisions to help families pay for postsecondary education, such as the HOPE tax credit, Lifetime Learning tax credit, and the student loan interest deduction. The act also created tax-exempt savings accounts, called education individual retirement accounts (IRAs), to help families save for college. See table 1 for further detail about the tax provisions created under the Taxpayer Relief Act. The act also made changes to the tax treatment of state tuition savings plans, extended the exclusion for educational assistance provided by employers from students’ taxable incomes, and exempted IRA distributions for qualified higher education expenses from early withdrawal penalties. In 2001, the Economic Growth and Tax Relief Reconciliation Act (P.L. 107-16) expanded and modified several of the tax provisions previously mentioned. Most notably, the act created a new tax deduction for tuition and fees for taxpayers, even those who do not itemize. In 2002 through 2005, taxpayers will have the option of using either the HOPE tax credit or the tuition deduction. The 2001 act also made changes to the student loan interest deduction and education IRAs. 
To obtain financial aid under Title IV programs, students must apply using the Free Application for Federal Student Aid (FAFSA). Information from the FAFSA is used to determine the amount of money—called the expected family contribution (EFC)—that a student, or student’s family, is expected to contribute to the student’s education. A student is classified as either financially dependent on his or her parents or independent in the financial aid process. This classification is important because it affects the factors used to determine a student’s EFC. For dependent students, the EFC is based on both the student’s and parents’ income and assets, as well as whether the family has other children enrolled in college. For independent students, the EFC is based on the student’s and, if married, spouse’s income and assets and whether the student has any dependents other than a spouse. By law, veteran students are automatically classified as independent students, and GI benefits are not included as veterans’ income for purposes of calculating EFC. Once the EFC is established, it is compared with the cost of attendance at the institution the student will attend to determine the student’s financial need. In the federal financial aid process, an institution’s cost of attendance includes tuition and fees, room and board, books, supplies, transportation, and miscellaneous personal expenses. The type of institution that a student attends will affect the cost of attendance. Typically, public 2-year institutions cost significantly less than public 4-year institutions or private 4-year institutions. For example, on average, the annual cost to attend a public 2-year college as a commuter student was $7,024 compared with $11,338 for resident students at public 4-year institutions and $24,946 for resident students at private 4-year institutions. 
If the EFC is equal to or greater than the cost of attendance, then the student is not considered to have financial need for federal aid programs. If the EFC is less than the cost of attendance, then the student is considered to have financial need. Financial aid administrators at the school the student is attending then create a federal financial aid package that could include grants, loans, and work-study. The basic premise for awarding need-based aid is that a student’s “total financial aid package must not exceed a student’s need.” Veterans are an exception to this rule, and in some cases, those who receive GI benefits may receive a combined federal aid package of GI benefits and need-based aid, such as Pell grants and subsidized Stafford loans, that is greater than their financial need. Figure 2 highlights these key steps in determining a student’s financial need for Title IV programs. Most students, whether receiving a GI benefit or not, enroll in a public institution: 81 percent of the general student population and 89 percent of GI bill students do so, while 19 percent of all students and 11 percent of GI bill students enroll in a private institution. Because small proportions of the general student population and GI bill students enroll at private 2-year institutions, we focused our analysis on the other three types of institutions. Veterans’ eligibility for some federal grants, loans, and tax incentives may be affected by the receipt of GI benefits. Veterans can apply for federal Title IV aid; however, receiving GI benefits may affect their eligibility for some Title IV aid. Pell grant aid is awarded to students based mainly on income. The amount awarded to students is the difference between the maximum Pell grant award for that academic year and the student’s EFC. GI benefits are not included in the formula used to calculate EFC and therefore receiving them does not affect veterans’ eligibility for Pell grants. 
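The need-determination and Pell grant rules described above reduce to two simple formulas. The sketch below is illustrative, not the statutory need-analysis methodology; the EFC and maximum Pell values are hypothetical, and the $11,338 cost figure is the average public 4-year resident cost cited earlier.

```python
def pell_award(max_pell: float, efc: float) -> float:
    # Pell grant: the difference between the year's maximum award and the
    # student's EFC (GI benefits are excluded from the EFC calculation,
    # so receiving them does not reduce this amount).
    return max(0.0, max_pell - efc)

def financial_need(cost_of_attendance: float, efc: float) -> float:
    # A student has financial need only when cost of attendance exceeds EFC.
    return max(0.0, cost_of_attendance - efc)

# Illustrative (hypothetical) numbers:
print(pell_award(3300, 1200))        # -> 2100
print(financial_need(11338, 1200))   # -> 10138
print(financial_need(11338, 12000))  # EFC exceeds cost -> 0.0, no need
```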
For all other Title IV aid, EFC and the amount of other financial assistance students receive is subtracted from the cost of attendance to determine financial need. Depending on the aid program, GI benefits may or may not be considered as another source of financial assistance for veteran students, thus affecting veterans’ eligibility for these programs. By law, GI benefits are specifically excluded as another source of assistance for veteran students when awarding subsidized Stafford loans. Since GI benefits are excluded from the calculations used to award subsidized Stafford loans, veterans and nonveterans are equally eligible for these loans. On the other hand, when determining eligibility for unsubsidized Stafford loans and campus-based aid, GI benefits are considered another source of assistance. Regulations issued in 1999 allow financial aid administrators some flexibility to exclude the value of GI benefits in the calculations for campus-based aid, but only if the GI bill student also receives a subsidized Stafford loan. If this flexibility is used, veteran students’ eligibility for campus-based aid may not be affected by receipt of GI benefits. With regard to various federal tax incentives available for postsecondary education, receiving GI benefits does not prevent veterans from claiming such benefits, but may affect the amount they would be eligible to claim. On average, veterans and nonveterans with similar characteristics are awarded about the same amount of federal Title IV aid, and when GI benefits are included, the total amount of federal assistance for postsecondary education is greater for veterans than nonveterans. When GI benefits are combined with Pell grant and Stafford loan aid, veterans receive aid packages that include a lower proportion of loans than nonveterans. Veterans receive smaller campus-based aid awards than nonveteran dependents, but more than nonveteran independent students at 4-year institutions. 
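The treatment of GI benefits as "another source of financial assistance" can be illustrated with a hedged sketch. The function and all dollar amounts here are hypothetical, chosen only to show how counting GI benefits (as for unsubsidized Stafford loans) shrinks remaining need, while excluding them (as the law requires for subsidized Stafford loans) leaves veterans and nonveterans equally eligible.

```python
def remaining_need(cost: int, efc: int, other_aid: int,
                   gi_benefits: int, count_gi: bool) -> int:
    # For Title IV aid other than Pell grants:
    #   need = cost of attendance - EFC - other financial assistance.
    # By law, GI benefits are excluded when awarding subsidized Stafford
    # loans (count_gi=False); they count as assistance for unsubsidized
    # Stafford loans and, absent the 1999 regulatory flexibility, for
    # campus-based aid (count_gi=True).
    assistance = other_aid + (gi_benefits if count_gi else 0)
    return max(0, cost - efc - assistance)

# Hypothetical veteran: cost $11,338; EFC $1,200; Pell $2,000; GI $6,048.
print(remaining_need(11338, 1200, 2000, 6048, count_gi=False))  # subsidized: 8138
print(remaining_need(11338, 1200, 2000, 6048, count_gi=True))   # unsubsidized: 2090
```

The gap between the two results is exactly the GI benefit, which is why veterans with similar characteristics receive the same subsidized loans as nonveterans but different unsubsidized amounts.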
Although the actual amount claimed in HOPE and Lifetime Learning tax credits by veteran and nonveteran students is unknown, there are several factors that affect the amount a student may be eligible to claim. At each type of institution, veterans are awarded total federal aid packages of Pell grant, Stafford loans, and GI benefits that are greater than those awarded to nonveteran students. When veterans and nonveterans have similar characteristics, such as family income, they are awarded on average the same Pell grant and subsidized Stafford loan aid. In contrast, veterans and nonveterans who have similar characteristics are awarded different amounts of unsubsidized Stafford loans. This is largely due to the fact that GI benefits are included in the formula used to award unsubsidized Stafford loans. The only exception is at private 4-year institutions, where veteran and nonveteran independent students are awarded the same amount of unsubsidized Stafford loan aid: the average cost of attendance is much greater at these institutions, so veterans retain sufficient remaining need to qualify for unsubsidized Stafford loans even after GI benefits are counted. When comparing total federal aid packages of Pell grants, Stafford loans, and GI benefits, veterans’ aid packages have a lower percentage of loans and a larger percentage of grant aid than nonveterans’ at each type of institution. As shown in figure 4, among students who attended public 2-year institutions, veterans received aid packages that were greater than nonveterans’ and consisted primarily of grants. Both nonveteran independent and dependent students at public 2-year institutions typically received federal aid packages that consisted primarily of loans. Among students attending public 4-year institutions, veterans also received federal aid packages that were larger than nonveterans’ and most veterans’ packages had a lower percentage of loan aid than nonveteran students’, as shown in figure 5. 
Only low-income, nonveteran dependent students received an aid package with a greater proportion of grant aid than loan aid. Among middle-income students, nonveteran independents were awarded almost the same amount of aid as veterans; however, nonveteran independents’ aid package was 78 percent loans while veterans’ aid package was 39 percent loans. Middle-income nonveteran dependents’ federal aid package consisted entirely of loans. Among students attending private 4-year institutions, veteran students were awarded total aid packages that were greater than those awarded to nonveterans. As shown in figure 6, veterans’ aid packages were more evenly balanced between grants and loans, with the exception of high-income veteran students, whose aid package included slightly more loans than grants. In several cases, nonveteran students at private 4-year institutions received aid packages that were entirely loans. See appendix I for detailed data on estimated federal Title IV aid awarded to veteran and nonveteran students at each type of institution. The amount of campus-based aid awarded to veteran and nonveteran students varied across type of institution attended and by a nonveteran student’s dependency status. As shown in table 2, in academic year 1999-2000, veteran students received lower awards than nonveteran dependent students at all institutions and nonveteran independent students at public 2-year institutions, while veterans who attended public 4-year and private 4-year institutions received larger campus-based aid awards than nonveteran independent students. Information on the exact amount that veteran and nonveteran students claimed for the HOPE and Lifetime Learning tax credits and the student loan interest deduction is not known; nonetheless, there are several factors that are known to affect the amount one may be eligible to claim. 
The amount of HOPE or Lifetime Learning tax credit one may be eligible to claim is affected by several factors, including the amount of tuition and fees paid, amount of GI benefits and grant aid received, and family income and taxes owed. Generally, veteran and nonveteran students who pay higher tuition and required fees, such as students who attend private 4-year institutions or those with a tax liability greater than the HOPE credit, may claim the full credit of $1,500. Veterans and nonveterans who pay lower tuition and required fees may claim a partial HOPE tax credit. Additionally, veterans who receive GI benefits or nonveteran students who receive grants that equal or exceed the amount of tuition and fees paid may not claim a HOPE tax credit. Likewise, veterans and nonveterans who pay higher tuition and fees may claim the full Lifetime Learning credit of $1,000, while veterans who receive GI benefits or nonveterans who receive grant aid that equals or exceeds the amount of tuition and fees paid may not claim a Lifetime Learning tax credit. Any student whose income is not higher than $55,000 ($75,000 if a joint filer) and who pays interest on a qualified education loan is eligible to deduct up to $2,500 per year for the first 60 months that interest has been paid on an education loan. In written comments on our draft report, the Department of Veterans Affairs generally agreed with our reported findings. Veterans Affairs suggested that we include education awards provided through the AmeriCorps program in our review as well. Our analysis focused on comparing benefits that students receive in exchange for their military service to benefits that are awarded under Title IV of HEA and for which no service is required. We did not include AmeriCorps education awards in our review because they are not part of Title IV and because education awards are provided in exchange for community service. 
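The factors above can be combined into an illustrative calculation. This is a simplified sketch, not tax guidance: it assumes the statutory rates in effect at the time (100 percent of the first $1,000 of qualified expenses plus 50 percent of the next $1,000 for HOPE; 20 percent of up to $5,000 for Lifetime Learning), treats GI benefits and grants as reducing qualified expenses as the text describes, and omits the income phase-outs.

```python
def hope_credit(tuition_and_fees: float, tax_free_aid: float,
                tax_liability: float) -> float:
    # Qualified expenses: tuition and fees minus tax-free assistance
    # such as GI benefits and grants. Assumed statutory HOPE rates:
    # 100% of the first $1,000 plus 50% of the next $1,000 (max $1,500).
    qualified = max(0.0, tuition_and_fees - tax_free_aid)
    credit = min(qualified, 1000) + 0.5 * min(max(qualified - 1000, 0), 1000)
    return min(credit, tax_liability)  # nonrefundable: capped by taxes owed

def lifetime_learning_credit(tuition_and_fees: float, tax_free_aid: float,
                             tax_liability: float) -> float:
    # Assumed rate: 20% of up to $5,000 of qualified expenses (max $1,000).
    qualified = max(0.0, tuition_and_fees - tax_free_aid)
    return min(0.20 * min(qualified, 5000), tax_liability)

print(hope_credit(6000, 2000, 3000))               # full credit: 1500.0
print(lifetime_learning_credit(6000, 2000, 3000))  # 800.0
print(hope_credit(5000, 6048, 3000))               # GI benefits exceed tuition: 0.0
```

The last call shows the rule the text emphasizes: a veteran whose GI benefits equal or exceed tuition and fees may claim no credit at all.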
In addition, the number of students who earn AmeriCorps education awards is small compared to students who receive benefits through Title IV aid programs. In fiscal year 2000, about 27,000 participants earned an AmeriCorps education award compared to approximately 7.6 million students who received aid through Title IV programs in fiscal year 2001. Veterans Affairs’ written comments are printed in appendix II. The Department of Education provided technical clarifications on our draft report and we have incorporated these where appropriate. We are sending copies of this report to the secretary of education and secretary of veterans affairs and other interested parties. We will also make copies available to others upon request. The report is available at GAO’s homepage, http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-8403. Other major contributors include Jeff Appel and Andrea Romich Sykes. The following three tables provide estimates of the amount of Pell grant, subsidized Stafford loan, and unsubsidized Stafford loan aid awarded to veteran and nonveteran students at public 2-year, public 4-year, and private 4-year institutions and the amount of GI benefits available in academic year 1999-2000. | The Montgomery GI Bill provides a monthly stipend to pay postsecondary education expenses for veterans and eligible service members. Concerns have been raised about whether GI benefits adequately cover educational costs and whether the receipt of GI benefits affects other federal financial assistance available to postsecondary students under Title IV of the Higher Education Act and the Internal Revenue Code. Under Title IV, GI benefits do not affect the amount of aid veterans receive under the Pell grant and subsidized Stafford loan programs but may affect the amount they receive in unsubsidized loans and through campus-based aid programs. 
Depending on the program, GI benefits may be considered as another source of financial assistance for students, which may decrease a veteran student's financial need and thus the amount of need-based aid provided. With regard to available federal tax incentives, the receipt of GI benefits does not preclude veterans from claiming such benefits but may affect the amount they would be eligible to claim. On average, veterans and nonveterans with comparable characteristics are awarded similar amounts of federal Title IV aid. When GI benefits are included, the total amount of federal assistance is greater for veterans than it is for nonveterans. Moreover, veterans' total aid, including Pell grants, Stafford loans, and GI benefits, has a lower proportion of loans compared with nonveterans' packages. On average, veteran students received slightly lower average campus-based aid awards than did nonveteran dependent students, but they received more than nonveteran independent students at 4-year institutions. The actual amount of HOPE and Lifetime Learning tax credits claimed by veteran and nonveteran students is unknown. However, the amount of tuition and fees paid, amount of GI benefits and grant aid received, and family income and taxes owed will affect the amount of tax credit one may claim. |
DOD provides information to the public about its animal use projects through two main sources—an annual report to the Congress and the BRD. The annual report to the Congress provides information in a summary form on animal use activities, including numbers and types of animals used, general purposes for which animals were used, and DOD’s animal care and use oversight procedures. DOD provided its first annual report in 1994 in response to the direction of the House Armed Services Committee, as contained in its Committee Report on the National Defense Authorization Act for the Fiscal Year 1993. In House Report 103-499, however, the House Armed Services Committee noted that DOD’s annual report had not provided sufficient detail about its animal research programs and activities. The House Report directed DOD to “develop a mechanism for providing the Congress and interested constituents with timely information . . . about its animal use programs, projects, and activities, both intramural and extramural.” One mechanism, according to the House Report, would be a database with information about the research goal and justification, cost, procedures, kinds and numbers of animals used, and information about the pain to which these animals are subjected. In response to that report, in October 1995 DOD established the BRD, a database about individual projects using animals that is accessible by the public through the Internet. For each ongoing DOD animal use project, it provides a project summary that includes the funding amount, the location of the research, and a brief statement of the project’s research objectives and methods. Research projects cover a broad range of topics such as using animals in the development of vaccines to protect against biological warfare agents and technologies to improve treatment methods for combat casualty care. 
Information for the BRD is collected from DOD agencies and military commands, organizations, and activities involved in the performance and funding of animal care and use programs. Typically the researcher or the veterinary services department at each facility provides the information about each research project for the BRD and the annual report. This is information that facilities routinely maintain as part of the process of granting researchers the approval to conduct research and then subsequently ordering animals for the research project. The BRD includes research funded by DOD as well as research performed by DOD that is funded by external sources such as the National Institutes of Health and the Alzheimer Association. The BRD, which is updated annually, contained 805 project summaries for fiscal year 1996. It was updated to reflect fiscal year 1997 projects on October 1, 1998, one year after the fiscal year ended; project summaries for fiscal year 1996 were replaced by those for fiscal year 1997. DOD has made progress in making information available to the public on its animal research programs and activities. Prior to the creation of the BRD, information on animal research was contained as part of a larger Defense Technology Information Center (DTIC) database, which includes the broad range of DOD research and development projects. However, DOD did not require all of its animal research activities, such as those involving clinical training or investigations, to be reported to the DTIC database. DOD now requires all animal research projects to be reported separately in the BRD. In addition, the BRD is publicly available on the Internet, while the DTIC database has restricted public access. The fiscal year 1996 BRD had a number of problems, including inaccurate and incomplete disclosure of information about DOD’s animal use projects. 
These problems stem from DOD not collecting certain valuable information from animal use facilities and not reporting certain other information that it did collect. Other problems of inaccuracy or inconsistency in the database were due to flawed data reported to DOD by facilities. The BRD is inaccurate with respect to the number of animal use projects. For example, in the course of performing our work, we found seven projects or research protocols that were not included in the database. These projects were performed at three different DOD organizations: the Armed Forces Radiobiology Research Institute, the Army’s Landstuhl Regional Medical Center, and the Marine Corps’ Camp Lejeune Field Medical Service School. The animals used included goats, sheep, rodents, and nonhuman primates. Conversely, we identified 19 projects in the fiscal year 1996 BRD related to medical research for biological defense that did not involve the use of animals that year (although they did involve animals in other years). In addition, we identified one project that was reported twice in the database—two different DOD organizations reported the same project. Cost information provided in the BRD is not always accurate and consistent. For example, the fiscal year 1996 funding amount provided in the BRD for some projects covered a longer period than just fiscal year 1996. In other cases, the amounts of funding shown were inconsistent because the funding for some projects was listed as an abbreviated notation of a larger amount without providing adequate explanation. For example, in the case of the project erroneously reported twice, one project summary showed funding as “28,” while the other showed the amount as “28000.” These discrepancies make it difficult, if not impossible, to accurately determine from the BRD the cost of these animal research projects for the fiscal year. 
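Checks of the kind that would have caught the discrepancies described above can be sketched simply. The function names and the $1,000 threshold here are hypothetical, meant only to illustrate flagging abbreviated funding notation and detecting duplicate project entries, not to describe any validation DOD actually performs.

```python
def flag_funding_notation(value: str) -> bool:
    # Flags funding figures that look like abbreviated thousands, e.g. the
    # "28" vs. "28000" discrepancy noted above: a bare figure under $1,000
    # is implausibly small for a research project and likely elided.
    return value.isdigit() and int(value) < 1000

def find_duplicates(project_ids) -> set:
    # Returns project identifiers reported more than once, such as the
    # project two DOD organizations each submitted to the BRD.
    seen, dupes = set(), set()
    for pid in project_ids:
        (dupes if pid in seen else seen).add(pid)
    return dupes

print(flag_funding_notation("28"))         # True -- probably means 28,000
print(flag_funding_notation("28000"))      # False
print(find_duplicates(["A1", "B2", "A1"]))  # {'A1'}
```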
Additionally, the BRD does not disclose the funding source for the projects, making it impossible to determine which projects were funded by DOD and which by external sources. Furthermore, the BRD does not contain certain information identified in House Report 103-499. For instance, it does not provide the numbers and species of animals used for DOD projects nor does it include information about the pain to which animals were subjected. Summary information is provided for numbers and types of animals used and pain categories in DOD’s annual reports to the Congress, but these reports lack information on individual programs and activities. Another type of information that was mentioned in the House report is generally absent in BRD project summaries. Few project summaries identify the military or nonmilitary justification of the project. Although some of the projects are directly tied to a military goal, such as developing more effective transfusion fluids for combat casualties, others are not tied to a military goal but are still being done under a specific congressional directive, such as DOD’s extensive breast cancer research program. Without this information the Congress and the public cannot identify projects by the type of requirement they support. DOD does not collect information on the justification of each project as part of its data collection for the BRD. The version of the BRD available to the public also does not contain a data field that describes the broader animal use categories listed in DOD’s annual report to the Congress on animal care and use. Examples of these categories are research on infectious diseases, research relating to combat casualty care, and training for medical personnel. The absence of this information prevents the public from identifying how individual research projects link together into these broader research areas. We also found variations in the levels of specificity reported on the projects in the BRD. 
Whereas most of the 805 project summaries represent an individual line of research, several summaries report broad groups of research projects. For example, the Uniformed Services University of the Health Sciences placed 64 separate project summaries in the BRD reflecting detailed distinctions among its various clinical research activities, such as “Virulence Mechanisms of Salmonella Typhi.” In contrast, Fitzsimons Army Medical Center reports only two clinical research project summaries that are described broadly as “Animal-Facilitated Clinical Medicine Studies in Support of Graduate Medical Education” and “Animal-Facilitated Clinical Surgical Studies in Support of Graduate Medical Education.” These two summaries merged as many as 29 separate projects. DOD guidance to the animal use facilities on preparing project summaries allows facilities broad discretion in determining what constitutes a project. We identified one classified project in the BRD that involved research on animals for the development of a weapon system. While we found no problem with the information reported in the BRD for this project, it appears inconsistent with DOD’s fiscal year 1996 annual animal care and use report to the Congress, which stated that no animals had been used for offensive weapons testing during fiscal year 1996. We recommend that the Secretary of Defense continue to take steps to improve the BRD. Specifically, the Secretary should improve the data collection and reporting procedures to ensure that the BRD contains accurate, detailed information about individual animal research projects, including information on the number and species of animals used in each project, the research goal and justification, and the pain categories for each project as identified in House Report 103-499. 
In addition, to improve public accountability, we recommend that the Secretary provide other information in the BRD, such as the appropriate animal use categories for each project, consistent with information reported in the DOD’s annual reports to the Congress, and ensure that the information contained in the BRD be presented in a uniform manner for all projects. In written comments on a draft of this report (see app. I), DOD partially concurred with our first recommendation and concurred with our second recommendation. Specifically, DOD said it will provide additional training to on-site veterinarians who are responsible for submitting data, take steps to clarify funding information for individual project summaries, include animal use categories for each project summary, and require reporting of all projects that have any animal use. DOD stated that it will institute these changes prior to the fiscal year 1999 annual report. DOD, however, expressed a concern that our recommendation to provide further detail on the number and species of animals, the research goals and justifications, and pain categories for each project summary would require an extensive upgrade of the existing BRD software and hardware capacity, would duplicate information that is already available in the DOD annual report on animal use activities, and would not improve animal welfare. DOD also contended that information in the BRD is uniformly presented. DOD also provided technical comments, which we incorporated where appropriate. The changes that DOD proposes adopting will improve the quality of the BRD. But we believe that additional detail on each project summary is necessary to respond to the original direction of the House Armed Services Committee as well as to improve public accountability. Moreover, we believe that this detail can be provided in the BRD without a significant increase in resource expenditures. 
As pointed out in this report, the number and species of animals used and the pain category of the research are collected on a routine basis by DOD research and training facilities as a means of monitoring and tracking animal use activities. Furthermore, much of this information is already gathered for the DOD annual report, although it is only reported in terms of aggregate animal use and not by individual projects. DOD also needs to ensure a more consistent level of reporting of animal use activities. Facilities conducting clinical research, for example, should submit summaries for the BRD at a project rather than program level. Incorporating these additional changes would further improve what is an important source of information on animal welfare to the public. In the course of our work examining issues related to DOD’s oversight of its animal research programs, we reviewed the BRD because it contains information on individual animal use projects. As we reviewed information contained in the BRD, conducted interviews with DOD officials, reviewed relevant congressional reports, and performed data analyses to address the objectives for our study, we identified problems with information in the BRD. The BRD is prepared annually by DOD based on a questionnaire that it sends to those of its laboratories and contractors who use animals for research or training purposes. We reviewed the BRD in two forms. First, we selectively reviewed a version that is publicly available on the Internet (at http://ocean.dtic.mil/basis/matris/www/biowww/sf). Second, DOD supplied us with an electronic file that also identified the animal use category (for example, research on infectious diseases) for the 805 projects in the 1996 database. We reviewed all the projects in three animal use categories involving medical research—biological defense, combat casualty care, and ionizing radiation. These categories comprise approximately 22 percent of the 805 projects.
We reviewed the summaries in these categories and compared the information contained in them with other sources, including DOD’s annual report to the Congress on its animal care and use programs for 1996. We interviewed officials from DOD’s Office of the Director of Defense Research and Engineering; the Armed Forces Radiobiology Research Institute; the Uniformed Services University of the Health Sciences; the Office of Naval Research; the Naval Medical Research Institute; and the Walter Reed Army Institute of Research in the Washington, D.C., area. We also interviewed officials from the U.S. Army Medical Research and Materiel Command in Frederick, Maryland; the Air Force Research Laboratory and the U.S. Army Clinical Investigations Regulatory Office in San Antonio, Texas; and the Army’s Landstuhl Regional Medical Center in Landstuhl, Germany. We reviewed DOD documents and reports relevant to animal care and use as well as related congressional reports. Our review was not based on a random sample of records from the BRD and, as a result, we have not drawn conclusions about the extent to which certain of our observations are present in the database as a whole. We conducted our review from October 1997 to October 1998 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to other interested congressional committees, the Secretary of Defense, and other interested parties. We will also make copies available to others upon request. Please contact us if you or your staff have questions concerning this report. Kwai-Cheung Chan can be reached at (202) 512-3652. Stephen Backhus can be reached at (202) 512-7101. Other major contributors are listed in appendix II. Bruce D.
Layton, Assistant Director; Jaqueline Arroyo, Senior Evaluator; and Greg Whitney, Evaluator. | Pursuant to a congressional request, GAO examined several issues related to the Department of Defense's (DOD) administration of its animal research programs, focusing on: (1) the extent to which DOD's research using animals addresses validated military objectives, does not unnecessarily duplicate work done elsewhere, and incorporates methods to reduce, replace, and refine the use of animals; and (2) problems with the accuracy of information in the Biomedical Research Database (BRD).
GAO noted that: (1) the BRD provides improved public access to information about DOD's use of animals in its research activities; (2) GAO found instances in which the information in the BRD was inaccurate, incomplete, and inconsistent, resulting in inadequate public disclosure; (3) specifically, the fiscal year 1996 BRD: (a) misstated the number of animal use projects because it omitted some projects that used animals and included others that did not involve animals; (b) did not include information, such as the numbers and types of animals used, that was identified in House Report 103-499; and (c) contained significant differences in specificity reported for the research projects; and (4) although GAO did not quantify the full extent of these problems, the problems it has identified suggest a need for DOD action to improve the accuracy and extent of the information in the database. |
As of December 2004, IRS classified approximately $7.7 billion in delinquent tax debt as potentially available for private debt collection—$5.5 billion in low-priority work and $2.2 billion that was not likely to be assigned to IRS employees for collection. In the American Jobs Creation Act of 2004, Congress authorized IRS to contract with private sector debt collection companies to collect federal tax debts. Based on this authority, IRS awarded contracts in March 2006 to three PCAs for tax collection services. IRS began referring taxpayer cases to PCAs in September 2006. Because of legal restrictions, PCAs can only take certain defined steps to collect tax debts—including locating taxpayers, requesting full payment of the tax debt or offering taxpayers installment agreements if full payments cannot be made, and obtaining financial information from taxpayers. PCAs have limited authorities and are not allowed to adjust the amount of tax debts or to use enforcement powers to collect the debts, which IRS believes are inherently governmental functions to be performed only by IRS employees. Additionally, PCAs do not actually collect the debts, but instruct taxpayers to forward payments to IRS. PCAs are paid on a fee-for-service basis ranging from 21 percent to 24 percent of the debt collected based on the balance of the account at the time of referral. IRS referred only those cases in which the taxpayer had not disputed the debt (e.g., taxpayers who filed Form 1040, 1040A, or 1040EZ and owe a balance) and for which a delinquency exists for one or more tax periods. Under the IRS policy and procedures guide, PCAs are required, within 10 calendar days of receiving delinquent account information from IRS, to send a taxpayer notification letter to an address provided by IRS. This letter states that the taxpayer’s account has been placed with an IRS contractor for collection.
According to IRS guidance, no sooner than 2 days after the PCA sends the notification letter, PCA employees may attempt to contact the taxpayer by telephone. However, to comply with 26 U.S.C. § 6103—which establishes a taxpayer’s right to privacy of tax information—PCA employees must not disclose any tax information until they are certain the person with whom they are speaking is the taxpayer. When a PCA employee makes a call to a taxpayer and reaches an answering machine, the only information the employee may leave on a recording is his or her name (no pseudonyms), company name, telephone number, the name of the taxpayer the PCA is attempting to reach, and the fact that the PCA is calling about a debt (i.e., rather than specifically a tax debt). In August 2006, IRS began working with a consulting company to develop and administer a taxpayer survey for PCA contacts. On November 27, 2006, the consulting company began administering the survey. Under guidance issued by IRS, PCAs were instructed to invite every right party contact to take the survey. If the contacts agreed to take the survey, they were transferred to the automated survey line. For the first 3 months of survey administration, the consulting company was required to issue overall satisfaction scores every month, followed by a quarterly report containing responses to all survey questions with information subdivided by each PCA. According to IRS, early in 2007 it did not exercise the option to renew one of the PCA contracts. As of the date of this testimony, only two of the PCAs we reviewed are now under contract with IRS. According to the PCAs, 37,030 tax debt cases were referred by IRS from September 2006 through February 2007. In addition, we were informed that the survey was not offered until November 27, 2006—almost 3 full months after PCAs began to contact taxpayers.
PCAs reported a total number of 13,630 right party contacts from September 2006 through February 2007, with 6,793 of these contacts made after the survey was available. Because PCAs began calling taxpayers in September 2006 before the survey was available, about 50 percent of all right party contacts identified during the period of our review were not eligible to take the survey. According to the consulting company, the validity of the survey was based on the key underlying assumption that all right party contacts would be offered a chance to take the survey. Although IRS instructed the PCAs to offer the survey to all right party contacts, we could not obtain information on how many of the 6,793 contacts were offered the survey. One PCA reported that it offered the survey to 999 right party contacts and made 2,694 right party contacts during this period. Officials at this PCA told us that from November 27, 2006, through February 13, 2007, taxpayers were randomly selected to take the survey using a structured method that offered the survey to every first or third contact during a specified time of day. The second PCA told us that it offered the survey to all right party contacts, but it did not keep any records to substantiate this claim. The third PCA told us that the survey was offered to all right party contacts, unless the PCA representative was aware that the contact was driving, if the contact had stated that he or she needed to get off the phone, or the contact said he or she was late for something. This PCA also did not have records regarding how many right party contacts were offered the survey, but an official noted that they were implementing procedures to track this information in the future. See table 1 for a summary of the PCA approaches to offering the survey during the period of our review. Beginning in early April 2007, IRS officials reemphasized the need for PCAs to offer the survey to all right party contacts and to keep records in this regard. 
These instructions have been incorporated in additional guidance for the PCAs. The consulting company that administered the survey provided us with records indicating that of those offered the survey, 1,572 right party contacts agreed to be transferred to the automated survey system from November 27, 2006, through February 28, 2007. Of these, records further indicate that 1,011 individuals completed the survey. A consulting company representative told us that the company was not aware, until several months after the survey was first offered, that the PCAs had used differing methodologies for offering the survey and that not all right party contacts were offered it. Table 2 provides summary information on the data we gathered from IRS, the PCAs, and the consulting company. We also made several related observations during the course of our work: PCAs were given some information about taxpayers with delinquent debt, including the taxpayers’ names, Social Security numbers, and last known addresses per IRS records. According to IRS, it did not provide PCAs with telephone numbers for the taxpayers as a matter of policy. As a result, in attempting to contact taxpayers by telephone, PCA representatives tried to determine the taxpayers’ phone numbers through electronic searches, for example, through the Lexis-Nexis database. PCAs told us that they made a total of 252,173 outbound connected telephone calls from September 2006 through February 2007 in an attempt to resolve the 37,030 cases referred by IRS. PCAs indicated that 89,781 calls—or about 36 percent of all connected outbound calls—resulted in messages left on answering machines, voice mail, or with third parties. In an attempt to make contact with the right party, PCAs may have contacted a substantial number of taxpayers who were not part of the 37,030 cases referred to PCAs by IRS—these taxpayers represent a potentially large group of incorrect contacts. Incorrect contacts were not offered the survey.
Examples of individuals who were not offered the survey would include individuals who refused to provide personal information to the PCAs and individuals who provided personal information but were not authenticated as part of the 37,030 IRS referrals. The overall satisfaction rating reported by the consulting company, and quoted by IRS, represents the answer to 1 question on a 20-question automated survey. The question was “Everything considered, whether you agree or disagree with the final outcome, rate your overall satisfaction with the service you received during this call.” Respondents were allowed to rate their satisfaction on a scale of one to five—with one being “very dissatisfied” and five being “very satisfied.” Of the survey questions, 15 related to customer satisfaction; the other questions were to gather more information about the respondents themselves. Those respondents who completed the entire survey had their results counted by the consulting company. Satisfaction ratings for other survey questions ranged from 81 percent (ease of understanding letters received from PCAs) to 98 percent (courtesy of PCA representatives). Officials at IRS and the consulting company confirmed that some right party contacts were offered (and may have taken) the survey more than once because they had multiple discussions with a PCA representative. Thus, some of the 1,011 right party contacts who completed the survey may represent duplicate respondents. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the Committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-7455 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. 
Key contributors to this testimony were John Ryan, Assistant Director; Bruce Causseaux, Jennifer Costello, Heather Hill, Wilfred Holloway, Jason Kelly, and Andrew McIntosh. | Every year the Internal Revenue Service (IRS) does not collect tens of billions of dollars in delinquent taxes. In 2004, Congress authorized IRS to use private collection agencies (PCA) to help collect some of these debts. To ensure that taxpayers are treated properly and that the program achieves the desired results, IRS contracted with a consulting company to perform a survey of right party contacts--those individuals who confirmed their identity and tax debt to PCAs over the telephone. The consulting company reported overall taxpayer satisfaction ratings from 94 to 96 percent for contacts made from November 2006 through February 2007. At the request of the Chairman, House Committee on Ways and Means, GAO attempted to obtain, for the period September 2006 through February 2007, the number of tax debt cases IRS referred to PCAs, right party contacts who were offered the taxpayer survey, and right party contacts who took the survey. GAO was also asked to report any other key observations related to the PCA program and taxpayer survey. To perform this work, GAO collected information and interviewed officials from IRS, the consulting group that administered the survey, and the PCAs. According to the PCAs, 37,030 tax debt cases were referred to them by IRS from September 2006 through February 2007. PCAs reported making contact with, and authenticating the identity of, 13,630 right party contacts.
Of these, 6,793 were eligible to take the taxpayer survey, which did not start until the end of November 2006. According to the consulting company, the validity of the survey was based on the key underlying assumption that all right party contacts would be offered a chance to take the survey. However, GAO could not determine the number of right party contacts offered the survey because not all PCAs kept records on who was offered it. Further, the three PCAs used different methods to determine which right party contacts were offered the survey. The consulting company that administered the survey told GAO that between November 27, 2006, and February 28, 2007, 1,572 of the individuals offered the survey agreed to take it, and 1,011 of them completed it. A consulting company representative told GAO that the company was not aware, until several months after the survey was first offered, that the PCAs used differing methodologies for offering the survey and that not all right party contacts were offered an opportunity to complete the survey. According to IRS, beginning in April 2007, PCAs began offering the survey to all right party contacts. Among other key observations, IRS advised GAO that it did not provide the PCAs with taxpayer telephone contact information for referred cases. As a result, in attempting to contact taxpayers by telephone, PCA representatives tried to determine the taxpayers' phone numbers through electronic searches. PCA representatives told GAO that they made a total of 252,173 outbound connected telephone calls from September 2006 through February 2007 in an attempt to make contact with the 37,030 tax debt cases IRS referred. PCAs did not offer the survey to incorrect contacts, such as individuals who provided personal information but were not authenticated as right party contacts. |
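The key percentages quoted in the testimony above follow directly from the reported counts. As a quick arithmetic check (illustrative only, and not part of GAO's or the consulting company's methodology), they can be recomputed:

```python
# Counts reported in the GAO testimony (Sept. 2006 - Feb. 2007).
right_party_contacts = 13_630   # identities authenticated by PCAs
eligible_for_survey = 6_793     # contacts made after the survey began (Nov. 27, 2006)
connected_calls = 252_173       # outbound connected telephone calls
calls_with_messages = 89_781    # calls ending in a message (machine, voice mail, third party)

# Share of right party contacts reached before the survey existed,
# and thus never eligible to take it (testimony: "about 50 percent").
not_eligible_share = (right_party_contacts - eligible_for_survey) / right_party_contacts

# Share of connected outbound calls that only resulted in a message
# (testimony: "about 36 percent").
message_share = calls_with_messages / connected_calls

print(f"{not_eligible_share:.0%}")  # ~50%
print(f"{message_share:.0%}")       # ~36%
```

Both figures match the rounded percentages GAO reported, which is one way a reader can confirm the internal consistency of the counts in tables 1 and 2.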
APA outlines the process for informal rulemaking, commonly referred to as notice-and-comment rulemaking. APA includes six broad categorical exceptions to this process, including, for example, rules dealing with agency organization and procedure (see sidebar). APA also permits agencies to forgo notice and comment when they find "good cause," that is, when notice and comment would be impracticable, unnecessary, or contrary to the public interest under the circumstances presented in the rules. 5 U.S.C. § 553(b)(B). Agencies may also find "good cause" to exempt a rule from APA's requirement for a 30-day delay of effective date. 5 U.S.C. § 553(d)(3). However, agencies' use of that good cause exception was not within the scope of this review. In other cases, statutes, such as the 2008 Farm Bill, have authorized or required agencies to issue rules without notice and comment. When agencies invoke any of these exceptions, they are not required to request comments from the public or conduct certain regulatory analyses. CRA, which applies to all agencies, distinguishes between two types of rules, major and nonmajor. CRA defines a "major" rule as one that, among other things, has resulted in or is likely to result in an annual effect on the economy of $100 million or more. Throughout this report, we present results using the CRA distinction between major and nonmajor rules. The Office of Information and Regulatory Affairs (OIRA) within OMB is responsible for determining whether a rule is major. OIRA also is responsible for providing meaningful guidance and oversight so that each agency's regulations are consistent with applicable law, the President's priorities, and the principles set forth in executive orders, and that decisions made by one agency do not conflict with the policies or actions taken or planned by another agency. Under Executive Order 12866 (reaffirmed by Executive Order 13563), OIRA reviews significant proposed and final rules from agencies, other than independent regulatory agencies, before they are published in the Federal Register. OIRA also provides guidance to agencies on regulatory requirements.
For example, on August 15, 2011, OIRA issued a primer instructing agencies how best to conduct a regulatory impact analysis. In addition to OIRA’s previously mentioned responsibilities, according to Executive Order 12866, OIRA is to be the “repository of expertise concerning regulatory issues.” Executive Order 12866, other executive orders, and OIRA guidance have all reiterated the importance of public participation and regulatory analysis in rulemaking. During calendar years 2003 through 2010, agencies published 568 major rules and about 30,000 nonmajor rules. As shown in figure 1, agencies published about 35 percent of major rules and about 44 percent of nonmajor rules without an NPRM during those years. Examples of major rules without an NPRM include a May 2010 Department of the Treasury final rule prohibiting certain consumer credit practices, for which the agency invoked the good cause exception, and a September 2008 Department of Health and Human Services (HHS) notice that announced Medicare cost-sharing amounts, for which the agency cited an exception in the Social Security Act and good cause. As we observed in our 1998 report, many nonmajor rules without an NPRM appeared to involve routine, administrative, or technical issues. Similar examples of nonmajor rules without an NPRM that we identified during this review included a January 2007 Department of Homeland Security (DHS) temporary final rule changing drawbridge operation hours for certain bridges in Florida, and a July 2009 Federal Election Commission rule allowing a committee that is being audited by the Commission to have a hearing prior to the Commission’s adoption of a final audit report. As illustrated in figure 2, the percentage of nonmajor final rules without an NPRM was very consistent across the 8-year period we reviewed, varying only slightly among individual years, but the percentage of major rules without an NPRM was less consistent.
In particular, from 2008 to 2009, the percentage of major rules without an NPRM increased from 26 percent to 40 percent. Agencies issued the largest numbers of major rules without an NPRM in 2009 and 2010 (34 in each year), though the percentage was higher in 2009 than in 2010. (See app. II for more detailed results of the analyses we conducted during this review, including numbers, percentages, and confidence intervals.) Two agencies, HHS and the Department of Agriculture (USDA), published 62 (plus or minus 11) percent of major rules in our sample without an NPRM, as shown in figure 3. Other agencies accounted for much lower percentages of the total, all 7 percent or less. The Environmental Protection Agency (EPA) issued 30 major rules from 2003 through 2010, but none of these were issued without an NPRM. For the nonmajor rules, the Department of Transportation (DOT), Department of Commerce, DHS, and EPA together accounted for almost two-thirds of nonmajor rules without an NPRM. All other agencies accounted for 7 percent or less of the total. The agencies that published rules in our sample used interim rulemaking for a substantial portion of final major rules without an NPRM. As noted earlier, an interim rule becomes effective without an NPRM, but the public generally may provide comments after the rule’s issuance. Across the 8-year time period, agencies issued 47 percent of all major final rules and 8 percent of all nonmajor rules without an NPRM as interim rules. The percentage of major rules without an NPRM that used interim rulemaking increased from 2007 through 2010 but was more variable for nonmajor rules (see fig. 4). Appendix III provides more information on the frequency of agencies’ use of interim rulemaking in general. Across the 554 rules in our sample without an NPRM, agencies used 109 distinct terms, many of which had only slight wording variations within a broad category, to identify the rulemaking action.
The majority of these terms were variations of five broad categories: final rules, interim rules, temporary rules, direct final rules, and notices. In practice, however, there may be little distinction between interim rules and certain other rules without an NPRM that were described using different terminology. For example, a “final rule, request for comments” and an “interim rule with request for comments” both provide an opportunity for the public to comment only after the rule has been published; these rules are in essence the same type of rule. As a result of the inconsistent terminology, it would be difficult for Congress to enact legislation on, or for the public to easily identify, rules without an NPRM based on what agencies call those rules. For example, in legislation to revise rulemaking procedures being considered by the 112th Congress, certain provisions would apply when agencies, for good cause, issue “interim rules.” If the intent is to address all rules using the good cause exception, this proposed legislation would not achieve that goal since our analysis showed that not all rules for which agencies claimed good cause were called “interim rules.” To facilitate public participation in the rulemaking process, OMB officials told us that they are working with the Office of the Federal Register to standardize terminology for agencies to use when publishing rules in the Federal Register. The agencies that published rules in our sample claimed the good cause exception in 77 (plus or minus 11) percent of major rules and 61 (plus or minus 10) percent of nonmajor rules without an NPRM, as shown in figure 5 below. 74 Fed. Reg. 20,210 (May 1, 2009); 75 Fed. Reg. 69,348 (Nov. 12, 2010). One such rule applied to federal waters; the issuing department published it in light of the Deepwater Horizon oil spill in the Gulf of Mexico on April 20, 2010. For the rules in our sample, agencies most often said that issuing an NPRM would be contrary to the public interest, but they also frequently cited multiple grounds for invoking the good cause exception (see table 1). They cited multiple reasons more often for major rules than for nonmajor rules—63 (plus or minus 14) percent of major rules and 44 (plus or minus) percent of the nonmajor rules. Ninety-two of the 123 major rules without an NPRM in our sample invoked the good cause exception. In examining these 92 rules we identified five primary categories of explanations (more than one category sometimes applies to a given rule): a law imposed a deadline either requiring the agency to issue a rule or requiring a program to be implemented by a date that agencies claimed would provide insufficient time to provide prior notice and comment—36 rules; a law prescribed the content of the rule issued—31 rules; the agency said it was responding to an emergency—19 rules; the rule implemented technical changes—5 rules; and all other explanations (for example, an agency issued a final rule without an NPRM in response to a court decision)—14 rules. After good cause, agencies most often cited specific exceptions in statutes other than APA. Such exceptions were cited in 9 (plus or minus 4) percent of nonmajor rules and in 34 (plus or minus 13) percent of all major rules without an NPRM. More specifically, in 38 of the 123 major rules in our sample, we identified 18 different statutory authorities that either required or authorized agencies to issue rules without notice and comment. For example, the 2008 Farm Bill required the issuance of final rules to implement provisions of the law without prior notice and comment. HHS, the Department of Labor (DOL), and the Department of the Treasury issued several joint rules to implement provisions in the Patient Protection and Affordable Care Act as interim rules.
In another type of example, a provision of the Social Security Act provides an exception to notice-and-comment rulemaking when a statute establishes a specific deadline for implementation of a rule and the deadline is less than 150 days after its enactment. This provision allowed HHS to issue several Medicare rules without an NPRM because the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 required some rules to be issued within shorter time frames than 150 days. Table 2 provides additional information on all of the statutory exceptions that agencies cited for major rules in our sample. As mentioned earlier, in addition to the good cause exception, APA includes six broad categorical exceptions to notice-and-comment rulemaking. Agencies that published rules in our sample invoked these broad categorical exceptions infrequently. They did so in 11 (plus or minus 5) percent of nonmajor rules (most often for rules of agency organization, procedure, or practice), and in 8 (plus or minus 11) percent of major rules (most often citing the exception for rules on public property, loans, grants, benefits, or contracts). These exceptions were cited in 8 of the 123 major rules without an NPRM in our sample. The following are examples of rules in which agencies cited the six APA categorical exceptions: Military and foreign affairs—cited by a Department of Commerce 2007 final rule that made several corrections to the Export Administration Regulations regarding Libya and terrorist-supporting countries. Agency management or personnel—cited by a General Services Administration 2005 rule regarding the Federal Travel Regulation to clarify various provisions on temporary duty travel. Public property, loans, grants, benefits, or contracts—cited by the Board of Directors of the HOPE for Homeowners Program in a rule establishing a temporary Federal Housing Administration program providing mortgage insurance for refinanced loans made to avoid foreclosure. 
Interpretative rules—cited by a DOL 2008 rule revising regulations implementing the nondiscrimination and affirmative action provisions of the Vietnam Era Veterans’ Readjustment Assistance Act of 1974, as amended. According to DOL, it published this rule to codify its interpretation of a mandatory job listing requirement. General statements of policy—cited by an HHS 2010 rule revising standard federal rates and the extension of wage indexes under the Patient Protection and Affordable Care Act for Medicare payments in conformance with congressional policy. Agency organization, procedure, or practice—cited by a DOL 2007 rule amending Occupational Safety and Health Administration procedures for handling retaliation complaints. In sum, our review of reasons agencies gave for issuing major rules in our sample without an NPRM showed that they cited grounds relating to statutes for most of the 123 rules we analyzed. Specifically, we found that in 84 of the 123 major rules without an NPRM in our sample, agencies described circumstances in which a statute: (1) required or authorized them to issue the rule without an NPRM, (2) prescribed the content of the rule, or (3) set a deadline for a rule or program which the agency stated did not allow sufficient time to issue an NPRM. About 70 percent of the 123 major rules in our sample involved, at least in part, the distribution of federal payments to the public, such as disaster assistance and reimbursement for health care costs. By foregoing notice and comment in these cases, agencies expedited the flow of funds to beneficiaries. Of the 123 major rules without an NPRM that we reviewed, 113 provided some estimates of economic effects, such as potential costs or benefits.
Agencies do this because Executive Order 12866 directs non-independent regulatory agencies to assess economic effects, including costs and benefits, for all significant rules, whether or not those rules are issued with an NPRM. However, according to OMB officials, the requirements of the Executive Order apply only “as practicable.” Of the remaining 10 of the 123 rules that we sampled, 5 provided some economic information but did not include estimates of the costs or benefits, and 5 were issued by independent agencies, which are not required to comply with the Executive Order. Costs and benefits include both quantifiable measures as well as qualitative effects that may be difficult to quantify. The information provided on costs and benefits in the 123 rules we reviewed varied, and included both quantitative and qualitative information. Agencies gave quantitative measures of effects for 104 rules and qualitative information on effects for 44 rules included in our sample. Agencies quantified costs for 50 rules, benefits in 10, and transfers in 86. Many of these rules involved transfer payments in whole or in part; in these cases, agencies typically reported only the estimated budgetary impacts of transfer payments. Appendix IV provides summary information about each of the 123 major rules without an NPRM, including the potential benefits, costs, and other economic effects identified by the agencies. Additionally, rules that have a significant effect on the economy, whether or not issued with an NPRM, are subject to review by OIRA. Agencies must submit detailed economic analyses of the costs and benefits of all reasonably feasible regulatory alternatives to OIRA for review. For 55 of the 123 major rules we examined, the rule stated that the agency had considered regulatory alternatives. For example, in a rule on the Conservation Stewardship Program, USDA identified and provided analyses of five policy options, as well as the option of no program. Of the 123 major rules we examined, all but 10 were subject to OIRA review.
The 10 rules not subject to OIRA review were issued by independent regulatory agencies. Agencies may have considered alternatives but did not summarize their findings in the published final rules, so there may be other rules among the 123 for which agencies considered regulatory alternatives. Of the 123 major rules without an NPRM in our sample, we found that agencies requested comments for 77 rules where they had discretion over at least part of the regulation’s content. Agencies sometimes solicit public comments through the Federal Register on such rules, though they are not required to do so. If an agency solicits comments in these cases, the public’s opportunity to comment does not occur in advance of the rule’s issuance or, in some cases, the effective date for complying with the rule’s provisions. Major rules in which the agency has some discretion may benefit from consideration of public comments, because the public could add value by identifying issues, information, and analyses that the agency might not have initially considered. However, agencies were not obligated to respond to comments received on these rules, a key difference from comments received on proposed rules when those rules are finalized. Typically, agency responses to comments received from the public are published in the Federal Register when a follow-up rule is issued. We analyzed each of these 77 major rules to determine whether, by the end of June 2012, agencies had published a follow-up rule in the Federal Register and, if so, whether the agencies reported receiving comments and making changes to the original rules. In 26 of the 77 rules without an NPRM in our sample where the agency had discretion, the agency did not publish a follow-up rule or respond to any comments received (see figure 6).
We examined publicly available information and found that the public submitted comments for at least 15 of these 26 rules but the agencies did not respond to them. Each of these 26 rules has significant economic effects, with some of these rules having an impact of a billion dollars a year or more. These rules also cover important issues ranging from national health care policies to manufacturing incentive programs. For example, in one of the 26 rules, an agency defined a pre-existing condition to implement the Patient Protection and Affordable Care Act, and sought public comment. The agency received 4,627 comments, but has not published a response to them. When agencies do not publish their response to any comments received, the public record is incomplete. The public does not know whether the agency considered the comments, accepted or rejected the views or evidence presented, or if the agency intends to finalize and potentially change the rule. As the courts have recognized, the opportunity to comment is meaningless unless the agency responds to significant points raised by the public. We found that when agencies did respond to public comments they often made changes to the rules. In the 51 major rules without NPRMs in our sample for which the agencies had discretion and requested comments, the agencies did issue a follow-up rule, and our analysis of those cases illustrates the potential benefits of follow-up efforts. The agencies reported receiving public comments on all but 3 of these 51 major rules, which indicates that the public usually takes advantage of the opportunity to comment on rules without an NPRM following publication. In addition, we found that agencies made changes to the text of 31 of the 51 rules, most often in response to public comments. For example, DHS finalized a September 2009 interim rule on air cargo screening in August 2011.
In response to public comments, the agency removed two provisions of the original interim rule regarding air cargo screening requirements. These changes reduced the costs of the rule. In a similar example, in June 2011, the Department of the Treasury, DOL, and HHS followed up on a jointly issued July 2010 rule on group health plans and health insurance issuers. The agencies stated that the amendments in the subsequent rule were being made in response to public comments received on the prior rule and that the primary effect of the amendments was to reduce the costs of compliance. Over the years, the Administrative Conference of the United States (ACUS), an advisory agency in administrative law and procedure, has also highlighted the potential benefits of following up on final rules issued without an NPRM. In particular, to ensure public participation and limit undesirable effects regarding final rules issued without notice and comment, ACUS recommended that agencies request comments whenever they invoke the “impracticable” or “contrary to the public interest” reasons under the good cause exemption and publish a responsive statement on significant and relevant issues raised by such comments. ACUS noted that in such cases public comments could provide both useful information to the agency and enhanced public acceptance of the rule. Although this recommendation has not been implemented, ACUS continues to support it in an effort to improve transparency and public participation in rulemaking. Agencies issue thousands of final rules each year that affect many aspects of citizens’ lives. The rulemaking procedures that agencies follow balance the public’s right to be involved in the rulemaking process against agencies’ need to carry out their missions in an efficient and effective manner. When rulemaking is expedited, there is a trade-off between obtaining the benefits of advance notice and comment and the goal of issuing the rule quickly.
The consequences of such trade-offs could be most significant for major rules issued without an NPRM, given their substantial annual effects on society. Agencies often lessened this trade-off by requesting public comments on rules issued without an NPRM for which they had some discretion. This is a positive practice that promotes the benefits of public participation. However, if agencies and the public are to fully benefit from the process of public comments, what matters is not simply providing an opportunity for comment but also public understanding of whether comments were considered. For more than a third of the major rules published without an NPRM between 2003 and 2010 where agencies had discretion and requested comments, the agencies did not respond to comments received. Some of these rules related to significant national issues such as health care. When agencies solicit comments but leave unclear whether they were considered, the public record is incomplete. Though such follow-up is not required, agencies may be missing an opportunity to fully obtain for themselves, and provide to the public, the benefits of public participation. Further, agencies may create the perception that they are making final decisions about the substance of major rules without considering data, views, or arguments submitted in public comments. The benefit of follow-up efforts is demonstrated by our finding that, when agencies did issue follow-up rules, they often made substantive changes to the original rules, usually in response to public comments.
To better balance the benefits of expedited rulemaking procedures with the benefits of public comments that are typically part of regular notice-and-comment rulemakings, and improve the quality and transparency of rulemaking records, we recommend that the Director of OMB, in consultation with the Chairman of ACUS, issue guidance to encourage agencies to respond to comments on final major rules, for which the agency has discretion, that are issued without a prior notice of proposed rulemaking. We provided a draft of this report to the Director of OMB and the Chairman of ACUS for their review and comment. We received written comments on the draft report from OMB, which are reprinted in Appendix V. OMB also provided a technical comment, which we incorporated as appropriate. ACUS provided technical comments, which we also incorporated as appropriate. OMB disagreed with our recommendation to issue guidance to encourage agencies to respond to comments on final major rules, for which the agency has discretion, that are issued without a prior notice of proposed rulemaking. OMB stated that it does not believe it is necessary to issue guidance on this topic at this time. In its response, OMB reiterated the value of public participation during the rulemaking process and noted that it routinely encourages agencies to establish procedures to consider public comments received on interim final rules. However, OMB believes that the timing and extent of an agency’s responses is a discretionary matter that an agency must consider in the context of the nature and substance of the particular rulemaking, as well as the particular agency’s resource constraints and competing priorities.
OMB further stated that this case-specific approach is generally appropriate—especially given the often unique circumstances faced by agencies issuing rules without a prior notice of proposed rulemaking—and that it is not aware of compelling evidence that a more general, undiscriminating policy, set out in guidance, would offer substantial benefits. We continue to believe that enhanced guidance would improve the quality and transparency of rulemaking procedures. We recognize that OMB encourages agencies to establish procedures to consider public comments, but believe that OMB needs to go further to encourage all agencies to respond to public comments on the record. We believe that there is compelling evidence that such guidance would offer substantial benefits. ACUS identified this as an issue of concern in 1995, and our current review confirmed that agencies still do not always follow up on rules issued without an NPRM. For more than a third of the major rules published without an NPRM between 2003 and 2010 where agencies had discretion and requested comments, we found that the agencies did not respond to comments received. As our evidence demonstrated, some of these rules had economic impacts in the billions of dollars, attracted over 4,000 comments, and addressed significant national issues, such as health care. When it is unclear whether agencies considered comments, rulemaking is less transparent to the public, and, as courts have recognized, the opportunity to comment is meaningless unless the agency responds to significant points raised by the public. Further, we disagree with OMB’s characterization of the scope of our recommendation. We are not suggesting an undiscriminating policy; instead, we are recommending that OMB work with ACUS to develop appropriate guidance. Such guidance could maintain the flexibility for agencies that OMB believes is necessary. Also, following up on rules issued without an NPRM is not necessarily resource intensive.
For example, an agency could simply post a summary response to public comments on regulations.gov. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Director of OMB, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Melissa Emrey-Arras at (617) 788-0534 or emreyarrasm@gao.gov, or Robert Cramer at (202) 512-7227 or cramerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in Appendix VI. For final rules published during calendar years 2003 through 2010, the objectives of this report were to: 1. Identify how often agencies issued final rules, including interim rules, without a notice of proposed rulemaking (NPRM), whether this changed over time, and which agencies most often issued such rules. 2. Identify which exceptions agencies used when issuing final rules without an NPRM. 3. Assess whether agencies, when issuing final major rules without an NPRM, (a) provided information on the rule’s economic effects, (b) solicited public comments, and (c) responded to public comments. To address each of these objectives, we selected and reviewed a representative sample of final regulatory actions published during calendar years 2003 through 2010 to estimate the prevalence of certain characteristics in this population. We used the Government Printing Office’s (GPO) Federal Digital System database on the Federal Register to compile a list of 30,583 final regulatory actions published in the Rules and Regulations section during those years.
We defined our units of analysis as “actions” rather than “final rules,” because not all of the individual documents published in the Rules and Regulations section of the Federal Register are rules (e.g., some extended comment periods or made editorial corrections). Further, one published action may include multiple rules, and there is no way to determine the total number of rules published short of reviewing each action. However, for simplicity of presentation, we use the term “final rules” instead of “final regulatory actions” throughout this report. We supplemented information from GPO’s Federal Register database with information from our database on rules submitted to us under the CRA. We tested the reliability of the databases used to generate our list of all final rules by reviewing related documentation, interviewing knowledgeable agency officials, testing for missing data, and tracing a sample of entries to source documents. We concluded that the data were sufficiently reliable for our purposes. From this population of 30,583 final rules published in the Rules and Regulations section from 2003 through 2010, we selected a generalizable stratified sample of 1,311 final rules. To ensure that we reviewed the rules expected to have the most significant effects, we selected all major rules, as identified under the CRA, for calendar years 2007 through 2010. The remaining rules during this period were stratified and sampled by year (2007 through 2010) and by whether they contained the term “interim” in the text of the Federal Register action. We also included rules for calendar years 2003-2006 in our sample. For this period, we grouped the rules into three additional strata: major rules, “interim” rules, and other rules. Table 3 summarizes the population and sample size by stratum. Based on this sample, we are able to estimate characteristics of the population of all final rules published in the Rules and Regulations section of the Federal Register.
To ensure that all the rules expected to have the most significant effects were reviewed, we also included an additional 27 major rules that were not published in the Rules and Regulations section, but instead were published as Notices (bringing the total number of rules we reviewed to 1,338). For this report, when we present estimates for all major rules, we are projecting to the major rules published in both the Rules and Regulations section and those published in Notices. All other estimates presented in this report are estimates of the population of rules published in the Rules and Regulations section for 2003 through 2010. Our sample contained rules by 52 different agencies, including every cabinet-level agency issuing regulations and every agency that published a major rule during the 8-year period. Because this is a probability sample, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (for example, plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. We reviewed the published text of all selected final rules to determine if they had been published in whole or in part without NPRMs (referred to in the rest of this report simply as rules without NPRMs). Our analysis included rules where only a part of the rule was issued without an NPRM to ensure that our results reflected all instances when agencies cited an exception to notice and comment.
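The stratified-sample estimation described above can be sketched in a few lines of code. This is a simplified illustration only: the stratum sizes, sample counts, and outcome counts below are hypothetical, and GAO's actual estimates reflect the full sample design summarized in table 3.

```python
import math

# Hypothetical strata: (population size N_h, sample size n_h,
# number of sampled rules in the stratum issued without an NPRM).
strata = [
    (500, 200, 70),     # e.g., a "major rules" stratum
    (30083, 400, 180),  # e.g., an "other rules" stratum
]

N = sum(N_h for N_h, _, _ in strata)  # total population size

# Stratified estimate of the population proportion: each stratum's
# sample proportion weighted by the stratum's share of the population.
p_hat = sum((N_h / N) * (x_h / n_h) for N_h, n_h, x_h in strata)

# Variance of the stratified estimator, with a finite population
# correction (1 - n_h/N_h) applied within each stratum.
var = sum(
    (N_h / N) ** 2 * (1 - n_h / N_h) * p_h * (1 - p_h) / (n_h - 1)
    for N_h, n_h, x_h in strata
    for p_h in [x_h / n_h]
)

# 95 percent confidence interval: estimate plus or minus 1.96 standard errors.
margin = 1.96 * math.sqrt(var)
print(f"estimate: {p_hat:.3f}, 95% CI: ({p_hat - margin:.3f}, {p_hat + margin:.3f})")
```

The "plus or minus" figures quoted throughout the report correspond to the `margin` term: half the width of an interval that would contain the true population value for 95 percent of the samples that could have been drawn.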
To address our third objective, we focused primarily on whether agencies issuing major rules without an NPRM: (a) provided information on the economic effects of the rules, (b) solicited public comments when issuing final rules without an NPRM, and (c) responded to comments received on major rules without an NPRM by June 30, 2012. We used standardized data collection instruments and applied criteria from the Administrative Procedure Act, Regulatory Flexibility Act, Unfunded Mandates Reform Act, and Executive Order 12866 to collect and analyze information to address each key question. If an action in our sample was not itself a rule but was related to a rulemaking (e.g., if it extended a comment period), we used the underlying rule to address our questions if sufficient information was provided to identify the underlying rule. In addition to using our sample to generate estimates for the entire population on these objectives, we also did additional content analyses of the major rules without NPRMs in our sample to help address the objectives. Unlike the generalizable results from our reviews of the broader sample of rules, the results of these content analyses are not generalizable to the entire population. They only represent the facts and circumstances of the specific rules we reviewed. We also met with officials from the Office of Management and Budget (OMB) and the Administrative Conference of the United States (ACUS) who are knowledgeable about federal regulatory and administrative law procedures. We did not assess the agencies’ decisions regarding claims of good cause and other exceptions or their determinations regarding the effects of their rules; instead, we are providing information about what the agencies published in the Federal Register as the basis for their findings. Further, we limited our analysis to only what agencies specifically stated in Federal Register notices.
For example, we counted a particular exception only if the agency specifically cited it or quoted from part of APA’s description. We did not assume that an agency meant to claim a particular exemption based on the general content of the rule. Therefore, our results may understate the frequency with which APA’s good cause and categorical exceptions applied. We conducted this performance audit from June 2011 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Tables 4 and 5 in this appendix provide more detailed information on the results of various analyses we completed for this report, including the upper and lower bounds of confidence intervals for estimated values, as appropriate. Figures 7 through 9 in this appendix provide more detailed information on how agencies addressed the RFA, UMRA, and Executive Order 12866 in rules reported by agencies under the Congressional Review Act (CRA), including confidence intervals for our estimates. RFA applies to all agencies, but the requirements under RFA to prepare initial and final regulatory flexibility analyses apply only to rules for which an agency is required to publish an NPRM. Nevertheless, agencies often discussed RFA in their rules without NPRMs. UMRA applies to agencies other than independent regulatory agencies, and UMRA’s requirements apply only to rules for which an agency published an NPRM. Nevertheless, agencies often discussed UMRA in their rules without NPRMs.
Executive Order 12866 procedural and analytical requirements apply only to significant rules, and the requirement to provide the underlying analysis of benefits and costs applies only to rules that are economically significant (generally those with an annual impact of $100 million or more). The executive order does not apply to independent regulatory agencies. Interim final rules are rules that agencies often, but not always, issue without an NPRM, providing the public an opportunity to comment after the rule has taken effect. APA does not address interim rulemaking, although in 1995, ACUS recommended that agencies adopt a form of interim rulemaking, and Congress has expressly authorized this procedure in legislation. In our 1998 report on final actions without NPRMs, we estimated that agencies published about 400 interim final rules per year from 1992 to 1997 (out of approximately 4,000 total final rules each of those years). Overall, agencies appeared to use interim rulemaking infrequently. Between 2003 and 2010, agencies published about 4 (plus or minus 2) percent of nonmajor rules as interim rules. There was relatively little variation across the individual years in the percentage of nonmajor interim rules (see fig. 14). However, for major rules, we found that 15 percent (actual) of all major rules from 2003 through 2010 were issued as interim rules. The number of major interim rules increased starting in 2008 and was highest in 2010, when 23 of 100 major rules either were interim rules or included an interim rule among other final rules. We reviewed a total of 120 interim rules within our sample, 56 of which were major and 64 nonmajor. A few of the interim rules we reviewed (17 of the 120) had prior proposed rules.
Three agencies issued more than half of the major interim rules we reviewed, with the Department of Health and Human Services (HHS) accounting for 23 percent, the Department of Agriculture (USDA) 18 percent, and the Department of Homeland Security (DHS) 11 percent. All other agencies in our sample accounted for less than 7 percent each of the total major interim rules we reviewed. For the nonmajor interim rules, four agencies accounted for approximately two-thirds of these rules, with USDA accounting for 22 percent, DHS 22 percent, the Department of Defense (DOD) 14 percent, and HHS 11 percent. All others accounted for less than 5 percent each of the total. There may be other agencies that issued interim rules that did not appear in our sample. Table 6 provides the detailed results of our analysis, by time period, with confidence intervals. There is no general requirement for agencies to finalize interim rules, but we did a “look forward” analysis for each of the 120 interim rules in our sample to determine how many of those rules agencies had subsequently finalized and, if so, whether the agencies reported receiving comments or making changes to those rules. By the end of June 2012, agencies in our sample had finalized almost half of the interim rules—29 of the 56 major interim rules in our sample and 37 of the 64 nonmajor interim rules. It took these agencies on average 452 days (about 1 year, 3 months) after publication of the original interim rules to finalize the rules, so agencies may eventually finalize additional interim rules from our sample. Agencies in our sample frequently reported receiving comments on and making changes to interim rules that they finalized, especially in the case of major interim rules. These agencies received public comments on all but 2 of the major interim rules that were subsequently finalized.
In addition, agencies in our sample made changes to the text of 15 of the 29 major interim rules when they were finalized, most often in response to public comments. The following tables provide information about each of the 123 major rules issued without an NPRM from 2003 through 2010 that we reviewed for this report. Rules for which agencies waived NPRMs for only part of the rule are designated with (P) where we identify the exceptions to NPRMs that were cited. The narratives on each rule are summarized primarily from the relevant major rule reports that we submitted to Congress under CRA. CRA requires us to report on the issuing agency’s compliance with procedural steps required by various acts and executive orders governing the rulemaking process. Links to those reports are provided in the Rule column. In some cases where the published rule contained other relevant summary information on estimated economic effects that was not reflected in the major rule report, we added that information to the summary. The entries are sorted by agency and presented chronologically by the published date of the rule. Joint rules issued by more than one agency are listed at the end of each table. Because of differences in methods and assumptions (for example, discount rates, inflation), the agencies’ estimates may not be comparable. In addition to the individuals named above, Tom Beall, Tim Bober, Sara Daleski, Janet Dolen, Clifton G. Douglas Jr., Denise Fantone, Rob Gebhart, Tim Guinane, Lois Hanshaw, Shirley Jones, Andrea Levine, Donna Miller, Mark Ramage, Beverly Ross, Cynthia Saunders, Wesley Sholtes, Lou V.B. Smith, Andrew Stephens, and Sabrina Streagle made key contributions to this report.

Agencies publish thousands of rules each year, with significant benefits and costs. Before issuing a final rule, agencies are generally required to publish an NPRM in the Federal Register. Agencies must then respond to public comments when issuing final rules.
Agencies may use exceptions in certain circumstances to forgo this NPRM process to expedite rulemaking. The Office of Management and Budget (OMB) has authority to provide guidance on regulatory issues. GAO was asked to provide information on the rulemaking process. This report addresses (1) how often agencies issued final rules without an NPRM; (2) which exceptions agencies used to do this; and (3) whether agencies took certain actions when issuing major rules without an NPRM, including voluntarily requesting and responding to public comments. GAO reviewed a generalizable random sample of 1,338 final rules published during calendar years 2003 through 2010. The sample contained rules by 52 agencies, including all cabinet departments issuing regulations. GAO completed more detailed analyses of 123 major rules without an NPRM, including every such rule published from 2007 through 2010, to obtain additional information to answer the objectives. Agencies did not publish a notice of proposed rulemaking (NPRM), enabling the public to comment on a proposed rule, for about 35 percent of major rules and about 44 percent of nonmajor rules published during 2003 through 2010. A major rule has significant economic impact and may, for example, have an annual effect on the economy of $100 million or more. Agencies published a total of 568 major rules from 2003 through 2010. Agencies also published about 30,000 nonmajor rules during this period, which have less economic significance and can involve routine administrative issues. Agencies frequently cited the "good cause" exception and other statutory exceptions for publishing final rules without an NPRM. Agencies in GAO's sample used the "good cause" exception for 77 percent of major rules and 61 percent of nonmajor rules published without an NPRM. Agencies may use the good cause exception when they find that notice and comment procedures are "impracticable, unnecessary, or contrary to the public interest." 
In practice, agencies may find an NPRM "impracticable" when the rule must be issued by a statutory deadline, "unnecessary" when the rule pertains to technical corrections, and "contrary to the public interest" in an emergency situation. To a lesser extent, agencies also used other statutory exceptions to issue a rule without an NPRM. For example, in 84 of the 123 major rules that GAO analyzed, agencies described circumstances in which a statute: (1) either required or authorized them to issue the rule without an NPRM, (2) prescribed the content of the rule, or (3) set a deadline for a rule or program which the agency stated did not allow sufficient time to issue an NPRM. GAO found that agencies, though not required, often requested comments on major final rules issued without an NPRM, but they did not always respond to the comments received. Agencies may solicit comments through the Federal Register when publishing a final rule without an NPRM, but the public does not have an opportunity to comment before the rule's issuance, nor is the agency obligated to respond to comments it has received. For example, agencies requested comments on 77 of the 123 major rules issued without an NPRM in GAO's sample. The agencies did not issue a follow-up rule or respond to comments on 26 of these 77 rules. This is a missed opportunity, because GAO found that when agencies did respond to public comments they often made changes to improve the rules. In addition, each of these 26 rules is economically significant and some of these rules have an impact of a billion dollars a year or more. These rules also cover important issues ranging from national health care policies to manufacturing incentive programs. For example, in one of the 26 rules, an agency defined a pre-existing condition to implement the Patient Protection and Affordable Care Act and sought public comment. The agency received 4,627 comments, but has not published a response to them. 
When agencies do not respond to the comments they requested, the public does not know whether the agency considered their comments, or if it intends to change the rule. As the courts have recognized, the opportunity to comment is meaningless unless the agency responds to significant points raised by the public. GAO recommends that OMB issue guidance to encourage agencies to respond to comments on final major rules, for which the agency has discretion, that are issued without a prior NPRM. OMB disagreed that guidance would offer substantial benefits. GAO believes the recommendation remains valid, as further discussed in the report.
As part of Congress’ efforts to improve the availability of information on and management of services acquisitions, it enacted Section 801 of the National Defense Authorization Act for Fiscal Year 2002, which required the Secretary of Defense to establish a data-collection system to provide management information with regard to each purchase of services by a military department or defense agency. For example, the information to be provided includes the services purchased, the total dollar amount of the purchase, and the extent of competition provided in making the purchase, among other things. In 2008, Congress amended this provision in section 807 of the National Defense Authorization Act for Fiscal Year 2008 to add a requirement for the Secretary of Defense to submit an annual inventory of the activities performed pursuant to contracts for services for or on behalf of DOD during the preceding fiscal year. The inventory is to include a number of specific data elements for each identified activity, including the function and missions performed by the contractor; the contracting organization, the component of DOD administering the contract, and the organization whose requirements are being met through contractor performance of the function; the funding source for the contract by appropriation and operating agency; the fiscal year the activity first appeared on an inventory; the number of full-time contractor employees (or its equivalent) paid for performance of the activity; a determination of whether the contract pursuant to which the activity is performed is a personal services contract; and a summary of the information required to be collected for the activity under 10 U.S.C. § 2330a(a). As indicated in AT&L’s May 2010 guidance, DOD components are to compile an inventory of activities performed on their behalf by contractors and submit it to AT&L, which formally submits a consolidated DOD inventory to Congress. 
Once compiled, the inventory is to be made public and, within 90 days of the date on which the inventory is submitted to Congress, the Secretary of the military department or head of the defense agency responsible for activities in the inventory is to review the contracts and activities for which they are responsible and ensure that any personal services contract included in the inventory was properly entered into and is being performed appropriately; that the activities in the inventory do not include inherently governmental functions; and to the maximum extent practicable, activities on the list do not include any functions closely associated with inherently governmental functions. In addition, the Secretary of the military department or head of the defense agency is to identify activities that should be considered for conversion to performance by civilian employees pursuant to 10 U.S.C. § 2463 or to an acquisition approach that would be more advantageous to the department. Congress added Section 2463 to title 10 of the U.S. Code in the National Defense Authorization Act for Fiscal Year 2008. This section required the Under Secretary of Defense for Personnel and Readiness to develop guidelines and procedures to ensure that consideration is given to using DOD civilian employees to perform new functions and functions that are currently performed by contractors and could be performed by DOD civilian employees. In particular, these guidelines and procedures are to provide special consideration for, among other instances, in-sourcing functions closely associated with inherently governmental functions that are currently being performed by contractors, or new requirements that may be closely associated with inherently governmental functions. Congress required the Secretary of Defense to make use of the inventories created under 10 U.S.C. 
§ 2330a(c) for the purpose of identifying functions that should be considered for performance by DOD civilian employees under this provision. DOD issued initial in-sourcing guidance in April 2008 and additional guidance in May 2009 to assist DOD components in implementing this legislative requirement. The National Defense Authorization Act for Fiscal Year 2010 provided for a new section 115b in title 10 of the U.S. Code that requires DOD to annually submit to the defense committees a strategic workforce plan to shape and improve the civilian workforce. Among other requirements, the plan is to include an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. The Office of the Under Secretary of Defense for Personnel and Readiness is responsible for developing and implementing the strategic plan in consultation with AT&L. Finally, Section 803 of the National Defense Authorization Act for Fiscal Year 2010 requires the Secretary of Defense to include information in DOD’s annual budget justification materials related to the procurement of contract services. Specifically, the legislation requires, for each budget account, to clearly and separately identify (1) the amount requested for the procurement of contract services for each DOD component, installation, or activity, and (2) the number of contractor FTEs projected and justified for each DOD component, installation, or activity based on the inventory and associated reviews. Collectively, these statutory requirements indicate that the inventory and the associated review process are to serve as a basis for identifying candidates for in-sourcing contracted services, supporting development of DOD’s annual strategic workforce plan, and specifying the number of contractor FTEs included in DOD’s annual budget justification materials. Figure 1 below illustrates the relationship between the related statutory requirements. 
DOD initially planned to use a phased approach to implement the inventory requirement, relying first on submission in October 2008 of a prototype inventory covering activities performed for the Army pursuant to contracted services for fiscal year 2007, and gradually producing inventories for the remaining military departments and defense agencies over subsequent years. The Army served as the prototype because it had started collecting information in 2005 to obtain better visibility of its contractor workforce. To do so, the Army developed its Contractor Manpower Reporting Application (CMRA), a system that is designed to collect information on labor-hour expenditures by function, funding source, and mission supported on contracted efforts. In response to direction from Congress, DOD revised its implementation schedule and in July and September 2009 submitted inventories covering the fiscal year 2008 service contracting activities of the military departments and 13 other defense agencies. For fiscal year 2009, inventories were submitted by the military departments, a larger group of 17 other defense agencies, the U.S. Special Operations Command, and the U.S. Transportation Command. DOD officials noted that the components submitting inventories are those with acquisition authority. AT&L implemented a more uniform approach for compiling the fiscal year 2009 inventories compared with fiscal year 2008, and the changes in the approach affected both the reported spending on service contracts and the number of contractor FTEs. For example, changes in the categories of services included in the inventories influenced the Air Force’s reported increase and the Navy’s reported decrease in spending on services in fiscal year 2009.
Similarly, the use of a new formula based on AT&L guidance for estimating contractor FTEs reduced the number of contractor FTEs below what the Navy and Air Force would have reported under the formulas each had used for its fiscal year 2008 inventory. AT&L’s guidance also authorized the Army to continue using its existing process, which incorporates data reported by contractors through the Army’s CMRA system, as the basis for its inventory. Army officials attributed the reported increases in spending and number of contractor FTEs in the Army’s inventory to better reporting in the CMRA system in fiscal year 2009. DOD and military department officials identified continuing limitations associated with the fiscal year 2009 inventories, including the inability of FPDS-NG, which was to be used by DOD components other than the Army, to provide information for all of the required data elements. Similarly, Army officials we spoke with expressed some concerns with the process used to ensure the accuracy of data reported in CMRA. AT&L characterized its May 2010 guidance as an interim measure for circumstances in which actual contractor manpower data have not been collected. The department has stated that it plans to move towards collecting such data from contractors as the basis for future inventories, but it has not issued guidance or a plan of action for doing so. In May 2010, AT&L issued guidance that provided more uniform direction to be used by DOD components other than the Army to compile their fiscal year 2009 inventories, while allowing the Army to continue using its existing process that reports manpower data collected directly from its contractors. AT&L noted that the move towards a more uniform approach in fiscal year 2009 was meant to reduce inconsistencies that resulted from DOD components using different approaches in fiscal year 2008 and was an interim measure for circumstances in which actual contractor manpower data had not been collected.
AT&L’s guidance for fiscal year 2009 standardized the process for compiling the inventories for most DOD components by defining the categories of services to be included in the inventories, the data sources to be used to populate the required data elements, and the method to estimate the number of contractor FTEs. For example, the guidance indicated that all categories of services identified in FPDS-NG were to be included in the fiscal year 2009 inventories, with the exception of those associated with the early stages of research and development, lease and rental of facilities and equipment, and construction. By contrast, for the fiscal year 2008 inventories, the Air Force did not include any research and development services, while the Navy had included all stages of research and development services. Further, the guidance required that FPDS-NG be used as the source for the majority of the inventory’s data elements, such as the service purchased, the total dollar amount of the purchase, the organization whose requirements are being met by contracted performance, and the function and mission being performed by the contract. DPAP officials noted that because DOD currently lacks a single data source that contains information for all the data elements required in the inventories, DOD determined that FPDS-NG provided the most readily available data departmentwide, though they acknowledged that there were limitations in using FPDS-NG to meet the inventory requirements. In instances where FPDS-NG did not contain information for the required inventory data element, such as the funding source for the contract, AT&L indicated that DOD components were to use other existing data sources. Additionally, the AT&L guidance provided a formula and identified specific information needed for DOD components other than the Army to estimate the number of contractor FTEs paid for the performance of an activity.
In contrast, DOD components used several different approaches in fiscal year 2008. For example, the Air Force relied on three approaches for fiscal year 2008, though it primarily relied on its own formula to estimate the number of contractor FTEs. Similarly, the Navy relied solely on a formula it had developed using a sample of Navy contracts to estimate FTEs. The formula provided under the AT&L guidance for fiscal year 2009 incorporated the amount obligated on the contract as reported in FPDS-NG, the estimated portion of those obligations that was associated with a contractor’s direct labor expense, and the estimated cost of that labor. For these two latter factors, DPAP computed averages derived from the Army’s CMRA data for each type of service and provided them to DOD components to use to estimate contractor FTEs. As noted in the AT&L guidance, these averages were used because other DOD components currently lack a data system to collect data from contractors on the number of direct labor hours associated with the services they perform. Figure 2 provides an illustration of how these averages were to be used to estimate the number of contractor FTEs on a contract for systems engineering services under which approximately half of the $400,000 obligated under the contract was for direct labor provided by contractor employees. The AT&L guidance authorized the Army to continue to use its existing process to compile its inventory, which differs from the approach used by other DOD components, because it relies on contractor-reported data from the CMRA data system. CMRA captures data directly reported by contractors on services performed at the contract line item level, including information on the direct labor dollars, direct labor hours, total invoiced dollars, the functions and mission performed, and the Army unit on whose behalf the services are being performed.
In instances where contractors are providing different services under the same order, or are providing services at multiple locations, they can enter additional records in CMRA to capture information associated with each type of service or location. Under its approach, the Army included all categories of research and development services in its inventory, rather than the portion included by the Air Force and Navy, and also identified the services provided under contracts for goods. To report the number of contractor FTEs, the Army indicated that it divided the number of direct labor hours reported by a contractor in CMRA for each service provided by 2,088, the number of labor hours in a federal employee work year. For other data elements in its inventory, such as the funding source and contracting organization, the Army also relied on the Army Contract Business Intelligence System and updates from resource managers, contracting officer’s representatives (CORs), and other officials. DOD reported that the amount obligated on service contracts rose to about $140 billion in fiscal year 2009, while the number of contractor FTEs under those contracts increased to nearly 767,000 FTEs, as shown in table 1. However, the changes in DOD’s approach, in particular how DOD reflected research and development services, and the use of a new formula for estimating contractor FTEs, affected the reported changes in inventory data from fiscal years 2008 to 2009. Further, while the Army approach did not change from fiscal year 2008, Army officials stated that the increase in the amount of fiscal year 2009 spending reported in the Army inventory reflects better reporting in the CMRA system. Consequently, we and DOD officials agree that caution should be exercised when making direct comparisons between the fiscal year 2008 and 2009 inventory data.
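The two estimation approaches described above can be sketched in a few lines of arithmetic. This is a minimal illustration, not DOD's actual implementation: the exact order of operations in the AT&L formula is an assumption pieced together from the report's description, and the $80-per-hour labor rate below is a made-up illustrative figure, not one of the CMRA-derived averages DPAP actually supplied.

```python
# Hedged sketch of the two contractor-FTE estimation approaches.
# The AT&L formula's exact arithmetic is assumed from the report's
# description; the hourly rate used below is hypothetical.

HOURS_PER_WORK_YEAR = 2088  # labor hours in a federal employee work year


def atl_estimated_ftes(obligations, direct_labor_ratio, avg_hourly_rate):
    """AT&L approach: estimate FTEs from FPDS-NG obligations.

    direct_labor_ratio: estimated share of obligations that is direct labor.
    avg_hourly_rate: estimated cost of that labor (assumed to be hourly).
    """
    direct_labor_dollars = obligations * direct_labor_ratio
    direct_labor_hours = direct_labor_dollars / avg_hourly_rate
    return direct_labor_hours / HOURS_PER_WORK_YEAR


def army_reported_ftes(reported_direct_labor_hours):
    """Army approach: contractor-reported CMRA hours divided by 2,088."""
    return reported_direct_labor_hours / HOURS_PER_WORK_YEAR


# Figure 2's systems engineering example: $400,000 obligated, roughly half
# of it direct labor; the $80/hour rate is illustrative only.
print(round(atl_estimated_ftes(400_000, 0.5, 80.0), 1))  # about 1.2 FTEs
print(army_reported_ftes(4_176))  # 4,176 reported hours -> 2.0 FTEs
```

The sketch makes the key structural difference concrete: the AT&L formula works backward from obligated dollars through two estimated averages, while the Army's CMRA process starts from hours contractors actually reported.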
The Air Force’s fiscal year 2009 inventory shows an increase of about $12.1 billion, whereas the Navy’s inventory shows a decrease of about $1.2 billion. Several factors accounted for these changes. For example: Based on the AT&L guidance, the Air Force included $6.7 billion in research and development, $2.9 billion in maintenance of real property contracts, and $0.1 billion in miscellaneous construction, education and training, and transportation contracts in fiscal year 2009 that it had previously excluded in fiscal year 2008. The remaining $2.4 billion increase reflects additional obligations in fiscal year 2009 on services that were included in both fiscal years. Based on the AT&L guidance, the Navy excluded $5.3 billion of services associated with early stages of research and development activities in fiscal year 2009 that it had previously included. In addition, the Navy included a net increase of about $0.3 billion in contract actions under $100,000 and deobligations in fiscal year 2009 that had previously been excluded. This overall $5 billion decrease, however, was partially offset by a $3.8 billion increase in obligations in fiscal year 2009 on services that were included in both fiscal years. The Navy and Air Force reported an increase in the number of contractor FTEs in their inventories from fiscal year 2008 to 2009, although our analysis found that the Navy’s reported increase was in error. According to a Navy official, the Navy used a different set of labor rates and ratios from those specified under the AT&L approach to simplify the FTE calculations. Had the Navy used AT&L’s prescribed approach, the Navy would have reported 207,604 contractor FTEs for fiscal year 2009, a decrease of 14 percent from fiscal year 2008.
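As a quick arithmetic check, the category-level changes cited above reconcile to the reported totals. The dollar figures are taken directly from the report; only the grouping into a single sum per department is ours:

```python
# Reconciling the component changes (in $ billions) to the reported totals.
# Air Force: newly included R&D, real property maintenance, and
# miscellaneous categories, plus growth in categories common to both years.
af_change = 6.7 + 2.9 + 0.1 + 2.4

# Navy: newly excluded early-stage R&D, net addition of small actions and
# deobligations, plus growth in categories common to both years.
navy_change = -5.3 + 0.3 + 3.8

print(round(af_change, 1))    # 12.1 -> matches the ~$12.1 billion increase
print(round(navy_change, 1))  # -1.2 -> matches the ~$1.2 billion decrease
```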
More generally, our analysis indicates that the use of the AT&L formula for fiscal year 2009 produced a lower number of contractor FTEs for the Navy and Air Force than their respective fiscal year 2008 formulas would have produced had the approach not changed, as shown in figure 3. The effect of the change in estimating contractor FTEs was even more pronounced on specific categories of services. For example, applying AT&L’s formula resulted in the Air Force reporting 7,902 contractor FTEs associated with systems engineering services for fiscal year 2009. If the Air Force’s fiscal year 2008 formula were applied, the inventory would have shown 12,661 FTEs. At the same time, the Air Force spent more on systems engineering services in fiscal year 2009 than it did in fiscal year 2008. For the Navy, even though it obligated more for program management support services in fiscal year 2009, using the AT&L formula would have resulted in 3,374 contractor FTEs whereas using the Navy’s fiscal year 2008 formula would have produced 8,025 FTEs. Although the Army’s approach for compiling its inventory did not change from fiscal year 2008 to 2009, officials attributed the $8.9 billion increase in the amount of spending reported and the 23 percent increase in the number of contractor FTEs to better reporting through the CMRA system. In particular, Army officials responsible for CMRA said that the fiscal year 2009 inventory contains more data for weapon systems support services than was included in the fiscal year 2008 inventory. These officials also noted that reporting improvements resulted from steps taken to identify missing contractor manpower data and their efforts to follow up with officials and contractors to ensure that required data were reported. 
For example, subsequent to the deadline for reporting data in CMRA, officials responsible for CMRA stated that they provide a report identifying contracts that were missing data to Army contracting offices and CORs, who are to ensure that contractors report required data. DOD noted that the approach taken to compile the fiscal year 2009 inventories, while providing more consistency in certain areas, reflected continued limitations. In the absence of a single departmentwide data system that could provide data that directly responded to the legislative reporting requirements, DPAP officials stated that they relied on the best information currently available, including data from FPDS-NG. Similarly, Army officials acknowledged that they are taking steps to continue to improve the Army’s process for collecting data in CMRA. Acknowledging limitations associated with the fiscal year 2009 inventories, DOD plans to release future guidance to move towards the department’s stated goal of collecting actual contractor manpower data. AT&L’s use of FPDS-NG as the primary basis for the inventories presented several limitations, including that it does not currently contain information on the number of contractor FTEs. Further, the legislation required information on all activities performed pursuant to contracts for services during the fiscal year, but DOD noted that because contract actions are recorded in FPDS-NG as being used either to purchase goods or services, instances in which services were provided under a contract action coded as one for goods were not captured in the Air Force, Navy, and defense agencies’ inventories. In contrast, because the Army’s CMRA enables it to identify services acquired under contracts for goods, we found that the Army’s inventory included about $5.5 billion in services that were purchased under contracts consistently coded as goods in FPDS-NG in both fiscal year 2008 and 2009.
In addition, components using the AT&L approach were instructed to use the funding office as recorded in FPDS-NG as the basis for responding to the legislation’s requirement to identify the requiring organization. However, the organization identified as the funding office in FPDS-NG may not necessarily be the organization whose requirements are being met through the contract. Similarly, AT&L’s guidance instructed DOD components to record in the inventory the category of service with the predominant amount of dollars, although more than one category of service may be purchased under a contract action. As a result, this approach may not provide visibility into all the services purchased under a contract action. Further, DOD acknowledged that it did not account for service contracts that were awarded on behalf of DOD by non-DOD agencies, as was the case with its fiscal year 2008 inventories. According to FPDS-NG data, non-DOD agencies awarded contracts totaling just under $1 billion on behalf of the Army, Navy, and Air Force in fiscal year 2009. Several officials from DOD and the military departments also expressed concerns about the formula provided under the AT&L guidance for calculating contractor FTEs. For example, Air Force and Navy officials expressed concerns that the average direct labor rates and average ratios of direct labor dollars to total invoiced dollars specified in the AT&L approach may not reflect the services for which they contract, because the AT&L averages were derived from data reported in the Army’s CMRA system. They agreed, however, to implement the AT&L approach given the absence of a departmentwide system containing information on the number of contractor FTEs paid to perform activities under contracts for services.
Officials from the Army Force Management, Manpower and Resources (FMMR) office, the Office of Cost Assessment and Program Evaluation, and the Office of the Under Secretary of Defense for Personnel and Readiness raised concerns about the use of average labor rates and ratios to estimate contractor FTEs given the tendency of those averages to obscure variation in the underlying data. In this regard, our analysis showed that when applying the AT&L formula to the Army’s reported fiscal year 2009 inventory data, the AT&L formula approximated the aggregate number of contractor FTEs reported by the Army, but resulted in significant variations for some specific categories of services and particular contracts. At the aggregate level, the AT&L formula estimated the number of contractor FTEs at about 3 percent below the Army’s reported 262,282 FTEs. However, the Army reported 113,713 contractor FTEs performing professional, administrative, and management support services whereas the AT&L formula estimated significantly fewer, 65,408 FTEs. Additionally, the Army reported 264 FTEs on an individual $23.6 million task order for engineering technical services, whereas the AT&L formula estimated 115 FTEs. These types of differences occurred because the average labor rates and ratios calculated by DPAP for use in the AT&L formula were heavily influenced by a small number of large dollar value contracts included in the Army’s inventory. At the same time, officials from the Army’s financial management and manpower planning offices and the Army commands we spoke with expressed some concerns with the process used to ensure the accuracy of data reported in CMRA. According to CMRA guidance, CORs are to review data entered by contractors in CMRA and edit incorrect data. 
Specifically, Army officials responsible for CMRA stated that CORs are to help ensure that contractors report data in CMRA, and are to validate entries such as the requiring organization, the function performed by the contractor, the funding source, and the total invoiced amount. They also noted, however, that CORs are not responsible for validating the number of direct labor hours reported by contractors, which is used to report contractor FTEs in the Army’s inventory. This is in part because the CORs do not have direct knowledge of or access to contractor information regarding the number of direct labor hours for fixed-price or performance-based contracts. Officials responsible for CMRA oversight and officials from the Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology and IMCOM stated that they have efforts underway to better clarify COR responsibilities with regard to CMRA data, including providing additional training to CORs and implementing guidance that clearly defines responsibility for ensuring the completeness and accuracy of the Army’s inventory data. AT&L characterized the purpose of its May 2010 guidance as providing an interim measure for circumstances in which actual contractor manpower data have not been collected. AT&L’s guidance stated that the department recognizes the need for and benefit of collecting actual contractor manpower data, in part to help make well-informed in-sourcing decisions, and is committed to doing so. In addition, the guidance stated that AT&L planned to work with the Office of the Under Secretary of Defense for Personnel and Readiness and the Office of Cost Assessment and Program Evaluation to issue preliminary guidance and a proposed plan of action by August 2010. The guidance noted that this process would require close collaboration between the component acquisition and manpower organizations.
At the time of our review, such guidance indicating how the department will move towards achieving its stated objectives, including anticipated time frames and the necessary resources to do so, had not been issued. Senior officials from DPAP and the Office of the Under Secretary for Personnel and Readiness indicated, however, that the approach in the short term will likely remain the same until the department implements a longer-term solution. The military departments differ in their approaches to the required reviews of the activities performed by contractors and in the extent to which they have used the inventories to inform workforce decisions, including in-sourcing. The Army has used a centralized, headquarters-level approach to identify contractors performing functions that are inherently governmental or closely associated with inherently governmental functions, unauthorized personal services, and other functions on a command-by-command basis. Since January 2009, the Army has completed at least one review for 24 of 26 commands and headquarters organizations and identified approximately 2,357 contractor FTEs performing inherently governmental functions, 45,934 contractor FTEs performing activities closely associated with inherently governmental functions, and 1,877 contractor FTEs providing unauthorized personal services. Army officials have indicated that these reviews contributed to decisions to insource selected functions for performance by Army personnel. In contrast, the Air Force and Navy have implemented decentralized approaches that rely on their major commands to review the activities performed by contractors listed in their inventories and report the results back to their respective headquarters. The Air Force implemented its initial review in January 2010, but experienced challenges in doing so, including receiving inadequate information for many of its contracts. These challenges will likely cause its approach to evolve in the future.
Air Force officials reported that the inventory has provided limited utility for informing decisions such as in-sourcing to date. The Navy issued guidance to its commands in September 2010, but the results of the Navy’s initial review had not yet been reported as of November 2010. The Army has implemented a centralized approach that relies on two processes—a review prior to contract award and a headquarters-level review of all functions performed by contractors—to meet the requirement to annually review the service contracts and activities in its inventory. In combination, these processes are intended to inform decisions on the use of contractors for services, including in-sourcing. Army officials report that they have completed reviews of 24 of 26 command and headquarters organizations as of November 2010. Army officials noted that the length of time to conduct reviews for each command and the need to reconcile the various data sources used to conduct the reviews have posed challenges that they are working to address. Both of these processes are relatively recent initiatives. On October 2, 2008, the Assistant Secretary of the Army for Acquisition, Logistics and Technology issued guidance stating that starting in fiscal year 2009, officials from requiring activities must receive written approval from the cognizant General Officer or member of the Senior Executive Service to initiate a contract or exercise an option for services. To obtain approval, the requiring activity must complete a service contract approval form and a series of worksheets that are designed to help identify whether the function to be performed by the contractor to meet the contract requirement is inherently governmental, closely associated with an inherently governmental function, or a personal service, and whether the function should be insourced.
The General Officer or Senior Executive must certify that special consideration was given to having federal employees perform functions that are closely associated with an inherently governmental function, sufficiently trained and experienced officials are available to oversee the contract, and that the contract is or will be reported in CMRA. Additionally, in January 2009, the Army established the Panel for Documentation of Contractors (PDC), which was tasked to review functions being performed by contractors on an annual basis at each command. The PDC consists of officials from the Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs, Force Management, Manpower and Resources (FMMR), along with headquarters officials from the acquisition and manpower planning communities. Army guidance directed commands to provide data to the PDC on the functions being performed by contractors and an assessment of whether those functions were inherently governmental, closely associated with inherently governmental functions, or unauthorized personal services. To carry out this assessment, commands identify functions that are being performed by contractors, the organizational unit for which the function is being performed, the funding information associated with the contract under which the function is being performed, and the number of contractor FTEs performing the function. In addition, for each function, command officials are to include a detailed description and categorize it according to whether the function is appropriate for contracting, constitutes an unauthorized personal service, is closely associated with an inherently governmental function, or is an inherently governmental function. The service contract approval form and associated worksheets are to inform the commands’ categorization of contractor functions. 
In turn, commands provide this information to the PDC officials, who make a separate determination as to the appropriate categorization of the function being performed by the contractor. FMMR and command officials reported that they engage in further discussion in instances where there is a difference of opinion on the appropriate categorization of a function in order to reach agreement. Once the PDC has completed its review, FMMR issues a memorandum to the commands summarizing the results of the review, including the number of contractor FTEs categorized as performing inherently governmental functions, closely associated with inherently governmental functions, or providing unauthorized personal services. The command is to use the results of the PDC review to inform decisions regarding the need to insource certain functions, whether to continue using contractors to perform the functions, or in some cases, to determine the command no longer requires those functions. FMMR officials noted that commands are responsible for integrating the results of the PDC process into their manpower planning efforts. Since the PDC reviews of contractor functions started in 2009, FMMR officials indicated that they have completed reviews for 24 of 26 Army commands and headquarters organizations as of November 2010. Through its reviews, the Army reported that it identified approximately 2,357 contractor FTEs performing inherently governmental functions, 45,934 contractor FTEs performing activities closely associated with inherently governmental functions, and 1,877 contractor FTEs providing unauthorized personal services. For example, the PDC review completed in August 2010 of 12,805 contractor FTEs performing functions for TRADOC identified 9 contractor FTEs that were performing inherently governmental functions and 53 contractor FTEs performing unauthorized personal services. 
Similarly, in March 2010, the Army completed its review of the over 36,000 contractor FTEs performing functions for IMCOM, and identified 6 contractor FTEs that were performing inherently governmental functions and 657 contractor FTEs providing unauthorized personal services. Both TRADOC and IMCOM officials reported that in-sourcing decisions either have been informed by the PDC review, or will be based on PDC reviews going forward. For example, TRADOC is in the process of in-sourcing 5 contractor FTEs that had been performing military analyst functions that were identified as candidates in the PDC review. IMCOM officials noted, however, that other factors, such as budgetary changes and other statutory requirements, also contributed to in-sourcing decisions. For example, IMCOM officials said that most in-sourcing for fiscal year 2010 will result from a loss of statutory authority to contract for certain security guard functions, and in fiscal year 2011 most in-sourcing decisions will be the result of requirements to reduce service contract costs. Army FMMR and command officials have identified a number of challenges in conducting the initial reviews, including the length of time to conduct reviews for each command, and the need to reconcile data used to conduct the reviews to data in the inventory. As of October 2010, the Army had been working through the PDC reviews for about 18 months, and in that time, the PDC has reviewed functions associated with over 100,000 contractor FTEs across most commands and headquarters organizations. FMMR officials noted that the process has taken time to implement because they engaged in discussions with command officials in a number of instances to revise the initial information provided to the PDC to ensure that the criteria for categorizing functions are applied appropriately and consistently.
TRADOC officials said that they went through two rounds of PDC reviews of their contracted functions to improve the accuracy of the service contract data the command submitted to the PDC for its reviews. Army officials stated that for future PDC reviews, it may be possible to focus efforts on functions that are new or have changed from prior reviews as a way to more efficiently implement the review process. The length of time it has taken to implement the PDC reviews has also resulted in challenges related to incorporating the final determinations that come out of the reviews into Army manpower documents in a way that aligns with the annual budget and planning processes, according to Army force management officials. FMMR and command officials also reported difficulties reconciling service contractor information used for the review process with the inventory data provided through the Army’s CMRA system, including the numbers of contractor FTEs reported as performing various activities. For example, to help determine whether a contractor is performing an inherently governmental function, the commands collect more detailed information on the functions being performed by the contractor than is collected by CMRA and reported in the Army’s inventory of services. FMMR and command officials indicated that the inventory data is generally used to check whether the data on the total number of contractor FTEs reported to the PDC appears reasonable. For example, at IMCOM, the number of contractor FTEs it reported to the PDC for review was 10,639 higher than the number of contractor FTEs reported in CMRA. IMCOM officials stated that this difference occurred because contractors were not fully reporting the required data in CMRA and CORs were not verifying whether contractors had reported the data. FMMR and IMCOM officials are working together to reconcile this discrepancy. 
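Conceptually, the reconciliation FMMR and IMCOM describe amounts to comparing per-command contractor FTE counts from the two sources and flagging material gaps. The sketch below is purely illustrative, not an Army system: the tolerance threshold and the command figures are invented for the example (the IMCOM numbers are back-calculated from the 10,639-FTE gap noted above).

```python
# Illustrative reconciliation of contractor FTE counts reported to the PDC
# against counts in the inventory (CMRA). Tolerance and figures are
# hypothetical; only the 10,639-FTE IMCOM gap comes from the report.

def reconcile_fte_counts(pdc_ftes, cmra_ftes, tolerance=0.05):
    """Return {command: (pdc, cmra, difference)} for commands whose PDC
    and CMRA FTE counts differ by more than `tolerance` (a fraction of
    the CMRA count)."""
    discrepancies = {}
    for command in pdc_ftes.keys() | cmra_ftes.keys():
        pdc = pdc_ftes.get(command, 0)
        cmra = cmra_ftes.get(command, 0)
        diff = pdc - cmra
        if cmra == 0 or abs(diff) / cmra > tolerance:
            discrepancies[command] = (pdc, cmra, diff)
    return discrepancies

# Example: IMCOM's PDC submission exceeded its CMRA count by 10,639 FTEs,
# while TRADOC's two counts (hypothetically) matched.
pdc = {"IMCOM": 36000, "TRADOC": 12805}
cmra = {"IMCOM": 25361, "TRADOC": 12805}
print(reconcile_fte_counts(pdc, cmra))  # → {'IMCOM': (36000, 25361, 10639)}
```

A check like this only establishes that the totals diverge; as the report notes, explaining the gap (here, contractors underreporting in CMRA and CORs not verifying the data) still requires command-level follow-up.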
FMMR officials also stated that they are working with Army commands to better align contractor function data provided to the PDC with data in CMRA and noted that the use of data from both CMRA and the PDC has been helpful in gaining greater insight into the contracted component of the Army’s workforce. The Air Force and Navy each issued inventory review guidance that delegates responsibility for determining the approach, as well as for conducting the actual review, to their major commands. The Air Force reported that its commands completed their initial effort at conducting the reviews in March 2010, but the department is revising its review process to address several issues encountered during this process, including a substantial number of contracts in the inventory for which inadequate information was provided. The Navy issued guidance to its commands in September 2010 requiring them to conduct a review, but the results of the commands’ reviews were not available as of November 2010. In January 2010, the Secretary of the Air Force issued guidance instructing its major commands to conduct an initial review of its fiscal years 2008 and 2009 activities performed under service contracts. To do so, Air Force headquarters inventory officials provided each major command with a spreadsheet containing its portion of the department’s inventory from which command officials were to review and determine, with a yes or no answer, whether the activity performed under the contract was an inherently governmental function, closely associated with an inherently governmental function, a personal service, or whether the activity is being considered for in-sourcing. The guidance included broad definitions, based on existing DOD guidance and the Federal Acquisition Regulation (FAR), for commands to use to make these assessments. 
According to an Air Force inventory official, a headquarters review of the initial information submitted by the commands in March 2010 in response to the January 2010 guidance found that approximately 40 percent of the contracts included for review did not contain adequate responses to the required review elements. Air Force Materiel Command (AFMC) inventory officials explained that they experienced a number of difficulties during the initial review process, and command data show that AFMC did not provide the required determinations for approximately 22 percent of its contract actions. An AFMC official noted that in many cases it was difficult to determine the requiring activity for a given contract action, which in turn made it difficult to determine who was the most appropriate manager to provide the required information. Additionally, even when AFMC officials were able to identify the appropriate subordinate units and responsible managers, the AFMC official expressed concern that the managers were not consistently applying the criteria indicated in the January 2010 guidance to identify contractors performing inherently governmental functions or services closely associated with inherently governmental functions. For example, AFMC identified 152 contract actions that potentially involved performance of an inherently governmental function, but the official responsible for the command’s review process was unsure of the extent to which these determinations were correct. As a result of the challenges experienced throughout the department during its initial inventory review, the Secretary of the Air Force issued additional guidance in October 2010 requiring major commands to complete the review of activities under service contracts reflected in its fiscal year 2009 inventory that may have been missed during the initial review as well as some activities from fiscal year 2010.
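The headquarters finding described above — that a share of contract actions lacked adequate responses to the required review elements — is essentially a completeness rate computed over the command-submitted spreadsheets. The sketch below is hypothetical: the element names and yes/no encoding are invented for illustration and do not come from Air Force guidance.

```python
# Hypothetical completeness check over command review submissions: each
# contract action needs a yes/no answer for every required review element,
# and any action with a blank or invalid answer counts as inadequate.
# Field names below are invented for illustration.

REQUIRED_ELEMENTS = (
    "inherently_governmental",
    "closely_associated",
    "personal_service",
    "insourcing_candidate",
)

def inadequate_share(actions):
    """Fraction of contract actions missing a yes/no answer for any
    required review element."""
    if not actions:
        return 0.0
    missing = sum(
        1 for action in actions
        if any(action.get(e) not in ("yes", "no") for e in REQUIRED_ELEMENTS)
    )
    return missing / len(actions)

actions = [
    {"inherently_governmental": "no", "closely_associated": "yes",
     "personal_service": "no", "insourcing_candidate": "no"},
    {"inherently_governmental": "no"},  # three elements left blank
]
print(f"{inadequate_share(actions):.0%}")  # → 50%
```

Run over a full submission, a rate like the 40 percent (Air Force–wide) or 22 percent (AFMC) figures cited above would flag where follow-up guidance, such as the October 2010 direction, is needed.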
Additionally, Air Force acquisition officials stated that the department is considering how to further address the challenges encountered. Notwithstanding these challenges, Air Force headquarters and AFMC officials stated that they rely on other processes to mitigate the risk of contractors performing inherently governmental functions. For example, they stated that requirements are reviewed prior to contract award to prevent contracting for inherently governmental functions in accordance with existing Air Force guidance. Additionally, an AFMC official explained that they rely on the CORs to monitor contractor performance and ensure that the functions being performed do not evolve into inherently governmental functions. Air Force in-sourcing and AFMC officials noted that, to date, the inventory has provided limited utility during the in-sourcing decision-making process. AFMC officials indicated that the inventory can provide command officials with a list of contracts from which they may be able to identify potential cost savings, but Air Force officials stated that additional analysis including detailed cost comparisons and command stakeholder input is required to make cost-based in-sourcing decisions. According to the Air Force’s fiscal year 2010 in-sourcing plan, the majority of its decisions to in-source were to be based on analyses of whether the performance of services by government employees would be more cost-effective, which is one of several criteria indicated in DOD’s guidance on in-sourcing. The Navy issued guidance in September 2010 requiring Navy organizations to review the fiscal year 2009 inventory and report the results within 45 days, but at the time of our review in November 2010, the department had not yet completed this initial inventory review.
The guidance requires the head of the contracting activity to validate that its respective contracts for services were reviewed to determine if the contracted functions include inherently governmental or closely associated functions, or unauthorized personal services, based, for example, on criteria in the FAR. Following completion, each contracting activity must provide a letter to the Assistant Secretary of the Navy for Research, Development, and Acquisition certifying that it completed the review, identifying the number of contracts that were found to be unacceptable based on the review criteria, and including a plan of corrective action for those contract activities deemed unacceptable. Navy headquarters acquisition and command officials stated that, although the department’s initial inventory review remains in progress, other processes are used to review contractor functions and inform workforce management–related decisions such as in-sourcing. For example, Navy officials responsible for organizing the inventory compilation and review processes explained that commands review contracts during the preaward and option exercise phases in an effort to prevent the award of contracts that include inherently governmental functions and unauthorized personal services. Additionally, Naval Air Systems Command officials reported that CORs are to monitor contracted employees during contract performance to ensure that the scope or nature of the function does not evolve to include inherently governmental functions. Finally, command officials explained that they rely on existing command staffing processes, which include input from functional managers to determine the most appropriate blend of military, civilian, and contractor personnel to meet command workload and mission requirements, as well as identify opportunities for in-sourcing. DOD has acknowledged the need to rebalance its workforce, in part by reducing its reliance on contractors.
To do so, however, the department needs good information on the roles and functions played by contractors, which the department currently does not have. The required inventories that DOD is developing are intended to provide an additional source for such information to assist DOD in determining whether contracted services should be performed by government employees, to mitigate the risk of transferring government responsibilities to contractors. At this point, DOD has been working to implement the inventory requirements since the legislation was passed in 2008. With regard to reviewing the functions and activities reflected in the inventories, the department’s efforts are less mature. Given this early state of implementation, the inventories and associated review processes are being used to varying degrees by the military departments to help inform workforce decisions such as in-sourcing. Overall, the Army has used the inventories to a greater degree than the other military departments. The department’s primary focus has been on identifying ways by which it can compile the information required by the legislation, and in particular, estimating the number of contractor personnel providing services. AT&L’s latest approach for the fiscal year 2009 inventories was intended, in part, to provide the Navy and Air Force with a more uniform approach than previously used for estimating the number of contractors. To do so, AT&L’s guidance provided a formula for them to use and specified that FPDS-NG be used as the basis for the majority of the data elements in the inventory. DOD officials expressed concerns, however, about AT&L’s estimating approach, which we found resulted in significant variations for specific categories of services. DOD officials also expressed concern about the type of data that can be obtained through the FPDS-NG to meet the inventory requirements.
AT&L’s guidance also authorized the Army to continue using the approach it has put in place to obtain contractor-reported data on direct labor hours. For its part, the Army acknowledged that it needs to continue to improve its process for collecting contractor-reported data, including clarifying responsibilities for ensuring the completeness and accuracy of the data. AT&L stated that its latest approach was an interim measure for components that do not currently have the capability to collect actual contractor manpower data in a manner similar to that of the Army. To move toward the department’s stated goal of collecting such data, AT&L’s guidance noted that it intended to work jointly with the Office of the Under Secretary of Defense for Personnel and Readiness and other organizations to issue new guidance and plans by August 2010, but it has not yet done so. Developing a plan of action, including time frames and the resources needed to implement it, would provide an important tangible step in meeting the inventory requirements. The real benefit of the inventory, however, will ultimately be measured by its ability to inform decision-making. At this point, the absence of a way forward hinders the achievement of this objective.
To help implement the requirements for conducting the inventory of service contract activities, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics, and the Under Secretary of Defense for Personnel and Readiness to work jointly to take the following two actions: develop a plan of action, including anticipated time frames and necessary resources, to facilitate the department’s stated intent of collecting manpower data and to address other limitations in its current approach to meeting inventory requirements, including those specific to FPDS-NG; and assess ways to improve the department’s approach to estimating contractor FTEs until the department is able to collect manpower data from contractors. DOD provided oral comments on a draft of this report. Mr. Shay Assad, Director, Defense Procurement and Acquisition Policy, stated that DOD concurred with the recommendations. DOD also provided technical comments, which were incorporated as appropriate. We are sending a copy of this report to the Secretary of Defense and interested congressional committees. In addition, this document will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact us at (202) 512-4841 or huttonj@gao.gov or (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this correspondence are listed in appendix II. Section 803 of the National Defense Authorization Act for Fiscal Year 2010 directs GAO to report annually on the inventory of activities performed pursuant to contracts for services that are to be submitted by the Secretary of Defense, in 2010, 2011, and 2012 respectively. 
To respond to this mandate, we assessed (1) the approaches used to compile the fiscal year 2009 inventories and how the approaches have changed, and (2) how the inventories have been reviewed and used to inform workforce decisions. As the military departments accounted for 83 percent of the reported obligations on service contracts and 92 percent of the reported number of contractor full-time equivalents (FTE) in the fiscal year 2009 inventories, we focused our efforts on the Army, Navy, and Air Force. To gain additional insights into the inventory compilation and review processes at the command level, we selected a nongeneralizable sample of four commands based on (1) a combination of data from the Federal Procurement Data System–Next Generation (FPDS-NG) on dollars obligated under contracts for professional, administrative, and management support services, (2) military department data on in-sourcing activities planned for fiscal year 2010, and (3) recommendations from military department officials. Using these criteria, we selected the Army’s Training and Doctrine Command and the Installation Management Command; the Naval Air Systems Command; and the Air Force Materiel Command. We also selected a nongeneralizable sample of other defense components that were among those obligating high-dollar amounts under contracts for services in fiscal year 2009 according to FPDS-NG to gain their perspectives on the inventory compilation and review processes, including the TRICARE Management Activity, the Defense Information Systems Agency, and the U.S. Special Operations Command. While we used information from these components to further inform our understanding of the inventory compilation and review processes, we did not focus on these organizations for purposes of this report.
To assess the approaches used to compile the fiscal year 2009 inventories and how the approaches have changed, we reviewed the guidance issued in May 2010 by the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (AT&L), as well as additional guidance and documents from the military departments, and interviewed officials responsible for compiling the inventories. We compared this guidance with similar guidance and documents related to the fiscal year 2008 inventories, as well as information from our January 2010 report that assessed DOD’s approach. We also obtained the inventories submitted by AT&L for each of the military departments. For the Navy and Air Force, we replicated the criteria included in the AT&L guidance using data we extracted from FPDS-NG on service contracts active in fiscal year 2009 to determine whether their inventories complied with instructions in the guidance. For example, we verified that the Navy and Air Force inventory included the contract services specified under AT&L’s guidance and that the information on the number of and obligations on those contracts were consistent with the data reflected in FPDS-NG. The Army used the Contractor Manpower Reporting Application (CMRA) data system, which captures data reported by contractors at the contract line item level, in both of its fiscal year 2008 and 2009 inventories. We did not compare the Army’s fiscal year 2009 inventory with data in FPDS-NG to assess the completeness due to differences in the data captured between the two systems. We discussed with Army officials responsible for the inventories the factors that contributed to changes in the Army’s reported spending on services and the number of contractor FTEs. We did not independently assess the accuracy or reliability of the underlying data supporting the Army, Navy, or Air Force fiscal year 2009 inventories. 
However, we found the data to be sufficiently reliable for the purpose of assessing the effect of changes in the approach from fiscal year 2008 to 2009. In addition, we analyzed the extent to which the change in approach from fiscal year 2008 to fiscal year 2009 affected the reported amount of obligations on services included in their inventories. Similarly, we analyzed the extent to which the change in approach affected the estimated number of contractor FTEs reported in the inventories at both the aggregate level and for specific types of services by (1) applying the formulas used by the Navy and Air Force to estimate the number of contractor FTEs in fiscal year 2008 to the contracts included in their fiscal year 2009 inventories; and (2) comparing that figure with the estimated number of contractor FTEs using the formula prescribed by AT&L in the May 2010 guidance. Because AT&L’s approach for estimating contractor FTEs was based, in part, on the labor rates and the ratios of direct labor to total expenditures derived from the Army’s CMRA data system, we assessed the extent to which the AT&L approach would have produced similar estimates of the number of contractor FTEs as reported by the Army at the summary level as well as for specific categories of services. Specifically, we applied the average direct labor rates and the average ratios of direct labor to total obligations computed by AT&L’s Office of Defense Procurement and Acquisition Policy (DPAP) to the total invoiced dollar amount for each contract included in the Army’s inventory. We then compared the number of contractor FTEs estimated by the AT&L formula to the number of contractor FTEs reported in the Army’s inventory. To assess how the inventories have been reviewed in accordance with the requirements contained in 10 U.S.C.
§ 2330a(e), we reviewed guidance and interviewed officials from each of the military departments and selected commands to discuss the review processes and to identify any challenges encountered in conducting these reviews and any steps taken to address those challenges. For the Army and the Air Force, we reviewed data on the results of their respective inventory reviews as of November 2010. In conducting the inventory reviews, officials made determinations as to whether contracts included the performance of inherently governmental functions, functions closely associated with inherently governmental functions, or unauthorized personal services by contractor personnel. We did not independently assess whether such determinations were consistent with existing regulations and guidance, but focused our work on the processes used to conduct the reviews. To assess how the inventories have been used to inform workforce decisions, we focused our work on the extent to which the inventories have been used to help identify candidates for in-sourcing work currently performed by contractor personnel. To do so, we interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness and from each of the military departments who were responsible for in-sourcing efforts to determine whether and how information contained in the inventories was used as the basis for informing decisions related to in-sourcing.
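The AT&L estimating approach described above — applying category-average direct labor rates and average direct-labor-to-obligation ratios to each contract's invoiced dollars — can be sketched as simple arithmetic. The sketch below is illustrative only: the labor rate, the labor ratio, and the 2,080-hour annual FTE conversion are assumed values for the example, not AT&L's actual parameters.

```python
# Illustrative sketch of the contractor FTE estimating formula: invoiced
# dollars -> estimated direct labor dollars -> labor hours -> FTEs.
# All parameter values here are assumptions, not AT&L's actual figures.

HOURS_PER_FTE = 2080  # assumed annual direct labor hours per FTE

def estimate_contractor_ftes(invoiced_dollars, avg_labor_rate,
                             direct_labor_ratio):
    """Estimate contractor FTEs for one contract.

    invoiced_dollars   -- total invoiced amount on the contract
    avg_labor_rate     -- average direct labor rate ($/hour) for the
                          contract's service category
    direct_labor_ratio -- average share of total obligations going to
                          direct labor for that category
    """
    direct_labor_dollars = invoiced_dollars * direct_labor_ratio
    direct_labor_hours = direct_labor_dollars / avg_labor_rate
    return direct_labor_hours / HOURS_PER_FTE

# Example with invented parameters: a $5 million contract in a category
# where 60 percent of obligations are direct labor at $75/hour.
print(round(estimate_contractor_ftes(5_000_000, 75.0, 0.60), 1))  # → 19.2
```

The variation the report notes for specific service categories follows directly from this structure: the estimate is linear in both the assumed labor ratio and the inverse of the assumed labor rate, so categories whose true rates or ratios differ from the Army-derived averages produce proportionally skewed FTE counts.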
To conduct our work, we interviewed officials from the following offices:

Office of the Secretary of Defense: Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (AT&L), Office of Defense Procurement and Acquisition Policy (DPAP); Office of the Under Secretary of Defense for Personnel and Readiness, Requirements and Program and Budget Coordination Directorate; Office of Cost Assessment and Program Evaluation.

Department of the Army: Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs, Office of Force Management, Manpower and Resources; Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology, Deputy Assistant Secretary of the Army for Procurement; Office of the Deputy Chief of Staff for Personnel; Office of the Deputy Chief of Staff for Programs, Program Analysis; Office of the Assistant Secretary of the Army, Financial Management and Comptroller, Deputy Assistant Secretary of the Army for Budget, Formulation Division; Office of the Deputy Chief of Staff for Operations, Army Force Accounting and Documentation Division; Installation Management Command, Resource Management Directorate; Training and Doctrine Command, Resource Management.

Department of the Navy: Office of the Assistant Secretary of the Navy for Research, Development, and Acquisition, Deputy Assistant Secretary of the Navy for Acquisition and Logistics Management; Office of the Assistant Secretary of the Navy for Manpower and Reserve Affairs, Office of Civilian Human Resources; Office of the Assistant Secretary of the Navy, Financial Management and Comptroller, Office of Budget; Office of the Chief of Naval Operations, Deputy Chief of Naval Operations for Manpower, Personnel, Education and Training, Strategic Resourcing Branch; Naval Air Systems Command, Analysis and Planning Office, Office of the Deputy Assistant Commander for Contracts and Office of Command Strategic Force, Planning and Management.

Department of the Air Force: Secretary of the Air Force, Office of Acquisition Integration; Secretary of the Air Force, Program Executive Office for Combat Directorate of Manpower, Organization and Resources, Strategic Assistant Secretary of the Air Force for Financial Management and Comptroller, Deputy Assistant Secretary for Budget, Directorate of Budget Operations; Air Force Materiel Command, Strategic Plans and Programs Business Integration Office, Office of Manpower and Personnel; Office of Acquisition Policy and Compliance; Business Operations Directorate, Operations Department.

Defense Information Systems Agency: Procurement Directorate, Policy, Quality Assurance and Procedures; Manpower, Personnel, and Security Directorate, Manpower and Personnel Systems Support Division.

U.S. Special Operations Command: Directorate of Procurement, Procurement Management Division; Assessment and Manpower Validation Branch.

We conducted this performance audit from May 2010 through November 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, Timothy J. DiNapoli, Assistant Director; Celina Davidson; Morgan Delaney Ramaker; Kathryn Edelman; Meriem Hodge; Julia Kennon; John Krump; Jean McSween; Kenneth Patton; Roxanna Sun; Grant Sutton; Jeff Tessin; and Rebecca Wilson made key contributions to this report.

The Department of Defense (DOD) relies on contractors to perform myriad functions, which can offer benefits and flexibility for DOD.
GAO's work has shown that reliance on contractors to support core missions, however, can place the government at risk of transferring government responsibilities to contractors. In April 2009, the Secretary of Defense announced his intent to reduce the department's reliance on contractors. In 2008, Congress required DOD to compile and review an annual inventory of the number of contractor employees working under service contracts and the functions and activities they performed. The fiscal year 2010 National Defense Authorization Act directed GAO to report annually on these inventories. GAO assessed (1) the approaches used to compile the fiscal year 2009 inventories and how the approaches have changed, and (2) how the inventories have been reviewed and used to inform workforce decisions. GAO reviewed guidance; compared the approaches used to develop the fiscal year 2008 and 2009 inventories; and interviewed acquisition and manpower officials from DOD, the military departments, and selected defense components. DOD implemented a more uniform approach to compile its fiscal year 2009 inventories to reduce inconsistencies that resulted from DOD components using different approaches in fiscal year 2008. To do so, in May 2010 the Under Secretary of Defense for Acquisition, Technology and Logistics (AT&L) issued guidance to the Navy, Air Force, and other components that specified the categories of services to be included in the inventories; instructed them to use the Federal Procurement Data System-Next Generation (FPDS-NG) as the basis for most of the inventory data requirements; and provided a formula to estimate the number of contractor full-time equivalent personnel working under those contracts. This guidance also authorized the Army to continue to use its existing process, which incorporates contractor-reported data, including direct labor hours, from its Contractor Manpower Reporting Application. 
The changes in DOD's approach, in particular how DOD reflected research and development services and the use of a new formula for estimating contractor personnel for the Air Force and Navy, as well as better reporting by the Army, affected the reported fiscal year 2009 inventory data. Collectively, these changes make comparing the fiscal year 2008 and 2009 inventory data problematic. DOD officials acknowledged several continuing limitations associated with the fiscal year 2009 inventories, including the inability of FPDS-NG to provide information for all of the required data elements, and concerns about AT&L's estimating approach. AT&L's May 2010 guidance indicated that it planned to move towards collecting manpower data from contractors and indicated AT&L would work with the Office of the Under Secretary of Defense for Personnel and Readiness and other organizations to issue preliminary guidance and a proposed plan of action by August 2010. However, DOD has not yet done so. The military departments differ both in their approaches to reviewing the activities performed by contractors and the extent to which they have used the inventories to inform workforce decisions. The Army has implemented a centralized approach to identify and assess the functions being performed by contractors and has used such assessments to inform workforce decisions, including those related to identifying functions being performed by contractors that could be converted to performance by DOD civilian personnel. In contrast, the Air Force and Navy have implemented decentralized approaches that rely on major commands to review their contracted activities and report the results back to their respective headquarters. The Air Force implemented its initial review but experienced challenges, including that it did not obtain adequate information, that will likely cause its approach to evolve in the future. 
The Navy issued guidance on completing reviews to its commands in September 2010, but the results of the reviews had not been reported as of November 2010. Additionally, Air Force and Navy officials said that to date they have made limited use of the inventories to help inform their workforce decisions. GAO recommends DOD develop and issue a plan of action to collect manpower data and, in the interim, improve its estimating approach. DOD concurred with the recommendations.
The Navy accounts for a very significant share of total DOD operations. It accounted for 31 percent, or $78 billion, of DOD’s fiscal year 1994 gross budget authority; controls about 50 percent, or a reported half trillion dollars, of DOD’s assets, including 540 ships and over 5,200 aircraft; and employs over one million civilian and military personnel. In addition, the Navy encompasses Marine Corps operations, which in fiscal year 1994 had about $9 billion in gross budget authority, or about 11 percent of the Navy’s gross budget authority that year. The Navy also operates certain Defense Business Operations Fund (DBOF) activities, which in fiscal year 1994 had $24 billion in reported revenue and were larger than either the Air Force’s or the Army’s DBOF activities. DOD, and especially the Navy, have acknowledged serious and long-standing financial management and reporting problems. Because of these problems, in February 1995, GAO designated DOD’s financial management as a high-risk area especially vulnerable to waste, fraud, and mismanagement. Several organizations are integrally involved in carrying out the Navy’s financial management and reporting: (1) the Office of the Navy’s Assistant Secretary for Financial Management and Comptroller, which has overall financial responsibility, (2) the Defense Finance and Accounting Service (DFAS), which reports to the DOD Comptroller and provides accounting and disbursing services, and (3) Navy components that initiate and authorize financial transactions. The DFAS Cleveland Center is primarily responsible for preparing the Navy’s financial reports from data generated by accounting, financial management, and other management information systems operated by DFAS, the Navy, and the Marine Corps. The Chief Financial Officers (CFO) Act requires DOD and the other “CFO Act” agencies to improve their financial management and reporting operations.
Among its specific requirements is that each agency CFO develop an integrated agency accounting and financial management system, including financial reporting and internal controls. Such systems are to comply with applicable principles and standards and provide for complete, reliable, consistent, and timely information that is responsive to the agency’s financial information needs. To help strengthen financial management, the CFO Act also requires that DOD prepare financial statements for its trust funds, revolving funds, and commercial activities, including those of the Navy. To test whether agencywide audited financial statements would yield additional benefits, the CFO Act also established a 3-year pilot program for the Army, the Air Force, and eight other “CFO Act” agencies or components of agencies. In response to experiences gained under the CFO Act, the Congress concluded that agencywide financial statements contribute to cost- effective improvements in government operations. Accordingly, when the Congress passed the Government Management Reform Act of 1994 (GMRA) (Public Law 103-356), it expanded the CFO Act’s requirement for audited financial statements by requiring that all 24 “CFO Act” agencies, including DOD, annually prepare and have audited agencywide financial statements, beginning with those for fiscal year 1996. GMRA also authorizes the Director of the Office of Management and Budget (OMB) to identify component organizations of the 24 “CFO Act” agencies that will also be required to prepare financial statements for their operations and have them audited. Consistent with GMRA’s legislative history, OMB has indicated that it will identify the military services as DOD components required to prepare financial statements and have them audited. Therefore, fiscal year 1996 is the first year for which the Navy will be required to prepare a full set of financial statements for its general funds. 
To an even greater extent than the other military services, the Navy is plagued by troublesome financial management problems involving billions of dollars. These problems include (1) internal control breakdowns over disbursements, (2) actual and potential violations of the Anti-Deficiency Act, and (3) widely inconsistent financial reporting on the operating results of the Navy’s DBOF activities. The Navy’s serious and widespread financial management problems have been highlighted in audit reports and embarrassing fraud cases, and have severely impeded the Navy’s effective financial management. The following are examples of these problems. In 1989, we reported to the Secretary of the Navy that the Navy’s consolidated financial reports for fiscal year 1986 were unreliable and understated assets by $58 billion. In 1994, we reported a $163 billion discrepancy between the value of property, plant, and equipment that the Navy reported for fiscal year 1993 and the amounts shown in the supporting information various Navy commands submitted to DFAS. In its fiscal year 1994 Federal Managers’ Financial Integrity Act (FMFIA) (Public Law 97-255) report to the Secretary of Defense, the Navy reported that none of the 28 operating accounting systems it evaluated complied with appropriate accounting standards and related requirements. Between 1989 and 1992, a former Military Sealift Command supply officer established a fictitious company, submitted over 100 bogus invoices, and received an estimated $3 million in fraudulent payments. With regard to internal control breakdowns over disbursements, over 2 years ago, we reported that the Navy had a severe and persistent problem with unmatched disbursements, which, in December 1992, amounted to about $13.6 billion. As of August 31, 1995, the Navy’s unmatched disbursements and other problem disbursements totaled $18.6 billion, by far the most of any DOD component, with over 67 percent of the DOD total, as shown in table 1.
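As a back-of-the-envelope check on the figures just cited (a sketch only; table 1 itself is not reproduced here), the Navy’s $18.6 billion and its roughly 67 percent share imply a DOD-wide total of about $27.8 billion in problem disbursements:

```python
# Back-of-the-envelope check on the problem-disbursement figures cited above.
navy_problem_disbursements = 18.6  # $ billions, as of August 31, 1995
navy_share_of_dod_total = 0.67     # "over 67 percent" of the DOD total

# Implied DOD-wide total (rounded to one decimal place); because the Navy's
# share is "over" 67 percent, the actual total would be somewhat lower.
implied_dod_total = round(navy_problem_disbursements / navy_share_of_dod_total, 1)
print(implied_dod_total)  # 27.8 ($ billions, approximate)
```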
Particularly troubling, the Navy continues to have difficulty in solving its problem disbursements. For example, from October 31, 1994, through June 30, 1995, the Navy and DFAS resolved about $7.6 billion in unmatched disbursements, which is significant. This reduction was, however, largely offset by $6.7 billion in new unmatched disbursements. Also, problems in keeping records on Navy disbursements have distorted governmentwide financial reporting. DFAS, Cleveland Center, incorrectly recorded billions of dollars of fiscal year 1995 Navy disbursements to a nonbudgetary deposit fund account. According to Department of the Treasury officials, this error resulted in the Treasury understating by at least $4 billion the federal government’s overall budget deficit reported as of June 30, 1995. Thus, maintaining accurate financial records and producing reliable financial information on the Navy’s operations has significant ramifications for the government as a whole. With respect to actual and potential violations of the Anti-Deficiency Act, for fiscal years 1993 and 1994, and through the first 10 months of fiscal year 1995, the Navy investigated 25 cases of potential Anti-Deficiency Act violations involving about $166 million. Of these, 18 cases have been closed with the following results. For 15 cases, involving about $87 million, DOD reported to the Congress that the Navy had violated the Anti-Deficiency Act. In 11 of these cases, the violations were due to misclassifications between appropriations, and 4 cases represented overexpenditures of obligational authority. These violations resulted in disciplinary actions against 58 people. These actions included 1 removal from office, 2 suspensions, 3 letters of punitive reprimand, 20 letters of nonpunitive reprimand, and various other admonishments.
In 3 cases, involving about $63 million, investigators found no violations of the act but discovered that accounting errors primarily caused what initially appeared to have been violations. Navy DBOF activities are expected to operate in a businesslike manner with the objective of breaking even. However, in June 1994, we reported that, given the magnitude of the differences reported for DBOF’s operating results, it is difficult for Navy and DOD managers to know the Navy DBOF activities’ actual operating results. Nevertheless, the Navy has continued to report misleading DBOF financial information. For fiscal year 1994, the Navy reported (1) a loss of $120 million in the fund’s budget overview, (2) a cumulative loss of $3.2 billion when the fund’s monthly reports for the fiscal year were totaled, and (3) income of $574 million on the fund’s year-end financial statements. Thus, it cannot be determined whether, in fiscal year 1994, the Navy’s DBOF activities operated at a gain or a loss, or whether they broke even as intended. In addition to these wide fluctuations, comparison of the reported results of the Navy DBOF activities between fiscal years also shows readily apparent inconsistencies. For instance, the Navy’s DBOF financial statements for fiscal year 1992 showed a $2.7 billion operating loss, whereas the fiscal year 1993 statements showed operating income of $2.5 billion. The fiscal year 1994 statements showed operating income of $574 million. These extreme fluctuations in annual operating results raise questions regarding the effectiveness of fund management and the accuracy of reported amounts. In addition, the Naval Audit Service has been unable to express opinions on the Navy’s consolidated DBOF activities’ financial statements prepared under the CFO Act. The Service found extensive problems, including that the reported costs of property, plant, and equipment, and related depreciation, were not adequately supported and that account balances were materially misstated.
Also, DFAS acknowledged in its 1994 FMFIA Statement of Assurance that its Navy-related accounting systems did not provide adequate general ledger control. As a result, the DOD Inspector General was unable to audit DOD’s consolidated DBOF financial statements, citing significant deficiencies in accounting systems and the Navy’s inability to submit timely and accurate statements for audit of its DBOF activities. DOD has initiatives underway that could help address the fundamental weaknesses we found that impede effective financial management and reporting for the Navy. Specifically, the June 1995 DFAS Business Plan includes actions intended to achieve the finance and accounting improvement goals laid out in Secretary Perry’s blueprint for financial management reform. For example, the DFAS Business Plan includes 5 actions to address DOD’s problem disbursements, 19 actions to improve compliance with the Anti-Deficiency Act, and 6 actions to improve the systems supporting DOD’s DBOF operations. For fiscal year 1994, the Navy’s consolidated financial reports showed $506 billion in assets, $7 billion in liabilities, and $87 billion in operating expenses. However, each of these amounts was substantially misstated. Overall, we identified at least $225 billion of errors in the Navy’s fiscal year 1994 consolidated financial reports. As a result, these reports were unreliable and misleading and, thus, of no use to the Congress and to DOD and Navy managers. Furthermore, the reports were, in part, prepared from budgetary data that also contained questionable and abnormal balances, such as negative unliquidated obligations. The Navy’s financial reports were submitted to the Treasury, which used data from the reports to prepare consolidated financial reports for the federal government. Therefore, the significant errors and problems we identified in the Navy’s financial reports also affect the reliability of the overall government financial reports.
We have discussed with DOD, Navy, and DFAS officials, and provided to them, our workpapers documenting the errors we identified in the Navy’s reports. Nonetheless, because of the Navy’s and DFAS’s inadequate financial records, we cannot be sure that we identified all significant mistakes. Our analysis showed that the Navy’s fiscal year 1994 consolidated financial reports were riddled with billions of dollars in omissions, errors, and misclassifications. The effects of these misstatements on the Navy’s fiscal year 1994 consolidated Reports on Financial Position and Operations are summarized in table 2. Specifically, the Navy’s fiscal year 1994 consolidated financial reports did not depict its true financial status and operating results because of the following. First, $66 billion in material omissions, including $31 billion in ammunition held worldwide; $14 billion in supply inventories at air stations, supply centers, other shore activities, and on vessels; and $7 billion in unfunded liabilities for projected environmental cleanup costs for which estimated costs are available. Second, $43 billion in errors, including $32 billion in assets, such as structures and facilities and government-furnished and contractor-acquired property, that were reported twice; $9 billion of understated revenues due to an erroneous calculation; and $2 billion in property that was, in fact, DBOF assets and, thus, should not have been reported in the Navy’s consolidated financial reports. Third, $116 billion in misclassifications, including $72 billion in accrued expenditures reported as revenue, $28 billion in capital expenditures reported as operating expenses, and $12 billion in ammunition reported as military equipment. Moreover, we found that the Navy’s financial reports did not include billions of dollars invested in building aircraft and missiles and modernizing weapons systems.
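The three category subtotals reported for the fiscal year 1994 statements tie out to the $225 billion total cited earlier; a minimal arithmetic check:

```python
# Tie-out of the $225 billion in identified errors against the three
# misstatement categories reported for the Navy's fiscal year 1994
# consolidated financial reports (all figures in $ billions).
omissions = 66            # material omissions (ammunition, inventories, liabilities)
errors = 43               # errors (double-counted assets, misstated revenues, DBOF property)
misclassifications = 116  # misclassified amounts (expenditures reported as revenue, etc.)

total = omissions + errors + misclassifications
print(total)  # 225
```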
Also, while the Navy reported $26.4 billion for ships under construction as of September 30, 1994, it did not include outfitting and post-delivery costs, costs related to Military Sealift Command vessel construction, and components for future construction. The Navy did not have sufficient data from which we could determine amounts for these items. In commenting on a draft of this report, DOD agreed that misclassifications and errors were made in the Navy’s fiscal year 1994 financial reports; however, DOD stated that it could not concur with the specifics of the finding regarding the errors until it completes further research. In addition, the Navy’s consolidated financial reports did not disclose the government’s contingent liability for potentially large losses likely to occur but for which reasonable cost estimates could not be made at the time the reports were prepared. Disclosing that these contingent liabilities exist, although they cannot be quantified at present, is significant because they could ultimately cost the government billions of dollars. For example, the Navy’s fiscal year 1994 consolidated financial reports did not describe contingent liabilities for the future costs to the government of cleaning up the environment at Navy sites, for which amounts were not estimable, and the Navy’s share ($643 million) of DOD’s $2 billion liability for pollution prevention activities, which covers fiscal years 1995 through 1999; indemnifying contractors under contracts for procurement of nuclear-powered vessels, missiles, and components, and disposal of low-level nuclear waste; and decommissioning ships, including the disposal of nuclear propulsion plants, and closing dozens of naval bases and air stations. We found that the Navy’s fiscal year 1994 consolidated financial reports did not disclose obligation and disbursement problems.
First, part of the $66 billion in material omissions previously discussed resulted because the Navy did not disclose an estimated $888 million that will eventually be required to pay currently undelivered orders and unpaid obligations associated with appropriations that were canceled as of September 30, 1994. Second, the Navy did not report its billions of dollars of problem disbursements as of September 30, 1994. The Navy’s financial systems, for the most part, do not distinguish between disbursements made for operating expenses and for capital expenditures and, thus, the amounts for these items were improperly reported. DFAS, Cleveland Center, incorrectly (1) used the total obligations incurred for all appropriations to report the Navy’s operating expenses for fiscal year 1994 and (2) reported no amount for capital expenditures. Transaction codes (specifically, object class codes), which are fundamental for properly classifying disbursements, could be used to distinguish between, and, thus, properly report, disbursements for operating expenses and capital expenditures. However, the Navy and DFAS do not require the consistent use of object class codes when recording disbursements for these purposes. OMB has recognized the importance of object class information and encourages its use for financial statement presentation under the CFO Act. In this regard, we extracted approximately 174,000 disbursement transactions totaling about $7.3 billion recorded in the Navy’s Standard Accounting and Reporting System from July through September 1994. Sixty-eight percent of these transactions, representing about $6.4 billion, did not contain object class codes. Also, we identified 2.8 million transactions processed through the Navy’s Centralized Expenditure/Reimbursement Processing System for May and June 1994. We found that 2.2 million of the transactions (78 percent) were processed without object class codes. 
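The kind of gap analysis described above can be sketched as a simple pass over disbursement records that counts transactions, and their dollar value, recorded without an object class code. The record layout and sample data here are hypothetical, not drawn from the Navy’s systems:

```python
# Hypothetical sketch of an object-class-code gap analysis: flag disbursement
# transactions recorded without an object class code and total their value.
transactions = [
    # (transaction_id, amount_in_dollars, object_class_code or None)
    ("T001", 125_000.00, "25.2"),
    ("T002",  48_500.00, None),   # no object class code recorded
    ("T003", 310_000.00, "31.0"),
    ("T004",  72_250.00, None),   # no object class code recorded
]

missing = [t for t in transactions if t[2] is None]
missing_count_pct = 100 * len(missing) / len(transactions)
missing_value = sum(amount for _, amount, _ in missing)

print(f"{missing_count_pct:.0f}% of transactions lack codes; "
      f"${missing_value:,.2f} affected")
```

In practice such a pass would run against the full transaction extract; the report’s own figures (68 percent of 174,000 transactions, about $6.4 billion) were derived this way in substance, though the systems and formats involved are not reproduced here.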
A Navy finance official told us that Navy and DFAS activities are required to use expense element codes to record transactions for the operation and maintenance and the research, development, test, and evaluation appropriations, and that, in his opinion, DFAS, Cleveland Center, should be able to generate expense data, at least for these appropriations, using these codes. However, as with object class codes, our analysis of about 630,000 disbursement transactions for 2 months of fiscal year 1995 for the two appropriations showed that expense element codes were not consistently used. Of the transactions we analyzed, about 454,000 either (1) lacked expense element codes or (2) contained invalid codes. In the absence of object class and expense element code data, we believe that information from which to more accurately report these two types of disbursements could have been derived from the Navy’s budget execution reports as of September 30, 1994. Using these reports, we estimated operating expenses and capital expenditures to be $61 billion and $28 billion, respectively, for fiscal year 1994. As a result, we estimated that the $87 billion that the Navy reported as operating expenses was overstated by $26 billion, or almost 30 percent. A root cause of the Navy’s financial reporting deficiencies is the lack of basic internal controls and well-disciplined financial operations. Effective financial management requires strong systems of internal control to help ensure the integrity and reliability of financial information, safeguard assets, and promote conformity with accounting requirements and operating procedures. However, we found that the Navy and DFAS used financial control practices that were fundamentally deficient. Reconciliations are a primary control practice to detect differences between summary and detailed records and accounts.
When independently derived records do not agree, managers are to investigate the causes, resolve discrepancies, and make appropriate adjustments. Thus, periodic reconciliations are a first-line defense to detect potential problems, such as the loss or theft of assets. However, we found that the Navy and DFAS did not routinely perform quarterly reconciliations between (1) the Navy’s official accounting records at DFAS’s Defense Accounting Offices (DAO) and (2) custodial property records at Navy activities, as required by the Navy Comptroller Manual. We found, for instance, that the Navy’s official accounting records at DAO-Arlington had not been reconciled with any of the Navy’s custodial property records for at least 18 months. We found unresolved differences of at least $21 million. The periodic review and analysis of financial information generated by an accounting system is a basic control technique to maintain the integrity of the information by helping to ensure that errors have not occurred. Typically, this control technique would entail processes such as (1) reviewing financial reports to detect unusual information or account balances and (2) analyzing account balance trends between reporting periods. When abnormal account balances or unexpected trends occur, their cause should be investigated and any necessary corrections made. When an agency’s records or reports show abnormal information or account balances, that is a strong indication that errors have occurred in recording or processing the underlying transactions. In this respect, for example, the Navy’s fiscal year 1994 consolidated financial reports showed an operating loss of $9.1 billion. This followed reported losses of $12 billion for fiscal year 1993 and $7.1 billion for fiscal year 1992. Taken at face value, the magnitude of these losses should have alerted the Navy that it may have overspent its appropriations during these 3 fiscal years.
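The quarterly reconciliation control discussed above amounts to comparing two independently maintained sets of balances, in this case official accounting records against custodial property records, and flagging every difference for investigation. A minimal sketch, with account names and amounts that are purely illustrative:

```python
# Minimal sketch of a records reconciliation: compare official accounting
# balances against custodial property records and flag any differences for
# investigation. Account names and amounts are illustrative, not Navy data.
official_records = {"Structures": 4_200_000, "Equipment": 1_750_000, "Vehicles": 610_000}
custodial_records = {"Structures": 4_200_000, "Equipment": 1_690_000, "Vehicles": 625_000}

unresolved = {}
for account in sorted(set(official_records) | set(custodial_records)):
    diff = official_records.get(account, 0) - custodial_records.get(account, 0)
    if diff != 0:
        unresolved[account] = diff  # positive: official exceeds custodial

for account, diff in unresolved.items():
    print(f"{account}: unresolved difference of ${diff:+,}")
```

Each flagged difference would then be investigated, resolved, and adjusted, which is precisely the follow-through the report found missing at DAO-Arlington.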
Also, the financial reports DFAS used to prepare and support the Navy’s consolidated financial reports for fiscal year 1994 showed various abnormal account balances, such as the following. The military construction appropriation report showed a negative accounts receivable balance of $95 million. The ship procurement appropriation report showed a negative accounts receivable balance of $13 million. Another procurement appropriation report showed an account balance for uncollectible receivables of $88 million, which exceeded the reported value of receivables by $30 million. Although the information and account balances in each of these cases were highly unusual and unlikely to be correct, the Navy and DFAS did not investigate and correct them. Also, we found unusual trends and large variances in account balances that were not investigated, explained, or resolved, even though the Navy’s regulations require that they be. For example, the Navy’s September 30, 1994, consolidated financial reports showed the value of structures and facilities to be $62 billion, or more than double the $29 billion reported a year earlier. A cursory review of these reports would have identified this unreasonable upward fluctuation. Once identified, the underlying cause, which in this case was double counting, could have been readily identified and the financial reports corrected. Specifically, this double counting occurred because DFAS personnel inadvertently included in the worksheets used to prepare the Navy’s fiscal year 1994 consolidated financial reports the same structures and facilities data from two sources: the Naval Facilities Engineering Command, which maintains Navywide real estate and facilities data, and individual Navy accounting offices. In its comments on a draft of this report, DOD stated that the Navy’s SF-220 series of reports for fiscal year 1994 provided appropriation-level totals but did not provide breakdowns of financial data by command or individual activity.
DOD also stated that since these financial reports were prepared only at the total appropriation levels, errors at an activity or command were difficult to discern. Finally, DOD stated that it is improbable that errors at the appropriation level will be found without a breakout by command and activity data. Effective financial systems and internal controls would prevent or detect errors in recording and processing transactions, regardless of the level at which they occurred. Also, it should be recognized that the fiscal year 1994 SF-220 reports which we evaluated were prepared at the overall Navy departmental level, not at the appropriation level as the comments suggest. Contrary to the department’s assertion, most of the errors we identified were not difficult to discern because they dealt with relatively obvious data omissions, double counting, and misclassifications. In most cases, they occurred because information available at Navy commands was not requested by DFAS, Cleveland Center; errors were made in recording information in the correct “line items” of the reports; or information was entered in the reports twice. The Navy Comptroller Manual specifies that (1) DAOs schedule physical inventories of plant property and monitor their completion at Navy activities, (2) activities perform such physical inventories at least once every 3 years and correct their property records for any differences, and (3) activities inform the DAOs when physical inventories have been completed. However, we found that DAO-Arlington and DAO-San Diego, which accounted for $5.2 billion of the Navy plant property reported in fiscal year 1994, did not ensure that Navy activities reporting to them had completed the required physical inventories. The activities did not properly inform the DAOs as to whether the triennial physical inventories had been completed, and the DAOs did not follow up with the activities.
Specifically, as of September 30, 1994, 124 of the 148 Navy activities, or 84 percent, that DAO-Arlington had scheduled for inventories in fiscal years 1993 and 1994 had not reported to the DAO that the inventories had been completed. In February 1995, DAO-Charleston assumed plant property accounting responsibilities for these activities. As of September 30, 1995, DAO officials told us that none of the transferred activities had reported completion of their physical inventories to DAO-Charleston. DAO-San Diego reported plant property amounts for 43 activities as of September 30, 1994. Although the plant property official at the DAO scheduled the physical inventories at these activities, the official did not check to see if the activities were reporting the completion of the physical inventories. Therefore, the DAO had no assurance that these required inventories were being done and the records corrected. We also found that, when inventories were completed, errors were not always identified and corrected. For example, although an Air Maintenance Training Group conducted a physical inventory every 6 months, we found over $46 million of operating inventory items inappropriately included in its plant property records. At a diving unit, our physical inventory of equipment, which was performed shortly after the unit had completed its own inventory, identified over $1 million in errors. We also found $1 million of discrepancies at a Naval Computer and Telecommunications Detachment when its equipment was compared to the DAO-Pensacola property records for the activity. We are now completing our review of other categories of inventory, such as operating materials and supplies, and will report later on the results of that work. There are various sound and necessary reasons for adjusting accounting records, such as to correct errors or to write off bad debts. DOD’s financial management regulations require that adjustments be clearly documented to help ensure that only proper adjustments are made.
Otherwise, adjustments could be used to cover up embezzlements, hide losses, or mask errors. Accordingly, it is essential to establish and enforce internal controls that (1) allow only legitimate, authorized adjustments to be made and (2) require maintenance of documentation that explains their basis and purpose and indicates which official approved them. However, we found that adjustments totaling billions of dollars were routinely made to accounting records and account balances, largely without adequate documentation. From October 1994 through January 1995, over $14 billion of adjustments were processed by DFAS operating locations against the Navy’s financial records. From these transactions, we judgmentally selected 64 adjustments totaling about $1 billion and requested supporting documentation from the applicable DFAS operating locations. These locations provided us supporting documentation for 33, or about half, of these adjustment transactions, valued at $498 million. For the remaining $527 million, no documentation was provided. Supervisory review of staff work and products is a basic internal control to ensure the quality of work processes and financial reports. Without supervisory review and approval, adjustments could be used to circumvent essential internal controls and, thus, hide errors, fraud, or misuse of assets. Nonetheless, in our view, many of the inaccuracies in the Navy’s financial reports discussed in the previous section could have been identified if Navy and DFAS managers had conducted adequate supervisory reviews. Also, many of the adjustments just discussed were not provided to supervisors for their review and approval. By using basic reasoning to assess account and report balances, evaluate changes from previous periods, and compare reported amounts with available documentation, we were able to identify numerous errors, such as the abnormal account balances and unusual trends previously discussed.
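The “basic reasoning” checks described above, flagging abnormal balances and unusual period-to-period swings, lend themselves to simple automated tests. A sketch with illustrative balances (not actual Navy amounts); the 50 percent swing threshold is an assumption for illustration:

```python
# Sketch of two basic review-and-analysis checks: (1) flag abnormal
# (negative) asset balances, and (2) flag large period-to-period swings.
# Balances are illustrative; the 50% swing threshold is an assumption.
prior = {"Accounts receivable": 120.0, "Structures and facilities": 29_000.0}    # $ millions
current = {"Accounts receivable": -95.0, "Structures and facilities": 62_000.0}  # $ millions

flags = []
for account, value in current.items():
    if value < 0:
        flags.append(f"{account}: abnormal negative balance ({value})")
    prev = prior.get(account)
    if prev and abs(value - prev) / abs(prev) > 0.5:  # swing of more than 50%
        flags.append(f"{account}: large swing from {prev} to {value}")

for flag in flags:
    print(flag)
```

Checks of this kind would have caught both the negative receivable balances and the doubling of reported structures and facilities described earlier in this report.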
The discrepancies in the financial reports and records we found were not, however, detected and investigated by either DFAS, Cleveland Center, or the Navy. On September 1, 1995, the DFAS Director requested that the DFAS center directors be personally involved in improving DOD’s financial statements and in preventing a repetition of the reporting errors disclosed by DOD’s CFO Act financial audits. The Director’s guidance noted that many of the errors could have been prevented if proper validation steps had been in place before issuance of the reports. The DFAS Director called for increased emphasis on basic internal control areas by ensuring that adequate documentation is available to support the validity and accuracy of accounting transactions; identifying and recording accounts receivable, accounts payable, collections, and disbursements accurately, consistently, and completely, including reconciliation to supporting subsidiary ledgers; obtaining management approval of accounting adjusting entries; compiling and reporting contingent liabilities; and ensuring that component reports of property, equipment, and inventory are promptly submitted and certified as to accuracy. The Director’s guidance is based on DOD’s lessons learned in preparing financial statements under the CFO Act and having them audited. We believe that the guidance gets to the heart of the Navy’s and DFAS’s financial management problems and outlines control techniques that could have detected or prevented many of the financial reporting problems we identified. The DFAS Director stressed that the guidance must be fully and effectively implemented to prevent all types of reporting deficiencies identified throughout DOD. The Navy and DFAS, Cleveland Center, developed the joint CFO Project Plan to set out the steps necessary to meet requirements for preparing consolidated financial statements for the Navy’s general fund operations for fiscal year 1996.
The plan describes tasks to be completed, such as holding project meetings and visiting DFAS centers; identifies, for each task, the responsible participating organization, other participating organizations, and deliverables, such as plans or summaries; and includes milestones, such as planned and actual start and completion dates. The plan, which had been under development by the Navy and DFAS for approximately 6 months, was approved by the two organizations on October 4, 1995. At the time of its approval, 58 of the 204 tasks that were to have been underway or completed as of that date were behind schedule or had not yet been started. Moreover, given the scope and depth of the Navy’s prior problems, we believe that the plan is not sufficiently detailed to enable the Navy and DFAS to successfully meet the requirements for the preparation of auditable financial statements within the next year. Specifically, the CFO Project Plan does not specify the following. First, the specific offices or positions within the Navy and DFAS that are to be accountable for accomplishing the specific planned actions required to carry out the identified tasks. Instead, the plan identifies only organizational responsibilities for each task. For example, the plan identifies 168 tasks as the responsibility of DFAS, Cleveland Center, but does not designate a specific office or position accountable for completing them. Second, the actions to address previously reported deficiencies. For example, the plan calls for reviewing reports on financial operations as a discrete task with the associated deliverable specified as a summary. However, the plan does not specify the actions to be taken to deal with previously reported deficiencies identified as a result of the reviews. Third, the manner in which the plan will be coordinated with DOD’s requirement to meet governmentwide financial management improvement initiatives. These initiatives include meeting the requirements of the U.S.
Standard General Ledger (SGL), which OMB has required governmentwide for almost a decade. As of September 30, 1994, OMB reported that 34 percent of all executive branch systems had fully implemented the SGL and 18 percent had partially implemented it. Another governmentwide financial management initiative involves the Treasury’s Federal Agencies Centralized Trial-balance System (FACTS), an automated financial reporting system using the SGL. For fiscal year 1994, the Treasury began using FACTS to collect agency standard general ledger account balances for use in producing the government’s consolidated financial statements. The Treasury gave three DOD organizations and one other executive branch agency waivers from meeting this reporting requirement for fiscal year 1994. In its comments on a draft of this report, DOD did not concur with this last finding and stated that “task 10” of the Navy/DFAS CFO Project Plan provides for coordination with the DOD FACTS effort. The cited task simply reads “Coordinate effort with FACTS effort” without providing any additional specificity. Even though it did not concur with this finding, DOD stated that the ongoing FACTS tasks will be incorporated into the Navy/DFAS plan, which should resolve most of our concerns. The Navy/DFAS plan does not have any tasks specifically addressing the SGL issue. An adequate plan would also encompass strategies to provide (1) enough financial management personnel with adequate financial management expertise and experience in Navy operations and (2) short-term solutions to improve the quality of financial data pending completion of long-term financial systems modernization plans. Navy and DFAS officials have told us on numerous occasions that they do not have enough personnel with the right experience to effectively implement the CFO Act’s requirements.
However, neither the Navy nor DFAS has taken steps to assess the personnel levels, skills, and experience necessary to effectively carry out Navy-related financial management responsibilities and prepare the Navy’s financial reports and statements. In addition, the CFO Project Plan does not address alternatives, such as the use of contractors, for meeting Navy and DFAS financial management personnel resource needs. An official from the Navy’s Office of the Assistant Secretary for Financial Management and Comptroller told us that higher priorities, such as resolving the Navy’s continuing unmatched disbursements problem, have prevented the Navy from dedicating sufficient personnel to its general fund financial reporting. Similarly, the Director of DFAS’s headquarters Financial Statements Directorate stated that insufficient personnel is a primary impediment to preparing reliable financial reports on the Navy’s operations. Regarding personnel resources, we found that DFAS, Cleveland Center’s Departmental Accounting and Analysis Directorate:

- Had 186 authorized staff positions, but as of June 1995, 57 of these positions, or 31 percent, were vacant. Of these vacancies, 13 were at the mid- and senior-level (GS-12 and above). For generally comparable financial reporting responsibilities supporting the Air Force and the Army, DFAS, Denver Center, had 207 authorized staff positions and DFAS, Indianapolis Center, had 212 positions, with vacancy rates of about 15 percent.

- Does not have sufficient personnel experienced in Navy operations. Before 1991, DFAS, Cleveland Center, served as the Navy’s military payroll processing center. In 1991, DOD began transferring responsibility for the Navy’s departmental financial reporting from the Navy’s Office of the Assistant Secretary for Financial Management and Comptroller in Washington, D.C., to DFAS, Cleveland Center. Since then, only 13 personnel experienced in the Navy’s financial operations, and only 3 experienced in Navy financial reporting, transferred to DFAS, Cleveland Center.

- Had 50 mid- and senior-level accountants in the 510 accounting job classification series allocated to the financial reporting area. This is fewer than the 60 staff in these positions at DFAS, Denver Center, and significantly fewer than the 87 at DFAS, Indianapolis Center. As of October 1995, 22 percent of DFAS, Cleveland Center’s 510 mid- and senior-level staff positions were vacant.

- Had 17, or 30 percent, of its 56 mid- and senior-level positions filled with personnel in the 501 accounting-related job classification series, although this series requires no accounting education.

Ensuring that sufficient numbers of personnel with appropriate expertise are assigned financial reporting responsibilities at DFAS, Cleveland Center, is particularly important because of the deficiencies we noted in that center’s financial reporting operations and the substantial effort that will be required to correct them. Consequently, an adequate financial management improvement resource plan would help ensure that the Navy and DFAS, Cleveland Center, have an adequate allocation of personnel with the requisite technical skills to effectively carry out financial reporting responsibilities for the Navy. In its comments on a draft of this report, DOD stated that DFAS, Cleveland Center, had recently received personnel resource authorizations from DFAS headquarters and that 14 accountants and financial analysts recently started work in the center’s CFO area. DOD further stated that 13 more personnel were expected to join the center’s CFO team by the end of February 1996. Although the hirings should logically alleviate some of the personnel shortages, a viable financial management improvement resource plan is still needed to ensure that adequate CFO technical skills are available at the center.
The CFO Project Plan also does not provide short-term strategies for improving existing financial systems’ operations. Overall, systems deficiencies substantially increase the difficulty and time required to develop the Navy’s financial reports. Further, such deficiencies significantly increase the risk of errors, and, without compensating controls, increase the Navy’s and DOD’s exposure to undetected fraud, waste, and mismanagement. Both DOD and Navy officials have forthrightly acknowledged that systems deficiencies severely hamper their ability to effectively carry out accounting and financial reporting for the Navy. For example, in its fiscal year 1994 report pursuant to the Federal Managers’ Financial Integrity Act, DFAS, Cleveland Center, reported that it was unable to prepare complete, reliable, and accurate financial statements because of systems deficiencies. More specifically, DFAS, Cleveland Center, reported that the nonintegrated systems it used for the Navy’s financial reporting were not designed to conform with DOD’s general ledger requirements, did not use the standard data elements needed to ensure consistent definition of accounts, and required considerable manual intervention to summarize and interpret data from subordinate systems. The absence of a fully integrated general ledger system necessitates reliance on labor-intensive, error-prone processes to ascertain whether all required items and accounts are reported in the Navy’s financial reports and statements. Without integrated systems operating under general ledger control, there is no overall discipline to ensure the veracity and completeness for the amounts reported. As a result, for example, the value of perhaps as much as 83 percent of Navy’s assets—primarily property—cannot be derived from the existing financial systems structure. 
To report information on the dollar value of the Navy’s fixed assets, the Navy and DFAS, Cleveland Center, must rely on “data calls” to various Navy commands and other organizations, which use their logistics systems and databases to provide the information. DOD began its Corporate Information Management (CIM) initiative in 1989 with the objective of improving its business processes and information systems. With respect to accounting and finance systems, DFAS’s approach to implementing the CIM concept has been to select and adapt as an interim step the best existing systems for use as “migratory” financial systems to be followed eventually by “target” systems. Most recently, DFAS has set out its strategy for consolidating DOD’s accounting systems as part of the July 1995 DFAS Business Plan. Although the DFAS strategy calls for systems improvements, few, if any, improvements have been made in the systems the Navy or the other military services will use for financial management and reporting. Historically, DOD’s system improvement plans have fallen far short of goals, and its continuing systems problems are a serious challenge that will require a number of years to correct. In the short term, many Navy and DFAS financial management problems can be successfully remedied without developing new systems. In this regard, it is imperative that the Navy and DFAS make concerted efforts now to improve the data produced by their existing systems. Consequently, an adequate CFO Project Plan would address the specific actions that both the Navy and DFAS will take to (1) improve data in existing systems, (2) ensure the use of existing systems’ capabilities to account for transactions by object class or expense element, and (3) follow existing systems’ operating and transaction processing requirements.
It will also be important to have procedures to monitor throughout the year whether rudimentary controls, such as those the DFAS Director called for in September 1995, are being used throughout Navy and DFAS financial operations. In commenting on a draft of this report, DOD stated that the Standard Accounting and Reporting System-Departmental Reporting (STARS-DR) (a system currently under development) has been designated as the “target” system for Navy’s general fund financial reporting. It remains to be demonstrated whether STARS-DR, once developed and implemented, will adequately serve as the Navy’s overall financial reporting system. We would also note that many of the problems we identified resulted from Navy and/or DFAS personnel not following established procedures, a condition that would detrimentally affect data in even the most well-designed and implemented systems. In the past, DOD has not clearly defined or strictly enforced accountability between the Navy and DFAS for the Navy’s financial management and reporting operations and for meeting the CFO Act’s requirements. On November 15, 1995, the DOD Comptroller issued a departmentwide policy, “Roles and Responsibilities of the DOD Component and the Defense Finance and Accounting Service Relative to Finance and Accounting Operations and Departmental Reports.” The policy, for example, requires DFAS to perform quality control reviews of the financial reports and statements it prepares; furnish these documents to its “customers” for review and concurrence before release; obtain preapproval from “customers” for any prior period adjustments to their financial reports that exceed established thresholds; adequately and properly document all adjustments, including appropriate documentation to support the need to correct an error and adjust the affected balances; and report potential violations of the Anti-Deficiency Act to the cognizant military service or other DOD component. 
Similarly, the policy mandates specific responsibilities for data accuracy to the DOD components, such as the military services, for which DFAS prepares financial statements. This policy establishes specific requirements for the components with respect to such things as (1) installing and operating appropriate internal controls to help ensure the accuracy of data provided to DFAS and (2) assessing the quality of information in DFAS-prepared reports prior to their release. If effectively implemented, the policy, along with the DFAS Director’s September 1995 guidance, should help to resolve many of the reporting problems we found involving the Navy and DFAS. However, the policy generally does not impose new requirements, as many of the provisions were already required by DOD regulations prior to the Comptroller’s issuance of the guidance. Further, neither DFAS nor the military services have consistently followed required procedures. We found no evidence that failure to follow established procedures resulted in disciplinary or other adverse actions except in instances also involving violations of laws. Consequently, to make the present arrangement work more effectively, the policy must be expeditiously and fully implemented so that the Navy’s and DFAS’s specific financial management roles and responsibilities are clearly delineated. 
To follow through and determine whether all provisions of the new policy are enforced and effectively implemented, or whether refinements are necessary, it is important for the DOD Comptroller to establish time frames within which to achieve results from the clarified roles and responsibilities, and establish milestones for assessing progress toward financial management improvement; designate specific offices or positions to be held accountable for actions to improve the Navy’s financial management and reports; and discipline managers for failing to improve the Navy’s financial management operations and to meet the CFO Act’s requirements to enhance financial systems. In its comments on a draft of this report, DOD stated that it was concerned that our finding tends to underplay the importance of the DOD Comptroller’s November 15, 1995, “roles and responsibility” document by stating that the document generally does not impose new requirements, as many of the provisions were already required by DOD regulations. DOD further stated that, prior to the Comptroller’s guidance, it was not always clearly stated whether DFAS or DOD components were responsible for specific financial management and reporting requirements. Finally, DOD stated that, due to various accounting and finance consolidations, DFAS’s roles and responsibilities relative to its customers were not formalized and therefore, were not clear to all parties. The need to clarify the respective roles and responsibilities of DFAS and the military services has existed since DFAS began operations in 1991. In August 1992, we first reported that DOD needed to clearly define DFAS’s role and accountability for financial management and reporting. 
While the DOD Comptroller’s November 1995 guidance clarifies the roles and responsibilities of DFAS and the DOD components, it does not greatly change existing financial management requirements, such as properly documenting transactions, accurately and completely processing transactions in a timely manner, and establishing appropriate internal controls. These and many more requirements existed prior to the Comptroller’s guidance. We recognize that the guidance now fulfills the need to more clearly define whether DFAS or DOD components are responsible for implementing the various requirements. The guidance should provide a vehicle to begin holding the appropriate DFAS and military service officials accountable for meeting those requirements. The serious financial management and reporting problems we found place the Navy at significant risk of waste, fraud, and misappropriation and drain resources needed for military readiness. We found widespread financial reporting inaccuracies, involving billions of dollars in erroneous balances covering the spectrum of key accounts. These inaccuracies undermine the credibility of financial reports and information on the Navy’s operations available to the Congress and Navy and DOD managers. Equally disturbing, the Navy’s financial reports mask various problems with data, including abnormal budgetary account balances, used to prepare these reports. Our work showed little tangible progress toward resolving the Navy’s financial management problems. 
The pervasive financial management problems we identified involve both the Navy and DFAS and stem primarily from these organizations not adequately observing basic accounting and control conventions; implementing financial management improvement efforts to achieve accurate reporting; addressing serious financial management staffing shortfalls; using existing systems to their full potential in controlling, managing, and reporting on the Navy’s financial operations; and exercising effective financial management accountability in the current arrangement of shared responsibility between DFAS and the Navy. The Navy and DFAS have had several years to address the pervasive and long-standing problems that hamper the Navy’s financial management operations, and, as the CFO Act requires, to begin readying themselves to prepare reliable financial statements for the Navy for fiscal year 1996. The Navy has not taken advantage of the 5 years since the act’s passage, or the lessons learned from the experiences of its counterparts, the Army and the Air Force, in preparing financial statements. The Navy and DFAS must now “catch up” through measures that will lead to successfully preparing reliable financial statements on the Navy’s operations within the next year or so. The DFAS Director has set the underpinnings for improved financial controls. This groundwork is an important step in finally coming to grips with a long record of neglect, underscored by the lack of accounting discipline and of a perceived value in this function. As a key “CFO Act” agency, it is imperative for DOD to now ensure that the difficulties the Navy and DFAS have experienced in preparing reliable Navy financial reports do not prevent DOD from meeting its statutory responsibility to prepare reliable agencywide financial statements beginning with those for fiscal year 1996. 
We recommend that the DOD Comptroller and the Navy’s Assistant Secretary for Financial Management and Comptroller jointly act to improve the credibility of the Navy’s financial reports and to adequately position the Navy and DFAS to prepare auditable financial statements for the Navy, beginning with those for fiscal year 1996, and periodically report to the Secretary of Defense the status of their results. First, to avoid the mistakes made in preparing the Navy’s fiscal year 1994 consolidated financial reports, the Navy and DFAS should diligently attain the greatest degree of accuracy possible in finalizing the Navy’s fiscal year 1995 consolidated financial reports. This is especially critical because data in these reports will help establish the opening balances for fiscal year 1996. These actions would, at minimum, require that financial statements and reports be compiled in accordance with applicable Treasury, OMB, and DOD requirements; financial information be reviewed thoroughly to determine its reasonableness, accuracy, and completeness; adjustments to account balances and reports be fully documented as to their basis and purpose; and the Assistant Secretary of the Navy for Financial Management and Comptroller certify that financial reports comply with applicable requirements. Second, so that fiscal year 1996 and subsequent financial statements for Navy operations are auditable, the Navy and DFAS should place high priority on implementing basic required financial controls over Navy financial accounts and reports. 
The minimum requirements to carry out this step would include assurance that Navy’s periodic physical inventories of equipment, property, and inventories are taken, the results are reported to DFAS, and any discrepancies are investigated as to cause and resolved; reconciliations of accounts and records are made, significant discrepancies are examined and resolved, and appropriate adjustments are made; transactions are clearly and completely documented and such documentation is retained and readily available to support account balances; and account balances are analyzed and financial reports are reviewed to detect abnormal account balances and unusual fluctuations and trends, any significant variances are researched and are explainable, and any necessary corrections are made. Also, to ensure that these basic internal control requirements are enforced, the Navy and DFAS should develop and implement strategies for monitoring progress throughout the year. Third, the Navy and DFAS should immediately prepare implementing strategies for producing reliable financial statements for the Navy, beginning with those for fiscal year 1996. This plan should, at a minimum, address staffing issues, such as filling financial management vacancies, upgrading the experience of financial managers, and using contractors, as necessary, to improve financial management operations; include short-term measures to improve the data in existing financial systems, follow existing systems operating and transaction processing requirements, and use standard data elements, such as object class codes; incorporate strategies for promptly meeting DOD’s requirement to use the U.S. Standard General Ledger and the Treasury’s Federal Agencies Centralized Trial Balance System; and identify the specific offices or positions accountable for accomplishing the actions established by the strategies and provide a means for monitoring implementation throughout the year.
Finally, given the history of problems in preparing the Navy’s financial reports, we recommend that the DOD Comptroller’s November 15, 1995, policy on roles and responsibilities of DOD components and DFAS be supplemented with strategies to hold organizations and individuals accountable for effectively carrying out their assigned roles and responsibilities, milestones for monitoring implementation progress during the year, and periodic assessments during annual financial reporting cycles to ensure that the roles and responsibilities are continually enforced. In commenting on a draft of this report, DOD generally concurred with our findings and recommendations. However, DOD maintained that both DFAS and the Navy have taken and are continuing to take enormous strides in meeting the requirements of the CFO Act and GMRA. DOD stated that while, ideally, faster progress may be desirable, the significant progress that the department believes it has made since 1990 should be recognized. DOD stated that actions underway to better position it for the future, such as the financial management reform initiatives to improve processes and major reorganizations to reduce resources, should also be recognized. DOD further stated that it would be inaccurate to state that the Navy has made little progress in improving its financial management and reporting since passage of the CFO Act. DOD cited the progress made by the Navy in improving financial reporting for its DBOF activities and trust funds while recognizing that the Navy has not had to previously prepare financial statements for its general fund operations. This report acknowledges that the Navy has not previously been required to prepare financial statements for its general funds and that fiscal year 1996 is the first year for which the Navy will be required to prepare such statements. As a result, we focused our work on the required Treasury reports, not the more extensive financial statements required by the CFO Act, as expanded by the GMRA.
The Navy’s and DFAS’s inability to accurately prepare the less-comprehensive financial reports, and the extent of the problems and deficiencies we identified with those reports, are the focus of this report and raise serious questions regarding Navy’s and DFAS’s commitment and ability to prepare the fiscal year 1996 financial statements, which, for the most part, will be based on the same data sources. We state in our report that DOD has begun departmentwide initiatives that could help address the fundamental weaknesses we found in the Navy’s general fund financial management and reporting. However, our review showed that severe deficiencies, including billions of dollars in problem disbursements, grossly inaccurate and unreliable financial reports, and significant internal control breakdowns, pervade the Navy’s general fund financial operations. As a result, a great deal more progress must be achieved by the Navy and DFAS to meet the requirements of the CFO Act and prepare reliable financial statements by the date stipulated in law. Considering the enormity of the problems and deficiencies to be overcome, the progress made to date by the Navy and DFAS in the Navy’s general funds is relatively small and, in our view, warrants our finding that little progress has been made. DOD fully concurred with 16 of our recommendations and partially concurred with 2 others. First, DOD partially concurred with our recommendation that the Assistant Secretary of the Navy for Financial Management and Comptroller certify that the Navy’s financial reports comply with applicable requirements. DOD stated that the annual Navy financial statements prepared pursuant to the CFO Act are required to be accompanied by a management representation letter signed by the Secretary of the Navy or the Under Secretary of the Navy. In DOD’s view, the management representation letter is the appropriate medium to provide management comments on financial statements.
With respect to our recommendation, we agree that management representation letters are an appropriate medium for certification of financial statements and, therefore, if properly used, should fulfill the intent of our recommendation. The letters should acknowledge management’s responsibility for the fair presentation of information in the accompanying financial statements. However, in instances where management has concerns regarding the viability of its financial statements, management representation letters should be used to highlight and communicate those concerns to the statements’ auditors. Second, DOD partially concurred with our recommendation that the Navy and DFAS identify the specific offices or positions accountable for accomplishing actions established by strategies for preparing the Navy’s financial statements and monitoring progress throughout the year. Although DOD did not fully concur with the recommendation, its intended action—revising the Navy and DFAS CFO Project Plan to indicate participating organizations and responsible elements within those organizations—fulfills the intent of our recommendation. Once the participating organizations and responsible elements are identified, it is important that the Navy and DFAS monitor the progress of those organizations and elements to ensure that planned actions are effectively carried out within established milestones. DOD, for the most part, agreed with our findings in this report although it partially concurred with several findings and disputed the facts in one case. We have evaluated and addressed DOD’s comments to the extent necessary in the appropriate sections of this report. The full text of DOD’s comments is provided in appendix II.
We are sending copies of this report to the Chairmen and the Ranking Minority Members of the Senate and House Committees on Appropriations, Subcommittees on Defense; the Senate Committee on Armed Services and its Subcommittee on Readiness; the Senate Committee on Governmental Affairs; and the House Committee on Government Reform and Oversight as well as its Subcommittee on Government Management, Information, and Technology. We are also sending copies to the Director of the Defense Finance and Accounting Service, the Secretary of the Treasury, and the Director of the Office of Management and Budget. We will make copies available to others upon request. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight within 60 days of the date of this report. You must also send a written statement to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made over 60 days after the date of this report. If you have questions regarding this report, please call Lisa G. Jacobson, Director, Defense Financial Audits, at (202) 512-9095, or Gerald W. Thomas, Assistant Director, Defense Financial Audits, at (202) 512-8841. Our objective was to determine the Navy’s readiness to prepare reliable financial statements for fiscal year 1996. We examined the overall reliability of the Navy’s fiscal year 1994 financial reports, and the adequacy of the processes and controls the Navy and DFAS used to prepare them; the adequacy of the Navy’s and DFAS’s financial management planning, staffing, and systems; and the effectiveness of accountability for ensuring the reliability of the Navy’s financial reporting. 
We examined the Navy’s fiscal year 1994 financial reports (the Treasury “SF-220” series) because (1) the information for these reports was derived from the sources the Navy and DFAS would, for the most part, use to prepare statutorily required financial statements and (2) the reliability of the fiscal year-end account balances used to prepare these reports is integral to the Navy’s accurately establishing the ending account balances for fiscal year-end 1995 and, consequently, beginning balances for fiscal year 1996. Inaccurate beginning account balances would affect the reliability of the Navy’s fiscal year 1996 financial statements. We have not, however, audited the Navy’s fiscal year 1994 financial reports and, therefore, express no opinion on them. To assess the overall reliability of the Navy’s financial reports, we evaluated whether the reported data were logical and presented in accordance with Treasury, DOD, and Navy guidance and requirements; verified the mathematical accuracy of reported amounts; and traced reported amounts to available supporting documentation and reports at DFAS, Cleveland Center. In making our assessment, we considered the Navy’s previously reported financial management problems. We identified these problems from our prior audit reports and those of the DOD Inspector General and the Naval Audit Service and determined whether the problems continued. We also examined DOD and Navy reports of internal control and accounting systems weaknesses based on self-assessments made under the Federal Managers’ Financial Integrity Act of 1982. To examine the adequacy of the Navy and DFAS financial reporting processes and controls, we identified and reviewed pertinent financial management policies and procedures that the Navy and DFAS had in place. We also observed whether these processes and controls were working as the Navy and DFAS intended, and tested selected transactions affecting reported account balances.
We also reviewed applicable Treasury, OMB, and DOD guidance and requirements for reporting financial transactions and preparing financial reports. To determine the adequacy of Navy financial management planning, staffing, and systems, we

- discussed with Navy and DFAS officials current plans and strategies for preparing the Navy’s financial statements for fiscal year 1996. We analyzed available documents relating to these plans and focused on whether they adequately (1) addressed the types of deficiencies we noted in assessing the Navy’s fiscal year 1994 financial reports and (2) supported meeting the statutory time frame for preparing financial statements.

- discussed financial reporting staffing issues with Navy and DFAS, Cleveland Center, officials. We also identified DFAS, Cleveland Center’s financial reporting staff level and experience, and compared them with the financial reporting staff levels and experience of other DFAS centers.

- identified and reviewed previously reported Navy and DFAS financial management systems deficiencies and financial systems modernization plans.

To examine the organizational accountability established to ensure the reliability of the Navy’s financial reporting, we determined the financial management lines of authority and responsibility established by the Navy, DFAS, and DOD. In addition, we identified previously reported DOD problems in these areas, and discussed with DOD and Navy officials the current status of efforts to resolve them. We also obtained and analyzed a proposed new DOD Comptroller policy, Roles and Responsibilities of DFAS and Other DOD Components, and a draft DOD financial management regulation, “Reporting Policies and Procedures.” In a briefing on November 17, 1994, we advised the Assistant Secretary of the Navy for Financial Management and Comptroller and key DOD financial management officials on the preliminary results of our review.
On April 20, 1995, we briefed the Director of the DFAS, Cleveland Center, and senior officials from the Navy Comptroller’s Office. During both meetings, we made suggestions for correcting financial management and reporting problems hindering the Navy’s development of reliable financial statements for future fiscal years. In addition to the adequacy of the Navy’s financial reporting, which is the subject of this report, we are also evaluating certain other aspects of the Navy’s financial management operations. We will report later on these areas. We conducted our work primarily at Navy and DFAS Headquarters in Washington, D.C., and at DFAS, Cleveland Center. Our work was performed from August 1993 through October 1995 in accordance with generally accepted government auditing standards.

The following are GAO’s comments on the Department of Defense letter dated February 9, 1996.

1. We have changed the title of this report to CFO Act Financial Audits: Increased Attention Must Be Given to Preparing Navy’s Financial Reports.

2. The “improvements and progress” listed by DOD represent actions from which envisioned benefits have yet to be achieved. While these actions may lead to improvements in the Navy’s financial management operations, in our view they do not materially affect our finding.

3. OMB, under authority established by the CFO Act, prescribes the form and content of agency financial statements prepared pursuant to that act and GMRA. Therefore, we have modified our recommendation to incorporate the OMB requirements.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

or visit:
Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

GAO reviewed the Navy's fiscal year (FY) 1994 consolidated financial reports, focusing on how the Navy and the Defense Finance and Accounting Service (DFAS) can: (1) improve the credibility of the Navy's FY 1995 financial reports; and (2) enhance their ability to prepare the Navy's FY 1996 annual financial statements. GAO found that: (1) the Navy's FY 1994 consolidated financial reports were not reliable and could not be used to assess the results of the Navy's operations, stewardship over assets, and use of budgetary resources; (2) the unreliability of the Navy financial reports adversely affects the reliability of the government's consolidated financial reports; (3) all aspects of the reports had inaccurate financial information, including omissions or misrecordings of assets and costs and failure to make required disclosures; (4) the Navy's financial reporting deficiencies are due to long-standing internal control weaknesses and the lack of accounting discipline; (5) in September 1995, DFAS directed its centers to pay closer attention to internal control problems, which may help correct the Navy's and other departments' deficiencies; (6) the joint DFAS and Navy Chief Financial Officers (CFO) Project Plan to improve the Navy's financial reporting is inadequate; (7) DFAS and the Navy will have to monitor and enforce efforts to improve financial control procedures and increase emphasis on preparing and executing improvement plans, assessing the
skills, experience, and number of financial personnel needed, and improving financial systems; and (8) the Department of Defense (DOD) Comptroller needs to enforce the November 1995 DOD policy on DFAS and the Navy's financial management roles and responsibilities.
Autism—a complex and pervasive developmental disability—usually becomes evident in early childhood, although signs and symptoms vary. According to CDC, autism begins before age 3 and lasts throughout a person’s life. Some children show signs of autism within the first few months of life. In others, symptoms might not appear until 24 months or later. Still other children with autism seem to develop typically until 18 to 24 months of age and then stop gaining new skills or lose the skills they once had. Signs and symptoms of autism include a child not responding to his or her name by 12 months; not pointing at objects to show interest by 14 months; avoiding eye contact and wanting to be alone; repeating words or phrases over and over; and flapping hands, rocking, or spinning in circles. Individuals with autism might have challenges with showing or talking about their feelings and might also have trouble understanding the feelings of others. Diagnosing autism can be difficult; however, early intervention services can greatly improve a child’s development. There is no medical diagnostic test available for autism. As a result, doctors consider a child’s behavior and development to make a clinical diagnosis. By age 2, a diagnosis by an experienced professional can be considered very reliable. However, according to CDC, most children do not receive a diagnosis until after age 4. There is no single cause of autism, but a variety of factors are suspected of causing or contributing to autism, including environmental, biological, and genetic sources. While there is no known cure, research shows that early intervention services can greatly improve a child’s development. Because of the complexity of this disorder, individuals with autism have diverse needs for medical and mental health care as well as an array of educational and social services.
The CAA authorizes and directs HHS to conduct specific autism-related activities, which may include funding external organizations to conduct these activities through grants, contracts, and cooperative agreements. The CAA amended sections of the Children’s Health Act of 2000—which required HHS to conduct activities related to autism research, surveillance, and coordination—by revising some sections and repealing other sections of that law as well as establishing new requirements. The CAA authorized, but did not appropriate, federal funding to carry out these activities in fiscal year 2007 through fiscal year 2011. HHS agencies responded to the CAA with new or continuing autism activities. In fiscal year 2008, HRSA created the Combating Autism Act Initiative in response to specific directives included in the CAA. Through this initiative, HRSA expanded its existing training programs to include an autism-specific component and established new autism research and state grants. HRSA conducts all of its Combating Autism Act Initiative programs under the authority of the CAA. HRSA staff told us that they have not analyzed whether the agency’s new programs could be conducted under other HRSA authority. HRSA expanded two of its preexisting training programs—the Leadership Education in Neurodevelopmental and Other Related Disabilities (LEND) and the Developmental-Behavioral Pediatrics (DBP) training programs—through supplemental funding to existing grantees and awards to new grantees. These two training programs account for the majority of HRSA spending under its Combating Autism Act Initiative; however, HRSA was funding these programs prior to enactment of the CAA. Under the Combating Autism Act Initiative, LEND and DBP grantees are required to include an autism component in their training.
Among other things, the programs train health care professionals, such as pediatric practitioners, residents, and graduate students, to provide evidence-based services to children with autism and other developmental disabilities and their families; and train specialists to provide comprehensive diagnostic evaluations to address the shortage of professionals who can confirm or rule out an autism diagnosis. According to HRSA, as a result of these training programs, the number of health professionals enrolled in autism courses increased from 1,887 in academic year 2008-2009 to 4,256 in academic year 2010-2011 and the number of diagnostic evaluations increased from 12,390 in academic year 2008-2009 to 44,102 in academic year 2010-2011. Additionally, HRSA created new autism research programs to fund studies that are intended to advance the current autism knowledge base and lead to improvements in interventions that address the health and well-being of children and adolescents with autism and other developmental disabilities. HRSA also provided grants to establish two research networks that focus on the physical and behavioral health needs of children and adolescents with autism. These networks conduct research on evidence-based practices for interventions, promote the development of evidence-based guidelines for intervention, validate tools for autism intervention, and disseminate information to health professionals and the public, especially families affected by autism. HRSA also funded new state implementation and planning grants to implement plans to improve access to comprehensive, coordinated health care and related services for children and youth with autism and other developmental disabilities. Twenty-two states received grants from fiscal years 2008 to 2011 to implement their autism plans.
These plans vary by state, but common elements include a focus on partnerships between professionals and families of children and youth with autism, access to a culturally competent family-centered medical home, access to adequate health insurance and financing of services, early and continuous screening for autism and other developmental disabilities, community services organized for easy use by families, and transition services for youth entering adult health care. Table 1 provides information on the specific autism-related programs HRSA initiated or expanded—by increasing funding and the number of grantees—as a result of the CAA. NIH and CDC continued the autism activities each implemented prior to the enactment of the CAA, but did not create new programs as a direct result of the CAA. Some of these activities had been undertaken in response to the Children’s Health Act of 2000, which, like the CAA, charges NIH with expanding, intensifying, and coordinating research on autism. In addition, under both laws, CDC is required to conduct activities related to establishing regional centers of excellence to collect and analyze certain information on autism. Since the enactment of the CAA, NIH continued to fund, expand, and coordinate autism research through its Autism Centers of Excellence and autism-specific grants and contracts. According to agency officials, NIH awards these grants and contracts under its general Public Health Service Act authorities and not under the specific authorities provided in the CAA. Similarly, CDC continued to fund its regional centers of excellence for autism epidemiology and other activities, such as an awareness campaign on autism and other developmental disabilities. While enactment of the CAA did not result in any change to CDC’s autism activities, CDC officials stated that the CAA provided additional focus on these efforts. According to CDC officials, the CAA’s enactment also strengthened the agency’s Learn the Signs. Act Early.
awareness campaign by elevating the importance of increasing awareness of developmental milestones to national visibility. See appendix I for a list of NIH’s and CDC’s autism efforts. As required by the CAA, the Interagency Autism Coordinating Committee (IACC)—initially established under the Children’s Health Act—restructured its membership and assumed additional responsibilities to coordinate autism efforts within HHS. The CAA reauthorized the IACC and specified that the IACC include both federal and nonfederal members. IACC membership expanded to include 11 nonfederal members that represented individuals with autism and parents of children with autism. In addition, it included members of the autism advocacy, research, and service-provider communities in accordance with the CAA’s membership requirements. The CAA also directed the IACC to develop and annually update a strategic plan and summary of advances in autism research, and monitor federal autism activities. Since fiscal year 2007, the IACC issued several reports as a means to coordinate HHS autism efforts and monitor federal autism activities, some of which were specifically required by the CAA, such as the development of an autism strategic plan and a summary of advances in autism research. See appendix II for a description of the documents produced by the IACC. In addition to the changes to the IACC, in 2008, NIH created the Office of Autism Research Coordination (OARC) within the National Institute of Mental Health (NIMH) to coordinate and manage the IACC and related cross-agency activities, programs, and policies. OARC assists the IACC by conducting analyses and preparing reports for the IACC, assisting with the IACC’s strategic planning and autism research monitoring, and providing logistical support for IACC meetings.
It also supports communications through the IACC website and press releases, and responds to inquiries from the public and other government agencies. OARC officials told us that although HHS could establish an advisory committee similar to the IACC under other authority, the CAA has provided the IACC with greater visibility and increased involvement of the public and federal agencies, through, for example, the annual update of the IACC’s autism strategic plan. While the CAA authorized appropriations for HRSA, NIH, and CDC autism activities, the CAA did not appropriate funds for this purpose. Instead, to fund these activities, HRSA, NIH, and CDC used funds appropriated to the agencies annually through the budget and appropriations process for the purpose of carrying out a variety of programs. For example, NIH supported some of its autism research with funds provided under the American Recovery and Reinvestment Act of 2009. And, according to CDC officials, the agency redirected a portion of its funding for infant health activities to support pilot projects implementing the agency’s awareness campaign on autism and other developmental disabilities. The IACC’s funding increased significantly from fiscal year 2006 to 2011. From fiscal year 2008 through fiscal year 2011, as directed by Congress in the annual HHS appropriations act, the Secretary of Health and Human Services transferred funds to NIMH for the IACC. From fiscal year 2006 through fiscal year 2011, the IACC also received funds from the annual NIH appropriation. See appendix III for information on the funding for these agencies’ and the IACC’s autism-related activities. HRSA, the only HHS agency that awarded grants specifically as a result of the CAA, regularly collects and reviews information from grantees to oversee individual CAA grantees as well as to provide oversight to its CAA programs. HRSA awarded approximately $164 million in grants to 110 CAA grantees from fiscal years 2008 to 2011.
The majority of funding—about $107 million—was awarded to 47 grantees within HRSA’s LEND training program, some of which were already receiving funds prior to the CAA. In addition, nearly $24 million was awarded to two grantees to support HRSA’s two autism intervention research networks. For all grantees, the amount of the grant award per year ranged widely from about $36,000 to $4 million depending on the CAA program, as shown in appendix IV. As part of the agency’s oversight of its CAA grantees, HRSA requires periodic reports from these grantees, which are reviewed by HRSA staff. HRSA project officers within the Maternal and Child Health Bureau—the bureau that administers the CAA programs—are responsible for working with CAA grantees in overseeing the programmatic and technical aspects of the grant. HRSA grants management specialists and their supervisors—grants management officers—oversee compliance with financial reporting requirements and agency grant policies and regulations. The required reports that are reviewed by HRSA staff include the following:

Annual federal financial report. The annual federal financial report is an accounting of expenditures under the project in the budget period—the period for which HRSA has awarded funds to the grantee—and cumulatively for the project period. This report is due after the end of the budget year.

Annual progress reports. The annual progress report is part of a grantee’s noncompeting continuing application and describes grantees’ progress on their grant objectives. Progress reports are due before the end of the budget period because HRSA staff use these reports to assess progress and, except for final progress reports, to determine whether to provide funding for the budget period subsequent to that covered by the report.

Mid-project progress reports. Mid-project progress reports provide information on grantees’ progress on research objectives.
These reports are required of certain research grantees and are due midway through the project period.

Semiannual progress reports. Semiannual progress reports include information on the grantees’ most significant achievements and problems encountered during the reporting period as well as the grantees’ progress on established objectives. These reports are required of research network grantees and are due midway through each budget period.

In addition to reports, HRSA also requires grantees to submit written requests before making certain changes to the grant project, known as prior-approval requests. For example, a change in the director of the grant project requires prior approval, as does a request to carry over unobligated funds to the next budget period or a request for a no-cost extension—an extension for a limited period beyond the end of the project period so that the grantee can complete project activities. When reviewing these reports and grantee prior-approval requests, HRSA staff are required to fill out checklists in HRSA’s Electronic Handbooks (EHB), the agency’s electronic grants management system, in which they indicate their review and approval of the report or request. The content of the review checklists varies by the type of report or request being reviewed. For example, among other questions, a progress report checklist asks if the report reflects the program’s goals. The federal financial report checklist asks HRSA staff to compare the report with data in HRSA’s payment management system. All review checklists include a question where HRSA staff can indicate if they have identified any issues or concerns with the report or request. In addition, when reviewing grantee information, HRSA staff may request that a report be revised with additional or corrected information. Our review found that HRSA routinely collects and reviews information submitted by CAA grantees. Generally, grantees submitted required reports and HRSA staff documented their review of these reports.
Specifically, the 22 grantees in our unbiased random sample submitted all of the 106 reports they were required to submit and most of these reports were submitted on time. We found that HRSA staff filled out checklists approving all of the reports submitted or required report revisions. In many cases, HRSA staff filled out a checkbox indicating a “yes,” “no,” or “n/a” response to the questions. However, we noted that there were some cases where staff provided a narrative description to support their response to the question, such as a description of how the grantee is meeting the program’s goals—a question in the progress report checklist. HRSA officials stated that staff are required to answer the questions in the checklist, but they are not required to provide a narrative supporting their answers. We observed that there were few instances of HRSA staff either documenting a concern or asking for a report revision before approving the report. We encountered seven instances where a project officer approved a report, but documented a concern in a checklist. (The relevant checklist question asks: “Are there any areas of concern: programmatic, budgetary, or other?”) In all these instances, the project officer provided narrative describing the concern. For example, in one instance, the project officer wrote that the grantee’s recruitment of study subjects—parents of children with autism or other developmental disabilities—was slow. However, the project officer also stated that the grantee modified its enrollment process, which seemed to be having some positive effect and that the project officer and grantee were working together to monitor enrollment. We also identified another seven instances where HRSA staff asked for a report to be revised either with additional or corrected information. In almost all instances, the grantees submitted a revised report and HRSA staff completed a checklist indicating approval of the revised report. Grantees in our sample also frequently requested approval to carry over unobligated balances to the next budget period.
For example, 13 of the 22 grantees in our review requested to carry over unobligated balances at least once during the period of our review, and many of them requested it for multiple years— equaling 32 separate requests. The amount of unobligated balances that grantees requested to carry over in a given year ranged from $1,518 to $172,514. In all instances, HRSA approved these requests as indicated by the issuance of a revised notice of award. Almost all requests related to awards in fiscal year 2010 and later contained an associated checklist in the EHB filled out by HRSA staff approving the request to carry over unobligated balances. In addition to reviewing information submitted by grantees, HRSA provides additional oversight to grantees. First, it conducts site visits in person or by means of the web. During a site visit, HRSA staff may collect information on preliminary research findings, data and analysis, and any challenges the grantee is facing. Site visits are only required of certain research grantees, although HRSA may conduct site visits with other grantees, depending on available resources. HRSA officials target site visits for CAA grantees on the basis of six criteria: (1) the grantee is new, (2) there has been a change in the grantee’s project director, (3) there has been a change in the grantee’s scope of work, (4) there are budgetary issues or the grantee has not made adequate progress on the project goals, (5) the grantee has requested technical assistance, or (6) there has been a change in the project officer overseeing the grantee. HRSA uses a site-visit report to document the visit and has guidance on what should be included in the report. For example, for training grants, the report should include a narrative summary of the visit including highlights, performance measure progress, strengths and challenges, and any technical assistance needed by the grantee. 
Second, HRSA officials stated that HRSA project officers provide routine technical assistance to certain grantees and others on an as-needed basis. For example, all research grantees have either a monthly, biweekly, or mid-project telephone call with HRSA project officers. Our review confirmed that HRSA has conducted a number of site visits to monitor CAA grantees. For example, nine of the grantees in our review had documentation indicating that a site visit had been conducted, with only two of these being required site visits. While none of the site-visit reports identified major issues that required corrective action, some did record challenges the grantees were facing or made suggestions. For example, one report stated that the grantee may encounter challenges in the recruitment of trainees. We identified documentation related to technical assistance that HRSA staff provided to some grantees but not all. For example, we did not always see documentation of routine telephone calls with research grantees that HRSA officials say occur on a regular basis. In response, HRSA officials stated that not all technical assistance is recorded in the EHB; only when a significant issue arises is the telephone call, e-mail, or other assistance recorded. Besides overseeing specific grantees, HRSA monitors its CAA activities at the program level by regularly collecting performance reports from grantees. The 22 CAA grantees in our sample submitted all the required performance reports. According to HRSA officials, the primary purpose of performance reports is to gauge program performance. For example, data in performance reports are currently being used by an HRSA contractor to prepare a report on the progress of the CAA programs for Congress. In addition, according to HRSA officials, performance data can be used to modify program performance measures over time.
While performance reports are used to monitor CAA programs—as opposed to grantees—HRSA officials stated that some performance information is also included in annual progress reports, which are used to oversee specific grantees. For example, progress reports require grantees to include information on whether they are having problems meeting their performance measures. Finally, to further help oversee CAA programs and consolidate information on its monitoring approach for these programs, in December 2012 HRSA released a grant-management operations manual to outline its overall approach for monitoring these programs. According to HRSA officials, this manual will be included in the program folder of the EHB for each of its CAA programs and will be reviewed annually, consistent with HHS guidance. We provided a draft of this report to HHS for comment. HHS provided technical comments that we incorporated, as appropriate. We are sending a copy of this report to the Secretary of Health and Human Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

During fiscal year 2006 through fiscal year 2011, NIH and CDC funded a number of autism activities. Table 2 lists the activities these agencies funded, including the type and purpose of each activity.

Appendix II: List of Interagency Autism Coordinating Committee (IACC) Reports

According to the IACC, the Strategic Plan provides a blueprint for autism research that is advisory to the Department of Health and Human Services and serves as a basis for partnerships with other agencies and private organizations involved in autism research and services.
The 2011 Strategic Plan is organized around seven questions asked by individuals with autism and their families (such as “When should I be concerned?”). Each of the seven sections includes a description of what is generally known from autism research for that particular question and what gaps remain, followed by what was learned during the previous year. The report also sets up short- and long-term research objectives based on autism research opportunities. The Combating Autism Act of 2006 (CAA) requires that the Strategic Plan be updated on an annual basis.

The Portfolio Analysis features autism project and funding information for certain federal agencies and private organizations. According to officials within the National Institutes of Health Office of Autism Research Coordination (OARC), the agencies and organizations in these reports have been identified by the IACC and OARC as being involved in autism research and have agreed to participate. According to the IACC, the intent of these analysis reports is to better inform the IACC and interested stakeholders about the funding landscape for a particular year. Additionally, the analysis examines the extent to which a particular year’s funding and research topics align with the IACC’s most recent Strategic Plan. The IACC reports that the Portfolio Analysis may also be used by federal agencies and private research organizations to help guide future funding priorities by outlining current gaps and opportunities in autism research, as well as serving to highlight current activities and research progress. OARC officials told us that they plan to issue the 2011 report in 2013.

Summary of Advances in Autism Spectrum Disorder Research (2007, 2008, 2009, 2010, 2011). Each year the IACC releases its list of scientific advances in autism research. As reported by the IACC, the report highlights studies on autism published in the previous year in peer-reviewed journals and selected by members of the IACC.
The number of studies featured over the years ranges from 20 to 54. The CAA requires that the IACC produce the Summary of Advances annually.

As reported by the IACC, this report describes several key aspects of worldwide autism research publications, which may be used to inform planning and strategic funding decisions for future autism research. Autism-related research articles published between 1980 and 2010 were analyzed to identify historical trends and publication outputs across the seven questions and research areas of the 2011 IACC Strategic Plan. Information found in research publications was also used to assess the institutions conducting autism research, funding organizations supporting the research publications, and the extent of collaboration between authors from different countries and research institutions. Additionally, measures, such as citation counts, were used as an assessment of the impact of the published research. OARC officials told us that there are no plans to update this report annually. In 2008 and 2010, OARC, National Institute of Mental Health, prepared this report on behalf of the IACC. In 2009, OARC, National Institute of Mental Health, and Acclaro Research Solutions, Inc., prepared this report on behalf of the IACC.

HRSA’s totals include autism grant awards, as well as, for example, funding used for HRSA’s personnel expenses, travel, supplies, and overhead related to reviewing these grants. NIH’s totals include funding for research that is conducted outside of NIH’s autism-specific grant announcements. According to NIH officials, much of the autism research funded by NIH is done under general grant announcements soliciting biomedical research. IACC’s totals for fiscal years 2008 through 2011 include funding for the Office of Autism Research Coordination within NIMH.
In fiscal year 2008, certain agencies, including HHS agencies, were subject to an across-the-board rescission. All nondefense discretionary programs were subject to an across-the-board rescission in fiscal year 2011. According to HRSA officials, HRSA spent less on its autism activities in these years as a result of the rescissions. NIH officials told us that the agency reduced funding for research grants as a result of the rescissions, but could not measure the precise effect on autism-related grants. According to CDC officials, CDC spent less on its autism activities in fiscal year 2011 as a result of the rescission in that year. In addition, the IACC received less funding in fiscal years 2008 and 2011 as a result of the rescissions. HRSA’s fiscal year 2006 and 2007 funding represents total funding for its Leadership Education in Neurodevelopmental and Other Related Disabilities and Developmental-Behavioral Pediatrics training programs, through which the agency awarded grants that could have had an autism-specific component; however, an autism-specific component was not a requirement of the grants. Beginning in fiscal year 2008, these training programs were required to have an autism-specific component. In fiscal year 2008, in response to the Combating Autism Act, the Health Resources and Services Administration (HRSA) created the Combating Autism Act Initiative. Under this initiative, HRSA has a number of programs that fund grants specific to autism. This appendix includes a description of the purpose of each program. Tables 3 through 11 list the grants that have been awarded under each program for fiscal years 2008 through 2011. Program: Leadership Education in Neurodevelopmental and Other Related Disabilities (LEND) Training Program. 
The purpose of this program is to improve the health of children who have, or are at risk for developing, neurodevelopmental and other related disabilities by training professionals to assume leadership roles, and to ensure high levels of interdisciplinary clinical competence in an effort to increase diagnosis of or rule out individuals’ developmental disabilities, including autism.

Program: Developmental-Behavioral Pediatrics (DBP) Training Program. The purpose of this program is to train the next generation of leaders in developmental-behavioral pediatrics and to provide pediatric practitioners, residents, and medical students with essential biopsychosocial knowledge and clinical expertise. This program is focused on developmental disabilities, including autism.

Program: National Combating Autism Interdisciplinary Training Resource Center. The purpose of this program is to improve the health of children who have, or are at risk for developing, autism and other developmental disabilities by providing technical assistance to LEND and DBP programs to better train professionals to utilize valid and reliable screening tools for diagnosing or ruling out autism, and provide evidence-based interventions for children.

Program: Autism Intervention Research Program and Autism Intervention Secondary Data Analysis Studies Program. The purpose of this program is to support research on evidence-based practices for interventions to improve the health and well-being of children and adolescents with autism and other developmental disabilities. The Autism Intervention Secondary Data Analysis Studies Program utilizes the analysis of existing secondary data.

Program: Autism Intervention Research Network on Physical Health.
The purpose of this program is to establish and maintain a network infrastructure designed to be the platform from which to conduct research on evidence-based practices for interventions to improve the physical health and well-being of individuals with autism and other developmental disabilities; develop evidence-based guidelines and validate tools for interventions; and disseminate critical information on its research findings, guidelines, and tools. Program: Autism Intervention Research Network on Behavioral Health. The purpose of this program is to establish and maintain a network infrastructure designed to be the platform from which to conduct research on evidence-based interventions to improve the behavioral, mental, social, or cognitive health, or a mix of those, and well-being of children and adolescents with autism and other developmental disabilities; develop evidence-based guidelines and validate tools for interventions; and disseminate critical information on its research findings, guidelines, and tools. Program: Developmental-Behavioral Pediatrics Research Network. The purpose of this program is to establish a multicenter scientific and clinical research network that will promote coordinated research activities and address health issues. The program is intended to build a developmental behavioral pediatric research infrastructure that supports multidisciplinary research, focuses on the translation of research to practice, and provides the environment in which to train a new generation of developmental behavioral pediatric researchers. Program: State Implementation and Planning Grants. The purpose of this program is to improve access to comprehensive, coordinated health care and related services by implementing state plans to improve the system of services. Program: State Public Health Coordinating Center. 
The program purpose is to improve the health of children who have, or are at risk for developing, autism and other developmental disabilities by coordinating with the state demonstration grantees and by developing a strategy for defining, supporting, and monitoring the role of state public health agencies in assuring early and timely identification, diagnosis, and intervention. In addition to the contact named above, Geri Redican-Bigott, Assistant Director; Katherine L. Amoroso; George Bogart; Deirdre Brown; Sandra George; Cathleen Hamann; Kristin Helfer Koester; Drew Long; and Sarah Resavy made key contributions to this report. | CDC considers autism to be an important public health concern. In 2012, CDC reported that an estimated 1 in 88 children in the United States has been identified as having autism—a 23 percent increase from its estimate of 1 in 110 reported in 2009. Autism is a developmental disorder involving communication and social impairment. Symptoms usually become evident in early childhood. There are many suspected causes and no known cure. HHS agencies fund educational and support services for individuals diagnosed with autism and fund research in a variety of areas, such as identifying the causes of autism and intervention options. The CAA amended sections of the Children's Health Act of 2000 related to autism and established new requirements. The CAA, enacted in December 2006, authorized the expansion of HHS's activities related to autism research, surveillance, prevention, intervention, and education through fiscal year 2011. The CAA authorized, but did not appropriate, federal funding to carry out these activities. In this report, GAO (1) describes the actions that HHS agencies have taken as a result of the CAA, and (2) examines the oversight of CAA grantees. To address these objectives, GAO reviewed CAA and HHS documents and interviewed agency officials to identify the autism activities resulting from the CAA.
GAO also determined the amount certain HHS agencies spent on autism activities from fiscal year 2006—prior to the CAA—through fiscal year 2011. In addition, GAO reviewed files for a random sample of CAA grantees to examine oversight from 2008 to 2011. Department of Health and Human Services (HHS) agencies responded to the Combating Autism Act of 2006 (CAA) by establishing some new autism activities and continuing others. The Health Resources and Services Administration (HRSA) created a new initiative to address specific directives in the CAA. Through this initiative, HRSA expanded its existing training programs by requiring grantees to include training specific to autism. It also established new autism research grants and funded new state grants to improve services for children with autism. HRSA awards its autism grants under the authority of the CAA. The National Institutes of Health (NIH) and Centers for Disease Control and Prevention (CDC) continued their autism activities—some of which were undertaken in response to the Children's Health Act of 2000—but did not create new programs as a direct result of the CAA. NIH continued to fund, expand, and coordinate autism research through its Autism Centers of Excellence and autism-specific grants and contracts. CDC continued to fund its regional centers of excellence for autism epidemiology and other activities, such as an awareness campaign. HHS's Interagency Autism Coordinating Committee (IACC)—reauthorized by the CAA—assumed additional responsibilities to coordinate autism efforts within HHS and restructured its membership to include more nonfederal members. NIH created the Office of Autism Research Coordination to coordinate and manage the IACC. The CAA did not appropriate funds to any HHS agency. Nevertheless, overall spending on HRSA, NIH, CDC, and IACC autism activities increased from approximately $143.6 million in fiscal year 2006 to approximately $240.4 million in fiscal year 2011.
HRSA, the only HHS agency that has awarded grants specifically as a result of the CAA, regularly collects and reviews information from grantees to oversee individual CAA grantees and programs. HRSA awarded approximately $164 million in grants to 110 CAA grantees from fiscal years 2008 to 2011, although some of these grantees were already receiving funds prior to the CAA. To oversee these grantees, HRSA requires that they regularly submit progress reports and financial reports. The agency also requires grantees to obtain prior approval before making certain changes to their projects. GAO reviewed documentation for an unbiased random sample of 22 grantees, which were representative of the 110 CAA grantees. GAO found that CAA grantees submitted all required reports. Many grantees submitted prior-approval requests for changes to their projects. Most frequently, grantees requested to carry over unobligated funds from the current year to the next budget period. GAO found that HRSA staff routinely collected and reviewed information submitted by the grantees and appropriately documented their review and approval of these submissions. HRSA also conducted site visits and provided technical assistance as a means of overseeing grantees. HRSA conducted site visits with 9 of the grantees in GAO's sample during the period of review; only 2 of these were required site visits. Besides overseeing grantees, HRSA monitors its overall CAA programs by regularly collecting performance reports from grantees. In addition, in December 2012, HRSA released a grant-management operations manual to outline its overall approach for monitoring its CAA programs. GAO provided a draft of this report to HHS for comment. In response, HHS provided technical comments that were incorporated, as appropriate.
Approximately 88,000 miles in length, the nation's marine coastline is composed of a variety of coastal ecosystem types (see fig. 1). The potential effects of climate change on these ecosystems are complex and often difficult to predict, according to the 2014 National Climate Assessment. For example, climate scientists have indicated high confidence that climate change will increase the frequency and intensity of coastal storms, but the exact location and timing of these events are unknown. Similarly, the effects of sea level rise are expected to vary considerably from region to region and over a range of temporal scales, according to the assessment. The 2014 National Climate Assessment further indicated that marine coastal ecosystems are dynamic and sensitive to small changes in the environment, including warming air and ocean temperatures and sea level rise. Climate change may cause shifts in species' distributions and ranges along coasts that may impact ecosystem character and functioning, according to the assessment. For example, eel grass, one type of submerged vegetation that provides coastal protection from storm surges, may die if water temperatures exceed its maximum tolerance level. Ecosystems along the coast are also vulnerable to climate change because many have been altered by human stresses, and climate change will likely result in further reduction or loss of the services that these ecosystems provide, according to the assessment. The federal government has a limited role in the project-level planning central to helping increase the resiliency of marine coastal ecosystems to climate change because state and local governments are primarily responsible for managing their coastlines. However, the federal government plays a critical role in supporting state government efforts to increase resiliency to climate change, according to the President's State, Local, and Tribal Leaders Task Force on Climate Preparedness and Resilience.
The federal role includes ensuring that federal policies and programs factor in potential risks from climate change, providing financial incentives for enhancing resilience, and providing information and assistance to help states and others better understand and prepare for climate risks. NOAA, as a key federal agency whose mission is, in part, to manage and conserve marine coastal ecosystems, has identified enhancing ecosystem resilience as an important part of its broader goal of building community resilience. NOAA works toward this goal, in part, through its administration of the CZMA. Specifically, NOAA's Office for Coastal Management administers the National Coastal Zone Management Program. To participate, states are to submit comprehensive descriptions of their coastal zone management programs—approved by states' governors—to NOAA for review and approval. As specified in the act, states are to meet the following requirements, among others, to receive NOAA's approval for their state programs: designate coastal zone boundaries that will be subject to state management; define what constitutes permissible land and water use in coastal zones; propose an organizational structure for implementing the state program, including the responsibilities of and relationships among local, state, regional, and interstate agencies; and demonstrate sufficient legal authorities for the management of the coastal zone in accordance with the program, which includes administering land and water use regulations to control development to ensure compliance with the program and resolve conflicts among competing uses in coastal zones. The act provides the states flexibility to design programs that best address states' unique coastal challenges, laws, and regulations, and participating states have taken various approaches to developing and carrying out their programs.
States’ specific activities also vary, with some states focusing on permitting, mitigation, and enforcement activities, and other states focusing on providing technical and financial assistance to local governments and nonprofits for local coastal protection and management projects. If states make changes to their programs, such as changes to their coastal zone boundaries, enforceable policies, or organizational structures, states are to submit those changes to NOAA for review and approval. NOAA officials are responsible for, among other things, approving state programs and any program changes; administering federal funding to the states; and providing technical assistance to states, such as on the development of 5-year coastal zone enhancement assessment and strategy reports that identify states’ priority needs and projects. One primary incentive to encourage states to develop coastal zone management programs and participate in the National Coastal Zone Management Program is states’ eligibility to receive federal grants from NOAA to support the implementation and management of their programs. Specifically, NOAA provides two primary types of National Coastal Zone Management Program grants to participating states: Coastal zone management grants support the administration and management of state programs and require states to match federal contributions. Coastal zone enhancement grants support improvements in state programs in specified enhancement areas. Coastal zone enhancement grants do not require state matching funds and include both formula and competitive grants for projects of special merit. To be eligible for coastal zone enhancement grants, state coastal zone management programs are to develop an assessment of each of nine enhancement areas for their state every 5 years, including those areas that are a priority for the state. 
In conjunction with the assessment, state programs are to also develop a strategy for addressing the high priority needs for program enhancement within one or more enhancement area(s). NOAA reviews and approves this “assessment and strategy” document for each state and, if approved, states are eligible for formula grants and may also apply annually for competitive grants. In fiscal year 2016, a total of almost $50 million was allocated to the 22 participating marine coastal states for these two types of grants. By statute, a maximum of $10 million of the amount appropriated for CZMA management grants may be used for the coastal zone enhancement formula and competitive grants. States received a maximum of approximately $0.9 to $2.7 million per state for the two types of grants under the National Coastal Zone Management Program in fiscal year 2016. In addition, the CZMA authorizes NOAA to provide technical assistance, including by entering into financial agreements, to support the development and implementation of state coastal zone management program enhancements. The CZMA also established the National Estuarine Research Reserve System—a network of 28 coastal estuary reserves (25 of which are located in marine coastal states) managed through a state-federal partnership between NOAA and coastal states. NOAA provides financial assistance, coordination, national guidance for program implementation, and technical assistance, and coastal states are responsible for managing reserve resources and staff, providing matching funds, and implementing programs locally. The reserve system was established on the principle that long-term protection of representative estuaries provides stable platforms for research and education and the application of management practices that will benefit the nation’s estuaries and coasts, according to its 2011-16 strategic plan. 
State coastal zone managers may take various actions to manage marine coastal ecosystems and help increase their resilience to the potential effects of climate change. For example, managers may target land acquisition and conservation activities to areas of higher ground adjacent to coastal wetlands, mangroves, and other natural habitats to allow the habitats to migrate so they do not disappear if sea levels rise. In addition, state coastal zone managers may remove physical barriers, such as concrete structures, that prevent beach migration over time in favor of installing living shorelines along areas with a low impact from waves. Living shorelines are natural habitats, or a combination of natural habitat and manmade elements, put in place along coastal shorelines to reduce shoreline erosion. Management decisions about what actions may be appropriate for a specific area often depend upon detailed information about the current and expected future conditions of the area in question, such as shoreline elevation data, expected rates of sea level rise, and how the ecosystem may be expected to respond to future environmental changes. As we concluded in our 2009 report on climate change and strategic federal planning, new approaches may be needed to match new realities, and old ways of doing business—such as making decisions based on the assumed continuation of past climate conditions—may not work in a world affected by climate change. NOAA is taking a variety of actions under the CZMA to support states’ efforts to make their marine coastal ecosystems more resilient to climate change, and states generally view NOAA’s actions as positive steps. According to NOAA officials, the agency’s actions are largely embedded in its broader efforts to build community resilience, and through these efforts NOAA has emphasized the importance of healthy ecosystems, as there is increasing recognition of the critical role that ecosystems play in supporting resilient communities. 
The CZMA provides a foundation for managing marine coastal ecosystems and partnering with states to work towards the agency’s goals of achieving resilient coastal communities and healthy coastal ecosystems, according to the officials. Within this context, NOAA is taking such actions as providing financial incentives and technical assistance and supporting research through the National Estuarine Research Reserve System to help coastal states understand the potential effects of climate change and plan or implement projects to respond to these effects and enhance marine coastal resilience. We found that state coastal zone managers generally had positive views of the actions NOAA is taking. NOAA has targeted some of the financial incentives it provides to states under the CZMA for activities aimed at addressing the impacts of climate change and enhancing marine coastal resilience. For example, within the National Coastal Zone Management Program for fiscal years 2016 to 2020, NOAA designated coastal hazards—physical threats to life and property, such as sea level rise—as an enhancement area of national importance. In so doing, NOAA indicated that coastal zone enhancement competitive grants would be focused on projects that will further support approved state strategies related to this enhancement area. NOAA also increased the total amount available for these competitive grants from $1 million in fiscal years 2014 and 2015 to $1.5 million for fiscal year 2016. NOAA officials said that many of the applications they received in 2015 and 2016 were for projects that were intended to directly or indirectly address climate risks and enhance the resilience of states’ coastal ecosystems. 
For example, in 2015, NOAA awarded one grant for about $200,000 to a state to undertake a mapping study to identify vulnerable habitats along its coastline and use the results of the study to prioritize those habitats considered most vulnerable to climate change for the state’s restoration and resilience efforts. In addition, starting in fiscal year 2015, NOAA initiated a Regional Coastal Resilience Grant Program to fund projects that focus on regional approaches to helping coastal communities address vulnerabilities to extreme weather events, climate hazards, and changing ocean conditions using resilience strategies. State and local governments, nonprofit organizations, and others are eligible to apply for these grants. NOAA awarded six applicants grants totaling $4.5 million in each of fiscal years 2015 and 2016, according to NOAA officials. Projects eligible for grants may be targeted to a variety of efforts that support resilience, including actions focused on marine coastal ecosystem resilience. For example, in 2016, NOAA awarded one grant for nearly $900,000 to a regional partnership of state governments, nonprofit organizations, and academia involved in a project aimed at mitigating the impacts of weather events on natural resources, among other things. Specifically, the project intends to assess potential coastal storm impacts and increase the implementation of nature-based infrastructure approaches to buffer the effects of coastal storms, among other things. Officials from all 25 state coastal zone management programs said that financial assistance provided by NOAA has been critical for planning projects designed to enhance marine coastal ecosystem resilience and reduce the potential impacts of climate change. Officials from nearly all state coastal zone management programs expressed concern, however, that the amount of financial assistance available is insufficient to address states’ needs in implementing projects. 
For example, officials from 15 of the 25 state programs said that coastal zone management grants have been the primary source of funding from NOAA that they have used for efforts related to ecosystem resilience. However, these grants generally cannot be used to purchase land or for construction projects—activities the states identified as important for improving the resilience of their coastlines. In addition, officials from 20 of the 25 state programs said that they have had to leverage funds from multiple sources, such as state funds, nonprofit organizations, or other federal agencies, to implement projects aimed at enhancing ecosystem resilience. NOAA officials agreed that there is a high demand for funding for these types of projects, noting, for example, that the Regional Coastal Resilience Grant Program received 132 qualified applications requesting a total of $105 million during its first application period in fiscal year 2015, when a total of $4.5 million was available for the grants. Through its administration of the National Coastal Zone Management Program, NOAA has also provided technical assistance to coastal states to help them understand and address the potential impacts of climate change on marine coastal ecosystems. NOAA officials said they work regularly with state coastal zone managers and they look for opportunities to provide assistance to help the states take actions designed to enhance resilience. For instance, through their reviews of states’ 5-year coastal zone enhancement assessment and strategy reports, NOAA officials said they identify information needs and priorities of state coastal zone management programs. For example, NOAA officials said they found that states had a common interest in knowing more about valuing the economic benefits of coastal ecosystems, such as estimating the financial benefit that ecosystems provide for flood control. 
As a result, NOAA officials said they presented information on this topic at a 2016 annual meeting with state coastal zone managers. This type of information can help states develop cost-benefit analyses that may more accurately capture the value of services provided by coastal ecosystems—as opposed to man-made infrastructure such as seawalls and levees—when states are exploring options for coastal projects, according to the officials. NOAA officials also said they share the states' needs and priorities that they identify with other NOAA offices, as well as with external partners such as other federal agencies and nonprofit organizations, to increase the awareness of state program needs and priorities and facilitate coordination and the alignment of resources across programs. NOAA provides a wide range of technical assistance in the form of technical information, guidance, and training related to better understanding and addressing the potential impacts of climate change on marine coastal ecosystems, including: Technical information. NOAA has provided various types of technical information to states to help them understand and incorporate climate information into their state coastal zone management programs. For example, in 2007, in partnership with nonprofit entities, NOAA helped develop a publicly available repository of information—called the Digital Coast—to help state coastal zone managers and others analyze potential climate risks and determine how to address those risks. The Digital Coast provides information and tools on such topics as climate models and statistical analyses that coastal managers can use to incorporate climate information into their management activities. For instance, the Digital Coast contains an interactive tool that allows users to estimate sea level rise and simulate different sea level rise scenarios using elevation and surface data to help identify coastal areas that may be affected by rising sea levels in a changing climate.
Coastal zone managers from one state said they used this tool to determine the vulnerability of their state’s shoreline to potential sea level rise, which has helped them better target financial assistance to areas they identified as most vulnerable. Officials from all 25 state coastal zone management programs said that the technical information NOAA provides has generally helped them incorporate climate information into their state programs. However, officials from 20 of the 25 state programs said they often need more local or site-specific information for planning and implementing projects. The information NOAA provides is mostly at a national or regional scale, given staffing and resource levels, according to NOAA officials. These officials added that data available through the Digital Coast may be used as a starting point for coastal managers to modify and build in more site-specific elements. For example, one state customized a Digital Coast tool for estimating sea level rise to create a map outlining potential coastal flooding areas under various sea level rise scenarios at a site-specific scale within its state. Guidance. NOAA has developed several guidance documents to help state coastal zone managers and others identify specific ways marine coastal ecosystems may be used to help withstand the potential impacts of climate change and enhance the resilience of coastal areas. For example, in 2015, NOAA developed its Guidance for Considering the Use of Living Shorelines to provide information on how to use natural ecosystems, such as oyster reefs or marshes, to reduce coastal erosion caused by intense storms, wave erosion, or sea level rise. In addition, in 2016, NOAA issued a Guide for Considering Climate Change in Coastal Conservation. The guide provides coastal managers a step-by-step approach to considering climate change in coastal conservation planning, with links to relevant tools, information, and other resources. 
Officials from 19 of the 25 state coastal zone management programs said NOAA’s guidance was generally useful, but officials from 14 of these programs said that NOAA’s guidance alone is not sufficient to plan or implement actions that enhance ecosystem resilience and address climate change risks. Officials from one state, for example, said that NOAA’s guidance on using living shorelines is helpful for general purposes such as educating the public on the benefits of this technique, but that the guidance does not cover all shoreline types, such as gravel beaches, exposed rocky shores, or tidal flats. NOAA officials said that using living shorelines as a strategy to enhance coastal resilience is a relatively new technique, and as coastal managers gain more experience with its use on different types of shorelines, NOAA plans to incorporate this information into the assistance it provides to coastal states. NOAA officials also said that the guidance was not intended to be a comprehensive source of technical information on living shoreline techniques, but rather to provide information on key technical and policy considerations in planning and designing shoreline management projects. Training. NOAA has developed instructor-led and online training on topics such as the use of marine coastal ecosystems for improving community resilience and understanding how to use the tools found in the Digital Coast as a way to plan for and take action to address the potential impacts of climate change. For example, from 2014 to 2016, NOAA provided training to over 250 state coastal zone managers and other state practitioners across the country on identifying various types of flooding in coastal areas and methods for mapping potential flooding scenarios. 
Similarly, during the same time period, NOAA officials said they offered 11 3-day climate adaptation workshops across the country that covered a variety of climate-related topics including methods for assessing the vulnerability of coastal areas and options for using ecosystems, such as wetlands, to provide flood protection. Officials from 16 of the 25 state coastal zone management programs told us that they viewed the training provided by NOAA on topics related to climate change and ecosystem resilience as helpful. NOAA officials said they take steps to ensure their training topics meet states’ needs by discussing potential training topics with state coastal zone managers before developing courses and by collecting participant feedback after training courses are provided. For example, NOAA officials said they reach out to all training participants after each course to ask the participants the extent to which they believe they will be able to apply the training to their work. NOAA officials said that the National Estuarine Research Reserve System is important for marine coastal ecosystem resilience, in part, because the reserves serve as “living laboratories” for the study of estuaries and natural and man-made changes, including the impacts of climate change. For example, in 2014, coastal zone managers from one state partnered with the state’s research reserve staff, along with NOAA and others, to study and map marsh migration patterns across the state’s coastline to determine how marsh ecosystems may respond to rising sea levels. The study results were then incorporated into state efforts to help local communities plan for and take action to adapt to the effects of climate change, according to a state coastal zone manager. In the research reserve system’s 2011-16 strategic plan, NOAA and the states identified climate change as one of three strategic areas of focus and investment for the 5-year period. 
Activities identified in the plan include, among others, generating and disseminating periodic analyses of water quality, habitat change, and the effects of climate change and other stressors at local and regional scales. Officials from 19 of the 25 state coastal zone management programs said that the work carried out through their respective research reserves plays an important role in furthering their understanding of how climate change may affect the structure and function of estuarine ecosystems. NOAA officials agreed that this research plays a critical role in supporting states’ efforts to enhance coastal resilience to climate change. The officials said they are updating the reserve system’s strategic plan for 2017, which they expect to complete in January 2017, and plan to continue highlighting climate change and resilience as key issues to focus their research at the reserves. We provided the Department of Commerce a draft of this report for review and comment. NOAA provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Commerce, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix I. Anne-Marie Fennell, (202) 512-3841 or fennella@gao.gov. In addition to the contact named above, Alyssa M. Hundrup (Assistant Director), Michelle Cooper, John Delicath, Cindy Gilbert, Jeanette M. Soares, and Rajneesh Verma made key contributions to this report. 
Also contributing were Michael Hill, Armetha Liles, Christopher Pacheco, Janice Poling, Steve Secrist, and Joseph Dean Thompson.

Coastal areas, home to over half of the U.S. population, are increasingly vulnerable to catastrophic damage from floods and other extreme weather events that are expected to become more common and intense, according to the 2014 Third National Climate Assessment. This assessment further indicated that less acute effects from changes in the climate, including sea level rise, could also have significant long-term impacts on people and property in coastal states. Marine coastal ecosystems, including wetlands and marshes, can play an important role in strengthening coastal communities' resilience to the impacts of climate change, such as protecting eroding shorelines from sea level rise.

Under the CZMA, NOAA is responsible for administering a federal-state partnership that encourages states to balance development with the protection of coastal areas in exchange for federal financial assistance and other incentives.

GAO was asked to review federal efforts to adapt to potential climate change effects on coastal ecosystems. This report provides information about NOAA's actions to support states' efforts to make marine coastal ecosystems more resilient to the impacts of climate change and states' views of those actions. GAO reviewed the CZMA and relevant NOAA policies and guidance; interviewed officials from NOAA headquarters and six regional offices; and conducted structured interviews with officials from the 25 state coastal zone management programs in all 23 marine coastal states. NOAA provided technical comments on this report.

The Department of Commerce's National Oceanic and Atmospheric Administration (NOAA) is taking a variety of actions to support states' efforts to make their marine coastal ecosystems more resilient to climate change, and states generally view NOAA's actions as positive steps.
The Coastal Zone Management Act (CZMA) provides a foundation for managing these ecosystems and partnering with states to work toward the agency's goals of achieving resilient coastal communities and healthy coastal ecosystems, according to NOAA officials. Through the federal-state partnership established under the CZMA, GAO found that NOAA has taken actions, including:

Financial incentives. NOAA has targeted some of its financial incentives for activities aimed at addressing the impacts of climate change. For example, NOAA designated coastal hazards, such as physical threats to life and property from sea level rise, as the focus of CZMA competitive grants. States competed for a total of $1.5 million in grants in fiscal year 2016. Officials from all 25 state programs that GAO interviewed said funding provided by NOAA has been critical for planning projects related to ecosystem resilience, but also expressed concern that the amount of funding is insufficient to address states' needs in implementing projects. For instance, officials from 15 state programs further indicated that coastal zone management grants have been a primary source of funding from NOAA, but that they generally cannot be used to purchase land or for construction projects, activities states identified as important for improving coastal resilience.

Technical assistance. NOAA has provided assistance largely through technical information, guidance, and training to help states better understand and address the potential impacts of climate change on marine coastal ecosystems. For example, NOAA helped develop an interactive digital tool to simulate different sea level rise scenarios. NOAA also developed guidance to help identify ways ecosystems may be used to enhance the resilience of coastal areas, such as using natural shorelines to buffer the effects of erosion. In addition, NOAA developed training on topics such as assessing the vulnerability of coastal areas.
State managers GAO interviewed had generally positive views of the technical assistance provided by NOAA. For example, officials from all 25 state programs said that the technical information NOAA provides has generally helped them incorporate climate information into their state programs.

National Estuarine Research Reserve System. NOAA, in partnership with coastal states, manages 25 marine-based estuary reserves, in part, to study natural and man-made changes to estuaries (bodies of water usually found where rivers meet the sea), including the potential impacts of climate change. For example, in 2014, one state used its research reserve to study and map marsh migration patterns across the state's coastline to determine how these ecosystems may respond to rising sea levels. Officials from 19 of the 25 state programs said that work carried out through the research reserves plays an important role in furthering their understanding of how climate change may affect the structure and function of estuarine ecosystems.